
The Clark-Wilson Information Integrity Model


Introduction

Clark and Wilson, in their landmark 1987 paper on computer support for information integrity (Clark, 1987), argued for a model that pays primary attention to data integrity.

The Clark-Wilson model also addresses the relationship between the system and information accepted from the outside world by insisting on auditing of transactions. Auditing will not by itself prevent security or integrity violations, but it can detect breaches after the fact.

The issues raised by Clark-Wilson have long been part of standard operating practice in information systems. If NIST invests a modest amount of resources (though greater than what it has been able to do in the past), much can be accomplished over the near term.

Wilson and Clark were among the many who had observed by 1987 that academic work on models for access control emphasized data’s confidentiality rather than its integrity (i.e., the research papers exhibited greater concern for unauthorized observation than for unauthorized modification). Accordingly, they attempted to redress what they saw as a military view that differed markedly from a commercial one. In fact, however, what they considered a military view was not pervasive in the military.

The Clark-Wilson model consists of subject/program/object triples and rules about data, application programs, and triples. The following sections discuss the triples and rules in more detail.

Triples

All formal access control models that predate the Clark-Wilson model treat an ordered subject/object pair, that is, a user and an item or collection of data, together with a fixed relationship (e.g., read or write) between the two. Clark and Wilson recognized that the relationship can be implemented by an arbitrary program. Accordingly, they use an ordered subject/program/object triple.

They use the term "transformational procedure" for program to make it clear that the program has integrity-relevance because it modifies or transforms data according to a rule or procedure. Data that transformational procedures modify are called constrained data items: only transformational procedures may modify them, and integrity verification procedures exercise constraints on them to ensure that they have certain properties, of which consistency and conformance to the real world are two of the most significant. Unconstrained data items are all other data, chiefly the keyed input to transformational procedures.

Once subjects have been constrained so that they can gain access to objects only through specified transformational procedures, the transformational procedures can be embedded with whatever logic is needed to effect limitation of privilege and separation of duties. The transformational procedures can themselves control access of subjects to objects at a level of granularity finer than that available to the system. What is more, they can exercise finer controls (e.g., reasonableness and consistency checks on unconstrained data items) for such purposes as double-entry bookkeeping, thus making sure that whatever is subtracted from one account is added to another so that assets are conserved in transactions.
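The double-entry bookkeeping point above can be sketched in a few lines. This is an illustrative sketch, not part of the Clark-Wilson paper: the account names, the `tp_transfer` transformational procedure, and the `ivp_assets_conserved` verification procedure are all invented for the example.

```python
# Hypothetical sketch of a transformational procedure (TP) that is the
# only code path allowed to modify two constrained data items (CDIs),
# plus an integrity verification procedure (IVP) that certifies the
# double-entry invariant: whatever leaves one account enters another.

accounts = {"cash": 1000, "inventory": 500}   # the CDIs
TOTAL_ASSETS = sum(accounts.values())          # invariant the IVP checks

def tp_transfer(src, dst, amount):
    """TP: moves funds between CDIs, rejecting ill-formed transactions."""
    if amount <= 0 or accounts[src] < amount:
        raise ValueError("ill-formed transaction rejected")
    accounts[src] -= amount
    accounts[dst] += amount                    # double entry: debit = credit

def ivp_assets_conserved():
    """IVP: certifies the CDIs are in a valid state (assets conserved)."""
    return sum(accounts.values()) == TOTAL_ASSETS

tp_transfer("cash", "inventory", 200)
assert ivp_assets_conserved()
```

Because the TP performs both sides of the entry inside one operation, a caller can never subtract from one account without crediting the other, which is exactly the kind of consistency check a TP can enforce at a finer granularity than the system itself.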

Rules

To ensure that integrity is attained and preserved, Clark and Wilson assert, certain integrity-monitoring and integrity-preserving rules are needed. Integrity-monitoring rules are called certification rules, and integrity-preserving rules are called enforcement rules.

These certification rules address the following notions:

•  Constrained data items are consistent.
•  Transformational procedures act validly.
•  Duties are separated.
•  All accesses are logged.
•  Unconstrained data items are validated.

The enforcement rules specify how the integrity of constrained data items and triples must be maintained and require that subjects’ identities be authenticated, that triples be carefully managed, and that transformational procedures be executed serially and not in parallel.
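The three enforcement duties just listed (authenticate subjects, manage the triples, run TPs serially) can be sketched as a single mediation function. All names here are assumptions for illustration; the Clark-Wilson paper specifies the rules, not an implementation.

```python
import threading

# Illustrative enforcement sketch: access is mediated by certified
# (subject, TP, CDI) triples, subjects must be authenticated, and a
# single lock forces TPs to execute serially, not in parallel.

TRIPLES = {("alice", "post_invoice", "ledger")}   # certified relations
_tp_lock = threading.Lock()                        # serial TP execution

def run_tp(subject, authenticated, tp_name, cdi, tp_fn):
    """Run tp_fn only if the enforcement rules are satisfied."""
    if not authenticated:                          # subjects are authenticated
        raise PermissionError("subject not authenticated")
    if (subject, tp_name, cdi) not in TRIPLES:     # triple must be certified
        raise PermissionError("access triple not authorized")
    with _tp_lock:                                 # TPs run one at a time
        return tp_fn()

result = run_tp("alice", True, "post_invoice", "ledger", lambda: "posted")
```

The lock stands in for whatever serialization mechanism a real system would use; the essential point is that every TP invocation passes through one mediation point that checks the triple before the program runs.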

Summary

Of all the models discussed, only Clark-Wilson contains elements that relate to the functions that characterize leading access control products. Unified access control generalizes notions of access rules and access types to permit description of a wide variety of access control policies.



Old News ;-)

The Clark-Wilson Security Model by Sonya Q. Blake

May 17, 2000

Introduction

Information systems security concerns itself with three essential properties of information: confidentiality, integrity, and availability. These three critical characteristics of information are major concerns throughout the commercial and military sectors. Historically, confidentiality has received the most attention, probably because of its importance in the military, where the main objective is to prevent disclosure of information. Unlike military security systems, the main concern of commercial security systems is to ensure that the integrity of data is protected from improper modifications and inappropriate actions performed by unauthorized users. Confidentiality is equally important within the commercial environment; however, David D. Clark and David R. Wilson argue that the integrity of the information is more important than its confidentiality in most commercial systems. Since much of the attention in the security arena has been devoted to developing sophisticated models (e.g., the Bell-LaPadula model) and mechanisms for confidentiality, capabilities to provide confidentiality in information systems are considerably more advanced than those providing integrity. Accordingly, recent efforts by the National Institute of Standards and Technology (NIST) are focused on the integrity issue. In this paper, we will explore the nature and scope of the Clark-Wilson (CW) model.

Security Terminology

As a background to the discussion of the CW model, a brief description of specific security terms used within the security world is in order. The terminology presented below came from the review of written material from various sources and is merely summarized here.

Integrity

There has been a large amount of debate over the meaning of integrity in the information security community. For the purposes of this paper, data integrity is defined as the quality, correctness, authenticity, and accuracy of information stored within an information system (Summers, p. 141). Systems integrity is the successful and correct operation of information resources. Together, these definitions say that integrity means information is not modified in unauthorized ways, that it is internally consistent and consistent with the real-world objects it represents, and that the system performs correctly (Summers, p. 152).

Security Policy

The goal of information systems is to control or manage the access of subjects (users, processes) to objects (data, programs). This control is governed by a set of rules and objectives called a security policy. Security policies are governing principles adopted by organizations [NCSC 1988]. They capture the security requirements of an organization, specify what security properties the system must provide and they describe steps an organization must take to achieve security.

In the ANSA enterprise projection, a policy consists of an objective, missions and constraints. Each part of the policy has a security aspect [16].

The objective defines what future state is desired for security.

The security aspect of the missions will define the activities associated with security such as granting or revoking access rights.

The security aspect of the constraints defines the rules that must be followed to preserve security when carrying out the activities defined by the missions.

Security Models

Security models are often regarded as a formal presentation of the security policy enforced by the system [NCSC 1988] and are used to test a policy for completeness and consistency [17]. They describe what mechanisms are necessary to implement a security policy.

Security Mechanisms

A security mechanism enforces or implements some component of the security policy.

Principles of Integrity

Security principles are simply a collection of generally accepted standards of good practice that are thought to further the enforcement of security policies. Every organization is different, and the interpretation and adoption of principles will depend on specific circumstances. There are several principles for achieving and maintaining information integrity, but we are only going to focus on the two basic principles that Clark and Wilson suggest are the most important: the well-formed transaction and separation of duty, which are abstracted from the Clark and Wilson papers [8].

The well-formed transaction preserves the external consistency of data by ensuring that data in the system reflects the real-world data it represents.

A Security Model for Integrity

Integrity models are used to describe what needs to be done to enforce the information integrity policies. There are three goals of integrity (Summers, p. 142).

To accomplish these goals, a collection of security services that embody the properties needed for integrity as well as a framework for composing them is needed. The needed security properties for integrity include integrity, access control, auditing, and accountability.

The Clark-Wilson (CW) model is an application-level integrity model which attempts to ensure the integrity properties of commercial data and provides a framework for evaluating security in commercial application systems. It was published in 1987 and updated in 1989 by David D. Clark and David R. Wilson (a computer scientist and an accountant). (Anderson, p. 188)

Clark and Wilson partitioned all data in a system into two categories: constrained data items (CDIs), the data items for which integrity must be ensured, and unconstrained data items (UDIs). CDIs are the objects to which the integrity model is applied, and UDIs are objects not covered by the integrity policy (e.g., information typed by the user at the keyboard). Two kinds of procedures are then applied to these data items for protection. The first, the integrity verification procedure (IVP), verifies that the data items are in a valid state (i.e., they are what the users or owners believe them to be because they have not been changed). The second is the transformation procedure (TP), or well-formed transaction, which changes the data items from one valid state to another. If only a transformation procedure is able to change data items, the integrity of the data is maintained. Integrity enforcement systems usually require that all transformation procedures be logged, to provide an audit trail of data item changes [NCSC 1991]. To make this concrete, consider a real-world example:
  1. Purchasing clerk creates an order for a supply, sending copies to the supplier and the receiving department.
  2. Upon receiving the goods, a receiving clerk checks the delivery and, if all is well, signs a delivery form. Delivery form and original order go to the accounting dept.
  3. Supplier sends an invoice to the accounting dept. Accounting clerk compares the invoice with original order and delivery form and issues a check to the supplier.

This example is presented in terms of constrained data items which are processed by transformation procedures. Data items are changed only by transformation procedures, thereby maintaining their integrity. The users are the purchasing clerk, receiving clerk, supplier, and accounting clerk. The transformation procedures (TPs) are create order, send order, create delivery form, send delivery form, sign delivery form, create invoice, send invoice, compare invoice to order, and so on. The constrained data items are the order, delivery form, invoice, and check. Users may invoke only certain transformation procedures on a pre-specified set of CDIs, as their duties require, which enforces the notion of separation of duty.
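The purchasing workflow above can be sketched as a small mediation layer. Everything here is illustrative: the user names, the `ALLOWED` table, and the `invoke_tp` helper are invented for the example, not taken from the CW paper.

```python
# Sketch of the purchasing example: CDIs are the documents, TPs are the
# only operations that create or change them, every TP invocation is
# appended to an audit log, and the ALLOWED table enforces separation
# of duty (the clerk who orders cannot also sign the delivery).

audit_log = []        # append-only record of TP applications
cdis = {}             # constrained data items: order, delivery form, ...

ALLOWED = {
    ("purchasing_clerk", "create_order"),
    ("receiving_clerk",  "sign_delivery"),
    ("accounting_clerk", "issue_check"),
}

def invoke_tp(user, tp, **data):
    """Apply a TP on behalf of a user, enforcing separation of duty."""
    if (user, tp) not in ALLOWED:            # separation of duty check
        raise PermissionError(f"{user} may not run {tp}")
    cdis[tp] = data                          # the TP transforms the CDIs
    audit_log.append((user, tp, data))       # log enough to reconstruct it

invoke_tp("purchasing_clerk", "create_order", item="widgets", qty=10)
invoke_tp("receiving_clerk", "sign_delivery", order="create_order")
invoke_tp("accounting_clerk", "issue_check", payee="supplier")
# invoke_tp("purchasing_clerk", "sign_delivery")  # raises PermissionError
```

The audit log corresponds to the logging requirement noted above: each entry records who ran which TP with which inputs, so the sequence of data-item changes can be reconstructed later.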

To ensure that integrity is achieved and preserved, Clark and Wilson declare that certain integrity-monitoring rules (certification rules) and integrity-preserving rules (enforcement rules) are needed [6]. The certification rules are enforced by the administrator, and the enforcement rules are guaranteed by the system.

In Amoroso's formulation [1], the nine rules of the CW model are:

Certification Rules

Enforcement Rules

The CW model differs from the other models in that subjects gain access to objects only through programs, rather than directly. The access triple is at the heart of the CW model (Summers, p. 145); it prevents unauthorized users from modifying data or programs.

From what is presented above, we see that the CW model shows that the rules seek to enforce the needed security properties for integrity, which are described below: [18]

Integrity
An assurance that CDIs can only be modified in constrained ways to produce valid CDIs. This property is ensured by the rules: C1, C2, C5, E1 and E4.
Access control
The ability to control access to resources. This is supported by the rules: C3, E2 and E3.
Auditing
The ability to ascertain the changes made to CDIs and ensure that the system is in a valid state. This is ensured by the rules C1 and C4.
Accountability
The ability to uniquely associate users with their actions. This requires authentication of such users, which is enforced by rule E3.

Overview of Windows NT Security Model

Before we discuss an interpretation of the CW model, an overview of the Windows NT security model is in order.

Windows NT was built to incorporate networking, security and audit reporting as services within the operating system. The Windows NT Security Model was designed to monitor and regulate access to objects and it maintains security data for each user, group, and object.

Described in this section are the basic components of the Windows NT security model (Ivens & Hallberg, p. 40).

The logon process accepts logon requests from users. It is the process that accepts the user's initial interactive logon and password, authenticates it, and grants entry into the system.

The Local Security Authority (LSA) is the heart of the security subsystem. It verifies the logon information against the SAM database and ensures that the user has permission to access the system. It generates access tokens, administers the local security policy defined for the system, and is responsible for auditing and logging security events.

The Security Account Manager (SAM) is the database that contains all user and group account information and validates users.

The Security Reference Monitor (SRM) provides real-time services to validate every object access and action made by a user, ensuring that the access or action is authorized. It enforces the access-validation and audit-generation policy defined by the Local Security Authority.

Resources, such as processes, files, shares, and printers are represented in Windows NT as objects. Users never access these objects directly, but Windows NT acts as a proxy to these objects, controlling access to and usage of these objects. A subject in Windows NT is the combination of the user's access token plus the program acting on the user's behalf. Windows NT uses subjects to track and manage permission for the programs each user runs (Ivens & Hallberg, p. 42).

The most basic identifiers in Windows NT are Security Identifiers (SIDs): internal numbers used within a Windows NT system to describe a user or group uniquely among other Windows NT systems. Owners, users, and groups are assigned permissions to an object and are identified by their SIDs. The security information for an object is encoded in a special data structure called the Security Descriptor (SD). The SD for an object contains the following components (Minasi, p. 1222):

Owner security ID. This is the SID of the user or group who owns the object.

Group security ID. This is a primary group associated with the object. It is optional and is not used by the Windows NT file-system security; it is included to simplify the implementation of a POSIX-compliant file system.

Discretionary access control list (DACL). It identifies the user and group SIDs that are to be granted or denied access to the object.

System access control list (SACL). This is what controls the auditing messages that the system will generate.

Each user of Windows NT has a unique security ID (SID). When a user logs on, Windows NT creates a security access token. The token contains information about the user account, which includes a security ID for the user, as well as other security IDs for the groups to which the user belongs, and permissions assigned to the user. The security access token created for the logged-in user is attached to all processes that are started by the user. When the process tries to access a particular object, the SRM checks whether any of the SIDs in the security access token attached to the process match an entry in the access-control list (ACL) attached to that object. The ACL contains access-control entries (ACEs) for each user or group authorized to access the object.
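The token-versus-ACL comparison described above can be sketched as follows. This is a simplified illustration, not the real Win32 data structures: the SID strings, the ACE tuples, and the `access_check` function are assumptions made for the example.

```python
# Simplified sketch of the NT access check: the reference monitor walks
# the object's ACL in order and compares each access-control entry (ACE)
# against the SIDs in the caller's access token.

token_sids = {"S-1-5-21-1001", "S-1-5-32-545"}   # user SID + group SIDs

# Each ACE: (SID it applies to, "allow" or "deny", permissions covered)
object_acl = [
    ("S-1-5-32-545", "deny",  {"write"}),
    ("S-1-5-32-545", "allow", {"read"}),
]

def access_check(requested):
    """First matching ACE wins, mirroring NT's ordered-ACL evaluation."""
    for sid, kind, perms in object_acl:
        if sid in token_sids and requested in perms:
            return kind == "allow"
    return False                               # no matching ACE: deny

assert access_check("read") is True
assert access_check("write") is False
```

Note the default-deny at the end: a request that matches no ACE is refused, which is the conservative behavior the text's access-validation description implies.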

Windows NT includes an auditing mechanism that can be used to audit successful and unsuccessful attempts at operations on files and directories (Ivens & Hallberg, p. 44). This mechanism enables you to monitor events related to system security, identify any security breaches, and determine the extent and location of any damage.

Windows NT Interpretation of the Clark-Wilson model

An interpretation of the CW model in Windows NT is discussed in the following section [9].

Rule 1. The system will have an IVP for validating the integrity of any CDI.
In Windows NT, the Local Security Authority (LSA) checks the security information in the subject's access token against the security information in the object's security descriptor.

Rule 2. The application of a TP to any CDI must maintain the integrity of that CDI.
In Windows NT, most subjects cannot change the attributes of objects, but some subjects, such as the administrator, have this privilege. Because this is limited to a few special users, this rule is not applied to Windows NT strictly.

Rule 3. A CDI can only be changed by a TP.
As mentioned above, some special users can change the attributes of objects, and no other methods can be applied to change objects.

Rule 4. Subjects can only initiate certain TPs on certain CDIs.
In Windows NT, the subject's access token includes what kinds of operations are permitted. Only when the information in the access token is consistent with the information in the object's security descriptor is the operation allowed.

Rule 5. CW-triples must enforce some appropriate separation of duty policy on subjects.
In Windows NT, the administrator can do anything, so this rule is not applied.

Rule 6. Certain special TPs on UDIs can produce CDIs as output.
In Windows NT, users can change an object from a without-ACL state to a with-ACL state. Generally, this operation is performed by the administrator.

Rule 7. Each TP application must cause information sufficient to reconstruct the application to be written to a special append-only CDI.
In Windows NT, audit services can collect information about how the system is being used.

Rule 8. The system must authenticate subjects attempting to initiate a TP.
In Windows NT, every user has a SID, and any process run on behalf of that user carries a copy of the same SID. In this way, Windows NT can authenticate subjects attempting to initiate a TP.

Rule 9. The system must only permit special subjects (i.e., security officers) to make any authorization-related lists.
In Windows NT, only the administrator can perform and view certain high-security events.

Based on the information presented above, it is easy to see that the security mechanisms of Windows NT satisfy the axioms of the CW model and that the CW model could be implemented with security mechanisms of Windows NT.

Summary

Integrity models may be implemented in several ways to satisfy the integrity requirements specified in a security policy. Model implementations describe how specific mechanisms can be employed in a system to ensure that the goals of the security policy are met. The Clark-Wilson model emphasizes how integrity is key to the commercial environment and it seeks to develop better security systems for that environment.

In general, it is important to recognize that by itself, a security model is not a panacea for information security issues. Security models have theoretical limits and do not establish security. So, why use models? The fact is that security models are generally used to evaluate existing secure system designs rather than as a guide to developing secure systems. They are an effective method for verifying security. Security models are important and necessary, but focusing and relying only on a model can lead to a false sense of security.

Confidentiality, integrity, and availability are very important and closely related aspects of security. To achieve any of these goals, the objective is to strike a balance between applying generally accepted models, incorporating the latest security technologies and products, applying security patches, managing risk, adhering to industry standards and guidelines, and implementing sound management principles to achieve secure systems. It is an ongoing process.

References

Published Works:

[1] Amoroso, Edward. Fundamentals of Computer Security Technology. Prentice Hall, 1994.

[2] Summers, C. Rita. Computer Security: Threats and Safeguards. New York: McGraw Hill, 1997.

[3] Anderson, Ross, Security Engineering: A Guide to Building Dependable Distributed Systems. New York: Wiley Computer Publishing, 2001.

[4] Minasi, Mark. Windows NT Server 4, Sixth Edition. Alameda: SYBEX, Network Press, 1999.

[5] Ivens, Kathy and Hallberg, Bruce. Inside Windows NT Workstation 4. Indianapolis: New Riders Publishing, 1996.

[6] Krause, Micki, and Tipton F. Harold. Handbook of Information Security Management. CRC Press LLC, 1998.

Research Articles, White Papers and Workshops:

[7] National Computer Security Center Report 79–91."Integrity in Automated Information Systems." September 1991.
URL: http://www.radium.ncsc.mil/tpep/library/rainbow/C-TR-79-91.pdf

[8] David D. Clark and David R. Wilson "A Comparison of Commercial and Military Computer Security Policies." IEEE Symposium of Security and Privacy, 1987, pages 184-194.

[9] Biba, K.J. "Integrity Considerations for Secure Computer Systems." Bedford, The MITRE Corporation, 1977.

[10] Xiao, Lei. "Clark-Wilson in Unix or NT. " Assignment 3. 9 January 1999.
URL: http://www.tml.hut.fi/Opinnot/Tik-110.401/1999/Tehtavat/answer3.html

[11] Goguen, J.A. and J. Meseguer. Security Policies and Security Models. Proceedings of the 1982 Berkeley Conference on Computer Security, 11- 20. Berkeley, CA, 1982.

[12] Roskos, J.E., Welke, S.R., Boone, J., and Mayfield, T., "A Taxonomy of Integrity Models, Implementations and Mechanisms," Proceedings of the 13th National Computer Security Conference, pp. 541-551, October 1990.
URL: http://www.radium.ncsc.mil/tpep/library/rainbow/NCSC-TG-010.pdf

[13] Welch, Ian "Reflective Enforcement of the Clark-Wilson Integrity Model", 2nd Workshop on Distributed Object Security. 2 November 1999.
URL: http://www.cs.ncl.ac.uk/research/dependability/reflection/downloads/dos99.pdf

[14] SANS Related GSEC Practical

[15] Olovsson, Tomas. "A Structured Approach to Computer Security." Technical Report No. 122, 1992. URL: http://www.ce.chalmers.se/staff/ulfl/pubs/tr122to.pdf

[16] Bull, John and Rees, Owen. "A Framework for Federating Secure Systems." ANSA Phase III, 1994. URL: http://www.ansa.co.uk/ANSATech/93/Primary/10060002.pdf

[17] Pfleeger. "Trusted Operating System Design." ECE-C352 Lecture 8. (1999) URL: http://www.ece.drexel.edu/ECE/ECE-C352/lectures/lecture8.pdf

[18] Lee, E. Stewart (Director). "Essays about Computer Security." Centre for Communications Systems Research, Cambridge, 1999. URL: http://www.cl.cam.ac.uk/~mgk25/lee-essays.pdf

[19] MSDN Online Library, Microsoft Corporation, 1999

Clark-Wilson in Unix or NT by Lei Xiao

As a network operating system, security is an essential part of Windows NT. In this
essay, I will discuss its security policy first, then introduce the Clark-Wilson
Integrity Model, and finally implement the CW model with the Windows NT security
model.

1. Overview of Windows NT security model

The Windows NT security model consists of the following parts:
  + Logon process, which accepts logon requests from users. These include the
    initial interactive logon, which displays the initial logon dialog box to
    the user, and the remote logon process, which allows access by remote users
    to a Windows NT server process.

  + Local Security Authority, which ensures that the user has permission to
    access the system. This part is the center of the Windows NT security
    subsystem. It generates access tokens, manages the local security policy,
    and logs the audit messages generated by the Security Reference Monitor.
  
  + Security Account Manager (SAM), which maintains the user accounts database.
    This database contains information for all user and group accounts. SAM
    provides user validation services, which are used by the Local Security
    Authority.
  
  + Security Reference Monitor, which checks to see if the user has permission
    to access an object and perform whatever action the user is attempting.
    This part enforces the access validation and audit generation policy
    defined by the Local Security Authority. It provides services to both
    kernel and user mode to ensure that users and processes attempting access
    to an object have the necessary permissions.
  
  Together, these components are known as the security subsystem, which is
  designed for C2-level security as defined by the U.S. Department of Defense.
  
2. Overview of the Clark-Wilson Model

  The CW model is expressed in terms of a collection of rules on the operation
  and maintenance of a given computer environment or application. These rules
  are designed to provide a degree of integrity protection for some identified
  subset of data in that environment or application. The critical notion in 
  the CW model is that these rules are expressed using so-called well-formed
  transactions in which a subject initiates a sequence of actions that is 
  carried out to completion in a controlled and predictable manner.
  The rules for CW model are:
  
  Rule 1: IVPs must be available on the system for validating the integrity
          of any CDI.
  Rule 2: Application of a TP to any CDI must maintain the integrity of that
          CDI.
  Rule 3: A CDI can only be changed by a TP.
  Rule 4: Subjects can only initiate certain TPs on certain CDIs.
  Rule 5: CW-triples must enforce some appropriate separation of duty policy 
          on subjects.
  Rule 6: Certain special TPs on UDIs can produce CDIs as output.
  Rule 7: Each TP application must cause information sufficient to reconstruct
          the application to be written to a special append-only CDI.
  Rule 8: The system must authenticate subjects attempting to initiate a TP.
  Rule 9: The system must only permit special subjects (i.e., security officers)
          to make any authorization-related lists.
          
3. Implementation of CW model with Windows NT security model

   A subject in Windows NT is the combination of the user's access token plus
   the program acting on the user's behalf. Windows NT uses subjects to track
   and manage permissions for the programs each user runs.

   Objects in Windows NT include files, databases, printers, processes, named
   pipes, and so on.
   
   The key objective of the Windows NT security model is to monitor and control
   who accesses which objects. It controls not only which subjects can access
   which objects but also how they may be accessed.
   
   Security Information for subjects
   
   Each user of Windows NT has a unique security ID (SID). When a user logs on,
   Windows NT creates a security access token. This includes a security ID for
   the user, other security IDs for the groups to which the user belongs, plus
   other information such as the user's name. In addition, every process that
   runs on behalf of this user will have a copy of his or her access token.

   Security Information for objects
   The security attributes of an object are described by a security descriptor.
   It includes four parts:

   + An owner security ID, which indicates the user or group who owns the object.
   + A group security ID, which is used only by the POSIX subsystem.
   + A discretionary access control list (DACL), which identifies which users and
     groups are granted or denied which access permissions.
   + A system ACL (SACL), which controls which auditing messages the system will
     generate.
   
  Windows NT implements its security methods through access validation and auditing.
  
  When a subject tries to access an object, Windows NT compares the security
  information in the subject's access token with the security information in the
  object's security descriptor. If they match, the subject is allowed to access
  the object; otherwise access is denied.
     
  Windows NT includes auditing features you can use to collect information about
  how your system is being used. These features also allow you to monitor events
  related to system security, to identify any security breaches, and to determine
  the extent and location of any damage.
 
 Now let us see how the CW model is implemented in NT.
 
 CDI --- objects with ACL
 UDI --- objects without ACL
 TP  --- changes to objects made by subjects
 
  Rule 1: IVPs must be available on the system for validating the integrity
          of any CDI.
          In Windows NT, the Local Security Authority checks the security
          information in the subject's access token against the security
          information in the object's security descriptor.
  Rule 2: Application of a TP to any CDI must maintain the integrity of that
          CDI.
          In Windows NT, most subjects cannot change the attributes of
          objects, but some subjects, such as the administrator, have this
          privilege. Because this ability is limited to a few special users,
          the rule is not strictly enforced by Windows NT.
  Rule 3: A CDI can only be changed by a TP.
          As mentioned above, only certain special users can change the
          attributes of objects, and no other method can be used to change
          objects.
  Rule 4: Subjects can only initiate certain TPs on certain CDIs.
          In Windows NT, the subject's access token specifies what kinds of
          operations are permitted. An operation is allowed only when the
          information in the access token is consistent with the information
          in the object's security descriptor.
  Rule 5: CW-triples must enforce some appropriate separation of duty policy
          on subjects.
          In Windows NT, the administrator can do anything, so this rule is
          not enforced.
  Rule 6: Certain special TPs on UDIs can produce CDIs as output.
          In Windows NT, users can move an object from the state without an
          ACL to the state with an ACL. Generally, this operation is
          performed by the administrator.
  Rule 7: Each TP application must cause information sufficient to reconstruct
          the application to be written to a special append-only CDI.
          In Windows NT, audit services can collect information about how the
          system is being used.
  Rule 8: The system must authenticate subjects attempting to initiate a TP.
          In Windows NT, every user has an SID, and any process running on
          behalf of that user carries the same SID. In this way, Windows NT
          can authenticate subjects attempting to initiate a TP.
  Rule 9: The system must only permit special subjects (i.e., security officers)
          to make any authorization-related lists.
          In Windows NT, only the administrator can perform and view certain
          high-security events.
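As a rough illustration of the checks described in Rules 1 and 4, the access decision can be modelled as comparing a subject's access token with an object's security descriptor. This is a toy sketch, not the actual Win32 API; all names and data structures here are invented for illustration.

```python
# Toy model of the access check in Rules 1 and 4: a "local security
# authority" compares the subject's access token with the object's
# security descriptor. Names and structures are hypothetical, not the
# real Windows API.

def access_allowed(token, descriptor, requested):
    """Grant only operations listed for this SID in the object's ACL."""
    if descriptor is None:                 # no ACL: a UDI in CW terms
        return False
    allowed = descriptor.get(token["sid"], set())
    return requested <= allowed            # all requested ops must be permitted

token = {"sid": "S-1-5-21-1001"}                      # hypothetical SID
descriptor = {"S-1-5-21-1001": {"read", "write"}}     # object's ACL

print(access_allowed(token, descriptor, {"read"}))            # True
print(access_allowed(token, descriptor, {"read", "delete"}))  # False
print(access_allowed(token, None, {"read"}))                  # False (UDI)
```

The point of the sketch is the shape of the decision: the token and the descriptor are both consulted, and the request succeeds only when they agree.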
          
          
4. Conclusion
   From what is presented above, we can see that the CW model can largely be
   implemented with the Windows NT security model.
 
 References
[1] Edward Amoroso, Fundamentals of Computer Security Technology,
    Prentice Hall, 1994, ISBN 0-13-305541-8.
[2] MSDN Online Library, Microsoft Corporation, 1999.

Optimistic Security: A New Access Control Paradigm by Dean Povey

Cooperative Research Centre for Enterprise Distributed Systems

povey@dstc.edu.au

Abstract

Despite the best efforts of security researchers, sometimes the static nature of authorisation can cause unexpected risks for users working in a dynamically changing environment. Disasters, medical emergencies or time-critical events can all lead to situations where the ability to relax normal access rules can become critically important.

This paper presents an optimistic access control scheme where enforcement of rules is retrospective. The system administrator is relied on to ensure that the system is not misused, and compensating transactions are used to ensure that the system integrity can be recovered in the case of a breach. It is argued that providing an optimistic scheme alongside a traditional access control mechanism can provide a useful means for users to exceed their normal privileges on the rare occasion that the situation warrants it.

The idea of a partially-formed transaction is introduced to show how accesses in an optimistic system might be constrained. This model is formally described and related to the Clark-Wilson integrity model.

1 Introduction

Bob is a nurse at a small rural hospital which has been physically isolated due to a heavy storm. Communications are down, and the local doctor is unable to be located. Bob has to attend to a life-threatening emergency, for which he needs immediate access to a patient's medical records. However, Bob is not authorised to access the information, putting the patient's life at risk.

Regardless of how flexible or expressive access control mechanisms become, there will always be a gap between what organisations need, and what mechanisms can implement. One reason for this is that access control systems are unaware of dynamically changing situations that are external to the system.

Alice is working late one night, when she notices a rogue process is going crazy and forking off multiple children. Alice knows that if she doesn't kill the process quickly, it is going to eventually kill the machine, bringing down a mission-critical system in the middle of its nightly batch scripts. However, Alice doesn't have superuser privileges on the box, so all she can do is wait and watch as the system begins to fall over.

* The work reported in this paper has been funded in part by the Cooperative Research Centre Program through the Department of Industry, Science and Tourism of the Commonwealth Government of Australia.

Whether it's a disaster, medical emergency, or just a need to meet a critical deadline, sometimes the necessary authorisation depends on situations which are unforeseen, and which a system cannot be easily made aware of. An organisation that wishes to define a security policy which differentiates roles depending on circumstances will find itself unable to implement the policy using traditional schemes which enforce least privilege.

This paper investigates an optimistic access control scheme as a new paradigm for constraining access in such situations. Optimistic access control takes the approach of assuming that most accesses will be legitimate, and relies on controls external to the system to ensure that the organisation's security policy is maintained. The scheme allows users to exceed their normal privileges in a way which is constrained so that it is securely audited and may be rolled back. Ways of minimising risks in an optimistic access control scheme are discussed, and a formal model is given based on the Clark-Wilson Integrity Model[2]. Finally, the paper concludes by proposing some novel applications of the optimistic security scheme.

2 Optimistic Security

2.1 Overview

The basic approach of an optimistic security system is to assume that any access is legitimate and should be granted. Its goals can perhaps best be summarised by the principle suggested by Bob Blakeley in his 1996 NSPW paper, The Emperor's Old Armor[1]:

Make the users ask forgiveness not permission.

In an optimistic system, enforcement of the security policy is retrospective, and relies on administrators to detect unreasonable access and take steps to compensate for the action. Such steps might include:

- Undoing illegitimate modifications
- Taking punitive action (e.g. firing, or prosecuting individuals)
- Removing privileges.

These measures should act as both a deterrent and a means to recover the system to a valid state. Such a system assumes that the risk of failure and the cost of recovery is low compared to the cost of not granting access in a given situation. Optimistic security is not suited to financial or trading systems where the risk of fraud is high, but may be useful in situations such as the protection of private medical information, where emergency access may save someone's life, or in a time-critical system where the persons with the necessary privileges to perform some task may be unavailable.

Optimistic measures may exist alongside a traditional pessimistic scheme, with a mechanism to exceed the current privilege set under certain circumstances. This is similar to existing schemes such as the UNIX setuid system call which allows a user to gain superuser access for certain trusted programs. However, the idea of optimistic security is to provide the ability to roll back from such actions, and to ensure that they are securely audited. The traditional authorisation set will be allocated in line with good security design principles, such as least privilege and separation of duty [6], and will be reserved for everyday use of the system. The organisation may then define either an inclusive or exclusive list of authorisations for use in exceptional circumstances (which may also be explicitly defined). In general the authorisations in the exceptional set will be much less restrictive than the everyday set, and only those actions which may cause catastrophic or irrecoverable damage will be excluded.

2.2 Requirements for Optimistic Security

Providing an optimistic security system requires ways to ensure that the likelihood and consequences of a user maliciously or inadvertently misusing the system are minimised. To meet this objective the following controls should be considered:

2.2.1 Constrained entry points

Exceeding privilege should be a rarity, rather than a norm. If users need to exceed their privileges on a regular basis, then the organisation should consider whether the list of privileges given to the user is too restrictive, or whether the access control mechanism is too inflexible to support complex policies. It should not be possible for a user to accidentally invoke higher privileges; doing so should require an explicit, conscious decision. Users should be warned that the mechanism must only be used after careful consideration, and that misuse will have negative (external) consequences. This provides a deterrent, and reminds users of their obligations to the organisation.

The risk of misuse may also be limited by using a threshold scheme to ensure that m of n users must agree to the extra privileges before they will be granted. This limits the risk of a single malicious user causing damage to the system.

2.2.2 Accountability

The system must strongly authenticate users so that they may be associated with given actions. This provides a deterrent, as users know they will be clearly implicated in any misuse, and ensures that only the guilty are targets of any punitive action. If the authentication mechanisms are inadequate, then the risk of misuse will be unacceptable, as users will be able to both masquerade as other users, and repudiate their own illegitimate actions.
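The m-of-n threshold scheme suggested in section 2.2.1 amounts to a simple check that enough distinct users have approved a privilege escalation before it is granted. A minimal sketch, with all names invented for illustration:

```python
# Illustrative sketch of an m-of-n threshold scheme: extra privileges
# are granted only once at least m distinct users have approved the
# request. Purely hypothetical names and data.

def threshold_met(approvals, m):
    """True once m distinct users have approved (duplicates ignored)."""
    return len(set(approvals)) >= m

approvals = ["alice", "bob", "alice"]   # alice approved twice
print(threshold_met(approvals, 2))      # True  (two distinct approvers)
print(threshold_met(approvals, 3))      # False
```

Counting distinct approvers, rather than raw approvals, is what limits a single malicious user: repeating their own approval gains them nothing.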

2.2.3 Auditability

The system must log the actions of users in detail, so that a post-mortem analysis can determine whether an access is legitimate or not. It may be useful to require that users give a reason for using the optimistic mechanisms, and that this is associated with the audit data. The audit data should also be kept secure so that access to the optimistic mechanism does not allow audit information to be compromised. It should be noted that analysis of the audit trail by a system administrator will be labour intensive - a further motivation for educating users to use the mechanism only in extreme circumstances.

In addition to information about the action being logged, other data such as the state of the current system may also be required to determine whether an access was justified. It should be noted that an important issue to be addressed will be the retention period for such audit data. This is an important consideration for practical implementations of optimistic security, as the ability to recover from breaches may be limited by the ability to reconstruct an accurate picture of the breach from audit logs.

2.2.4 Recoverability

Accesses which write, modify or delete data must be able to be rolled back to ensure that a user cannot irreparably damage a system. Actions which have external behaviour (e.g. firing a missile, sending a letter) should be associated with compensating actions to restore the system to a stable state (e.g. abort the missile, send an apology letter). The issue of recovery in advanced concurrency control systems has been well studied,1 and as such is not further considered in this paper. In general it is assumed that for any transformation on data or security properties of that data (confidentiality, integrity etc.) there is a compensating transaction which exists to reverse this transformation.2

2.2.5 Deterrents

One effective way of reducing risks in an optimistic system is by using punitive measures to deter misuse. The punitive measures themselves can be either optimistic - with the system administrator enforcing the measure on detection of misuse - or pessimistic - with the punitive measure implemented immediately and reversed if the action is determined to be legitimate. An example of a workable pessimistic punishment scheme suggested in [1] is to automatically debit a user's bank account when an action is invoked, and refund the money if the access is deemed legitimate.

3 Formal model

To show how integrity can be maintained in an optimistic system, a formal model is needed that ensures the requirements outlined above are realised by the system. The formal model presented in this section is based on the Clark-Wilson Integrity model[2], and supports the notions of accountability, auditability and recoverability.

1 See IEEE's Executive Briefing: Advances in Concurrency Control and Transaction Processing[5] for an excellent overview of the state of the art in this area. Of particular interest for optimistic security is the notion of sagas[3], which are long-running transactions consisting of independent components. Sagas use compensating transactions to maintain consistency.

2 Where this situation does not exist, it would be inappropriate to use an optimistic system anyway.

Clark and Wilson defined the concept of a well-formed transaction as a transaction where the user is unable to manipulate data arbitrarily, but only in constrained ways that preserve or ensure the integrity of the data [2]. A security system in which transactions are well-formed ensures that only those actions that have been certified by an administrator as safe can be executed.

Clark and Wilson's formal model for data integrity consists of nine rules for constraining transactions. The rules describe constraints on transformations operating on two types of data:

Constrained Data Items (CDIs): Data items to which the integrity model must be applied.

Unconstrained Data Items (UDIs): Data items not covered by the integrity policy (e.g. information typed by the user on the keyboard).

In addition, the Clark-Wilson model defines Integrity Verification Procedures (IVPs), which are used to verify that CDIs are in a valid state, and Transformation Procedures (TPs), which are functions that meet the definition of a well-formed transaction. The model uses two types of rules - certification rules enforced by the administrator, and enforcement rules guaranteed by the system. The nine rules are summarised in Figure 1.

Rule  Description
C1    IVPs must be certified to ensure that the system is valid
C2    TPs on CDIs must be certified to ensure that they result in a valid CDI
C3    TPs must be certified to ensure they implement the principles of
      separation of duties & least privilege
C4    TPs must be certified to ensure that their actions are logged
C5    TPs which act on UDIs must be certified to ensure that they result in
      a valid CDI
E1    Only certified TPs can operate on CDIs
E2    Users must only access CDIs through TPs for which they are authorised
E3    Users must be authenticated
E4    Only the administrator can specify TP authorisations

Figure 1: Clark-Wilson Integrity Rules

3.1 Partially-formed transactions

A partially-formed transaction is defined as a transaction where the integrity of the data is not guaranteed, but where a compensating transaction exists to return the system to a valid state. The transaction is only partially-formed, as the integrity of the system is guaranteed by the compensating transaction, and not by constraining the actual action itself.
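The enforcement rules of Figure 1 (E1 and E2 in particular) can be pictured as a small lookup: the system keeps a set of certified TPs and a set of authorised (user, TP, CDI) triples, and refuses anything outside them. The following is an illustrative sketch only; all names are invented.

```python
# Minimal sketch of Clark-Wilson enforcement rules E1 and E2: only
# certified TPs may run, and only via authorised (user, TP, CDI)
# triples. Hypothetical data throughout.

certified_tps = {"post_entry"}
authorised = {("alice", "post_entry", "ledger")}

def execute(user, tp, cdi):
    if tp not in certified_tps:               # E1: only certified TPs
        return "denied: uncertified TP"
    if (user, tp, cdi) not in authorised:     # E2: authorised triples only
        return "denied: not authorised"
    return f"{tp} applied to {cdi}"

print(execute("alice", "post_entry", "ledger"))  # post_entry applied to ledger
print(execute("bob", "post_entry", "ledger"))    # denied: not authorised
print(execute("alice", "edit_raw", "ledger"))    # denied: uncertified TP
```

The triple, not the user alone, is the unit of authorisation; that is what lets the model express separation-of-duty policies (rule C3).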

In this section, a formal model for integrity using partially-formed transactions is described.3 This model is based on Clark-Wilson's integrity model, but also adds the following components:

3 This model has benefited from feedback during the workshop. In particular, rules have been added to account for dependency relationships in the transactions, and the concepts of PTPs and PCDIs have been added to avoid confusion with the Clark-Wilson model. For a description of the rules for partially-formed transactions as they were initially proposed, see [4].

PTP: A Partial Transformation Procedure. This corresponds to the concept of a partially-formed transaction and describes a procedure which operates on CDIs, but which is not guaranteed to result in valid CDIs.

Compensating TP: A transformation procedure which reverses the actions of a PTP.

PCDI: A partially-constrained data item. A CDI which has been operated on by a PTP.

Like the Clark-Wilson model, IVPs are needed to verify that the system is in a valid state both before and after the execution of one or more partially-formed transactions. This gives the first rule in the integrity model for partially-formed transactions, which is identical to that in the Clark-Wilson model:

C1 IVPs must be certified to ensure that all data items are in a valid state at the time the IVP is run.

The second certification rule in the system applies to PTPs, and outlines the main requirement for optimistic security - i.e. the existence of a valid compensating transaction. This is the cornerstone of the partially-formed transaction integrity model.

C2 All PTPs must be certified to provide a compensating TP that will return any modified CDI to a valid state.

The following enforcement constraints exist to ensure that the PTP is authorised, and that the accountability and auditability requirements are maintained.

E1 The system must ensure that only PTPs that have been certified against requirement C2 are allowed to run.
E2 The system must ensure that users can only use those PTPs for which they have been authorised.
E3 The system must authenticate the identity of each user.
E4 Each PTP must write to an append-only log all the information required to reconstruct and reverse the operation.
E5 Only an administrator is permitted to authorise users to access PTPs.

These rules have similar counterparts in the Clark-Wilson model. However, rule E4 is specified as an enforcement rule rather than a certification rule (as in C4 in Clark-Wilson). This is a tacit recognition that programs usually do a poor job of audit, and as the integrity of the model relies on the ability to recognise and reverse anomalous behaviour, we need to ensure this function. This rule requires then that the operating system/security system provide audit independently of the programs it is constraining. An example of this is the tudo system, which uses the system call tracing features of the Solaris /proc filesystem to enable changes to constrained files to be audited and recovered[4].

In addition to the above rules, a mechanism is needed for ensuring that CDIs operated on by PTPs can be validated. This is done by first requiring that PTPs mark CDIs they have accessed as PCDIs, and then by ensuring that rules exist to convert these PCDIs back to valid CDIs. Hence the following rules:

E6 CDIs which are acted on by PTPs must be marked as PCDIs.
C3 Compensating TPs must be certified to result in valid CDIs.
C4 The administrator must certify PCDIs as being valid CDIs or, if invalid, must apply the compensating transaction to restore the PCDIs to valid CDIs.

Rule C3 is needed to ensure that the compensating transaction is well-formed. In theory, this is a necessary precondition for ensuring integrity. However, in practice it may be possible to relax this constraint (see section 3.1.2). Rule C4 is needed to ensure that the administrator enforces the retrospective security policy.

Lastly, the system needs to deal with the case where PCDIs are used as inputs to other PTPs. In the case that a PTP is reversed, we need to ensure that any other PTPs that rely on the compensated PTP are also reversed. Hence the following rule:

E8 If a PTP on a PCDI is reversed via a compensating TP, then all subsequent PTPs to PCDIs that depend on this item must also be reversed.

Note that the definition of depend is left to the implementer. In a system in which TPs consist of simple reads and writes, the depends relation could be simply defined by the recursive predicate:

  a depends b  <=>  (∃ PTP_i • PTP_i read b ∧ PTP_i wrote a)
                 ∨  (∃ PTP_j • ∃ c • (PTP_j read c ∧ PTP_j wrote a) ∧ (c depends b))
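For a system whose PTPs are recorded as simple read/write sets, the recursive depends predicate can be sketched directly. The data structures below are illustrative only, assuming a log of (name, items read, items written) records:

```python
# Sketch of the recursive "depends" predicate: item a depends on item b
# if some PTP read b and wrote a, either directly or through an
# intermediate item c. Hypothetical log of (name, reads, writes).

ptps = [
    ("ptp1", {"b"}, {"c"}),   # ptp1 read b and wrote c
    ("ptp2", {"c"}, {"a"}),   # ptp2 read c and wrote a
]

def depends(a, b, seen=None):
    """True if a depends on b, directly or transitively."""
    seen = seen or set()
    for _, reads, writes in ptps:
        if a in writes:
            if b in reads:                       # direct dependency
                return True
            for c in reads - seen:               # recurse via intermediates
                if depends(c, b, seen | {a}):
                    return True
    return False

print(depends("a", "b"))   # True, via the intermediate item "c"
print(depends("b", "a"))   # False
```

A cascading-rollback implementation of rule E8 would reverse the compensated PTP and then every PTP that wrote an item satisfying this predicate.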

Together these three certification and eight enforcement rules constitute the basis of an integrity model for partially-formed transactions. In a manner similar to the Clark-Wilson integrity model, it can be shown that the application of these rules leads to a secure system. To summarise:

- Rule C1 ensures that we can be certain that the system is initially valid.
- Rules C2 and E1 ensure that any transformations to the system can be reversed, and rule C3 ensures that this reversal results in a valid system.
- Rules E2-E4 provide the accountability and auditability properties which are desirable to reduce risks in an optimistic system, and which may be necessary to enforce the compensating TP.
- Rule E5 makes the scheme mandatory.
- Rules E6 and C4 ensure that either the system is verified to be valid, or the compensating transaction is applied. This means that integrity is guaranteed both for those transformations that are legitimate, and for those that are deemed to be violations of the security policy.
- Rule E8 guarantees integrity for those transformations which rely on other erroneous transformations.

By iterating through these rules, we can see that if the system is initially valid, the application of a PTP will always lead to a valid system.
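The lifecycle these rules describe (a PTP marks the items it touches as PCDIs, logs enough information to reverse itself, and the administrator later certifies the result or compensates) can be walked through in a toy example. Everything below is an illustrative sketch, not part of the formal model:

```python
# Toy walk-through of rules E4, E6 and C4: a PTP logs its effect and
# marks the item as a PCDI; the administrator then either certifies the
# item as a valid CDI or applies the compensating TP. All names invented.

items = {"record": {"value": 1, "state": "CDI"}}
audit = []   # append-only log (rule E4): enough info to reverse each PTP

def ptp_increment(name):
    item = items[name]
    audit.append((name, item["value"]))   # record the prior value
    item["value"] += 1
    item["state"] = "PCDI"                # rule E6: mark as PCDI

def administrator_review(name, legitimate):
    item = items[name]
    if legitimate:
        item["state"] = "CDI"             # rule C4: certify as valid
    else:
        _, old = [e for e in audit if e[0] == name][-1]
        item["value"] = old               # rule C4: compensating TP
        item["state"] = "CDI"

ptp_increment("record")
print(items["record"])    # {'value': 2, 'state': 'PCDI'}
administrator_review("record", legitimate=False)
print(items["record"])    # {'value': 1, 'state': 'CDI'}
```

Either branch of the review returns the item to the CDI state, which is exactly the "verified valid or compensated" guarantee summarised above.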

3.1.1 Reducing program certification

The partially-formed transaction rules mean that less certification of programs needs to be performed than is required for well-formed transactions. This is possible, as it is generally easier to determine what effects a program has had on its environment than it is to certify it as exhibiting a given behaviour. If integrity is assured after the fact, it is possible to compare actual behaviour with expected behaviour, and reverse the actions in the event of a compromise. Hence, while partially-formed transactions require only one less certification rule, certification of compensating transactions should be much simpler than certifying individual programs, as it should be largely possible to provide compensating TPs which are generic for a large range of applications (e.g. reverse all modifications to the filesystem). However, it should be noted that the gain from reduced certification of programs needs to be balanced against the extra load on the security administrator in analysing the audit logs and applying IVPs to determine whether or not the accesses were legitimate.

It is interesting to note that Clark and Wilson pointed to the large number of certification rules as a weakness in their model:

  It is desirable to minimise certification rules, because the certification process is complex, prone to error, and must be repeated after each program change. In extending this model, therefore, an important research goal must be to shift as much of the security burden as possible from certification to enforcement[2].

3.1.2 Composition of well-formed and partially-formed transactions

One important issue is whether TPs and PTPs can be composed. This is necessary if optimistic and pessimistic security systems are to coexist. In order to determine this, there are two issues which need to be addressed:

Composability of completed PTPs: The first issue is whether PTPs which have either been validated or compensated can be composed with Clark-Wilson TPs. Providing the TP does not rely on any intermediate state of the PTP, then the integrity properties of the partially-formed transaction model ensure that the TP should operate on valid objects. This idea can be taken further: taken as a single atomic unit, a sequence of one or more PTPs which have all either been validated or compensated actually meets the requirements for a Clark-Wilson TP. This is because, providing all the rules in the partially-formed transaction integrity model hold, the C2 rule from the Clark-Wilson model (TPs on CDIs must result in valid CDIs) must also hold. This property allows us to arbitrarily nest PTPs inside TPs. Another corollary, alluded to earlier, is that it is possible for a compensating transaction to contain elements which are partially-formed but which, when taken as a single atomic unit, form a well-formed transaction. This can provide the equivalent of local undo-redo semantics, where some elements of a compensating transaction also have the ability to be recovered from. The idea of encapsulating a number of transactions which are not well-formed within a TP is consistent with the Clark-Wilson model, as Clark and Wilson themselves state:

  During the mid-point of the execution of a TP, there is no requirement that the system be in a valid state[2].

While Clark and Wilson were referring here specifically to serialisability and concurrency control of TPs, the same applies to any transaction which exhibits external consistency, but which may be internally inconsistent.

Composability of TPs with PCDIs: In some cases it may be desirable to have TPs which operate on PCDIs, thus exposing the intermediate state of the PTP. A concrete example is the ability for the compensating action to be well-formed. However, this issue is simply addressed by having the TP treat the PCDI as though it were a UDI. In this case, the Clark-Wilson certification rule C5 (TPs which act on UDIs must be certified to ensure they result in a valid CDI) provides the necessary properties for these procedures to be composed.

4 Applications of optimistic security

4.1 Emergency "break-glass" tools

Equipment such as alarms, emergency stop buttons and fire-fighting equipment (e.g. axes, hoses etc.) are often stored in "break-glass" containers for emergency use. The benefit of such containers is they require users to make a conscious decision about using the equipment, thus preventing accidental misuse. In addition, "break-glass" containers are usually labelled with warnings about the penalties of deliberate misuse, providing a significant deterrent.

Such a device in a computer security system could be useful in coping with an emergency situation. The software equivalent of the "break-glass" container would be a program which is suitably constrained using an optimistic security system, and which gives stern warnings about misuse before it is activated. In an emergency situation, the user could operate the tool, but would have to explain themselves to the system administrator after the event in order to avoid the associated penalty. If the system administrator decided the use was not legitimate, they could use the recovery mechanism and enforce the penalty.
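A software "break-glass" wrapper of the kind described above might look like the following sketch: the user must supply a reason, which is appended to an audit log before the privileged action runs. The function, file name and action here are all hypothetical.

```python
# Illustrative "break-glass" wrapper: record who, when and why in an
# append-only audit log before running the privileged action. All names
# are invented for the example.

import datetime

def break_glass(user, reason, action, audit_path="breakglass.log"):
    """Run a privileged action only after recording a reason for it."""
    if not reason.strip():
        raise ValueError("a reason must be given before exceeding privilege")
    stamp = datetime.datetime.now().isoformat()
    with open(audit_path, "a") as log:          # append-only audit record
        log.write(f"{stamp} {user}: {reason}\n")
    return action()

record = break_glass(
    "bob",
    "life-threatening emergency; doctor cannot be located",
    lambda: "patient-record contents",          # stand-in privileged action
)
print(record)   # patient-record contents
```

Refusing to run without a reason is the software analogue of having to physically break the glass: it forces an explicit, conscious, and auditable decision.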

4.2 Retrospective content filtering

One of the negative aspects of systems which provide filtering of material that is deemed harmful or inappropriate is that the algorithms used to determine which content to filter can often result in false matches. The result of this is that users can be denied access to legitimate content, leading them to search for ways to circumvent the system.

By applying the principles of optimistic security, users would be able to access any material they desired; an administrator would log all material accessed and run the content filtering algorithm retrospectively. This would give a list of matches which the administrator could then further investigate to determine whether the content is inappropriate.

Such a mechanism, when coupled with an appropriate system of punitive measures (e.g. reprimanding a child, dismissing an employee, or posting a list of those users who accessed pornography in the last week on the company notice-board!), can be more effective than simply disallowing access, as its enforcement enables the administrator to reinforce the policy to both the culprit and any other potential perpetrators.
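Retrospective filtering inverts the usual pipeline: the filter runs over an access log after the fact rather than blocking requests. A toy sketch, with an invented log and a deliberately crude stand-in for the filtering algorithm:

```python
# Sketch of retrospective content filtering: every access is permitted
# and logged; the (fallible) filter then runs over the log to produce a
# shortlist for the administrator to investigate. Data is hypothetical.

access_log = [
    ("alice", "quarterly-report.html"),
    ("bob",   "casino-winnings.html"),
]

def looks_inappropriate(resource):
    # crude stand-in for a real content-filtering algorithm
    return "casino" in resource

matches = [(user, res) for user, res in access_log
           if looks_inappropriate(res)]
print(matches)   # [('bob', 'casino-winnings.html')]
```

Because the filter only flags entries for human review rather than blocking requests, its false matches cost the administrator some investigation time instead of denying users legitimate content.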

4.3 Sandboxing "somewhat-trusted" applications

Traditionally, the focus of "sandboxing" (or constraining the access privileges of programs) has been on untrusted code that is downloaded from the Internet. However, many users of personal operating systems use a large number of applications which have unrestrained access, and hence the potential to damage the integrity of their systems. For such applications, it is often not possible to maintain integrity using traditional Clark-Wilson principles, as it is either too expensive to certify the program, or its source code is unavailable for certification.

By creating a sandbox along optimistic principles, the damage caused by the use and misuse of these "somewhat-trusted" applications could be limited, while still allowing full functionality. For example: an optimistic sandbox could track the changes made to the filesystem by a word processing program, and allow the user to undo these changes in the event of a crash or malicious macro virus. This would result in greatly improved security (and safety) for these applications without loss of functionality, or expensive certification of the programs.

4.4 Watching your system administrator watching you

The formal integrity model presented in this paper specifies a mandatory access control system (authorisation determined by an administrator). This is an artifact of basing the model on Clark-Wilson. However, there is no reason why an optimistic security model could not be applied to a discretionary access control system (authorisation determined by the object's owner/creator). Currently system administrators have more or less unfettered access to a system, including data which users may wish to keep private (e.g. private email). By applying an optimistic security system, access to these files could be constrained such that the user is informed whenever an administrator accesses files that they consider to be part of their private set. There may be occasions where such access is legitimate, but others where this may be intrusive or a breach of the user's privacy. By ensuring that users know when management/administrators are accessing their files, users can have more confidence that their privacy is being maintained.

5 Conclusions

The Clark-Wilson integrity model provides a means by which a system can be constrained to ensure that only legitimate accesses can be executed. However, this paper argues that under exceptional circumstances this requirement should be able to be relaxed. The notion of a partially-formed transaction provides a mechanism by which a system can seek to be optimistic about authorising user actions, while still maintaining system integrity. It is believed that such a mechanism can help to increase the flexibility of security systems in environments where hard and fast rules are not always the best option.

This paper has shown that an integrity model for an optimistic system is feasible, and moreover that, providing certain preconditions hold, the transformations for such a system are composable with those which use a more traditional pessimistic system. Finally, a number of novel applications of the optimistic security model are given that show how such a system could be useful.

Modeling External Consistency of Automated Systems by James G. Williams, Leonard J. LaPadula

The MITRE Corporation

Published in
Journal of High-Integrity Systems,
Vol. 1, No. 3, pp. 249-267, 1995.

Abstract: We present two models of how to achieve an external-consistency objective, motivated by the enterprise integrity of Clark and Wilson. Our notion of external consistency deals with the correctness of both assertions (propositions claimed to be true by their source) and requests (commands, authorizations, and similar sentences intended to induce action or elicit information). Each model considers assertions and requests from the perspective of both users and the automated system; this gives rise to defined responsibilities and requirements for both. The first, simple model assumes that users can reliably provide correct inputs to the automated system. The second model, taking a more realistic view of human enterprises, assumes that only experienced users have such capabilities. In both cases, the primary requirement imposed on the automated system is that it produce correct outputs on the basis of presumably correct inputs. We call this an "output warranty" requirement to take account of the vendor's responsibility for the quality of the automated system.

1 Introduction

This paper exhibits a formal model of higher-level system requirements related to the commercial integrity policy of Clark and Wilson (Clark, 1987) and investigates consequences for the design of high-integrity systems.

1.1 Background

High-level security goals for an automated system most often specify that the system should help prevent inappropriate disclosure and theft of information, should promote correctness of information and information sources, or should ensure availability of computing resources and services. Until recently, security designs have emphasized prevention of unauthorized disclosure, focusing primarily on nondisclosure and privacy for data files, with secondary security features provided to support this central capability. A well-grounded technology, nurtured primarily by the US Government through the National Computer Security Center, supports the design, development, and evaluation of such systems. Today's computer security technology, perhaps best represented by the now famous Orange Book (so named for the color of its cover) (NCSC, 1985), can support a wide variety of enterprise security policies, as discussed in Bell's paper on shared properties of policies (Bell, 1991). But it is not the whole story.

Clark and Wilson, in their landmark 1987 paper on computer support for information integrity (Clark, 1987), argued that

We agree with the main tenets of their arguments. However, while they and others have put their attention on the foundational characteristics of mechanisms that can support a commercially oriented integrity policy, we have investigated a fundamental aspect of the policy itself. In this paper we augment the findings of Clark and Wilson (Clark, 1987; Clark, 1989) with models of external consistency.

1.2 Purpose

In the context of computing systems, external consistency is the ability of a computing system to give correct information about its external environment. Some typical examples are:

This paper explores external consistency and related ideas to attain a better understanding of integrity in automated systems. To ensure that our findings are based on more than intuitive notions, we use formal reasoning for convincing ourselves, and the reader, of the validity of key claims.

1.3 Content and Organization

Section 2 first characterizes an automated information system in terms of its interface with its users. This provides the framework for discussing the external consistency objective as it applies to some typical examples.

Sections 3 and 4 present successively more complex models showing different ways of allocating responsibility for achieving this objective between users and the automated system.

In both sections we state and prove theorems that show that an external consistency objective can be met and that characterize the systems being modeled. The same theorems are stated and proved again in Appendix A, this time in a formalized framework, but completely parallel to the development in the body of the paper.

In section 5 we summarize our view of the models, give conclusions about the value of modeling integrity objectives and external-interface requirements, and indicate directions for development of more complex models that may better serve the needs of real enterprises.

Throughout this document, defining occurrences of technical terms are given in italics. These terms are collected in a glossary in Appendix B.

2 External Consistency

We start by describing an automated information system in terms of its interface with its users. The external-consistency objective is then defined in this context.

2.1 External System Interface

Key to modeling external consistency is the fact that computers can produce output that users interpret as assertions about real-world entities. We envision a situation in which users exchange assertions, requests, and questions with an automated information system. We call these units of information exchange sentences. Sentences discuss conceptual and real-world situations including, possibly, the state of the automated system itself.

An assertion states a proposition claimed as true by its author. A question states a proposition and indicates that its author wishes to know if the proposition is true or, more generally, wishes to find a true instantiation of it. For our purposes, requests include commands, authorizations, and similar sentences whose purpose is to effect changes in the author's environment; thus, a request states a proposition that is to become true. We do not distinguish among the different kinds of requests, because the issue of whether a request should, must, or merely may be fulfilled does not affect our models. We are interested in sentences like the following:

user: Withdraw $150.00 from my checking account (immediately; issued 10/2/1993, 11:45:00 AM EST by a user authenticated as holder of account #11695-30009).

(Authentication of the user may have been done directly by the system or indirectly by a bank teller who clarified the term "my checking account" by identifying the customer's account number.)

system: Please take your cash.

user: Display my current balance (issued 10/2/1993, 11:45:21 AM EST).

system: Account # 11695-30009 has a current balance of $-53.87 (issued 10/2/1993, 11:45:27 AM EST).

system: What is the current status of mortgage application #1276-89-003?

user: Application #1276-89-003 has been canceled.
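The three kinds of sentences can be sketched as a small type hierarchy. This is our own illustrative encoding, not part of the model's formalism; the class names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sentence:
    """A unit of information exchange between users and the system."""
    proposition: str

class Assertion(Sentence):
    """States a proposition claimed as true by its author."""

class Request(Sentence):
    """States a proposition that is to become true; covers commands,
    authorizations, and similar sentences."""

class Question(Sentence):
    """States a proposition whose truth (or a true instantiation of
    which) the author wishes to learn; treated in the models as a
    request for information."""

# The withdrawal example above, viewed as a request sentence:
w = Request("withdraw $150.00 from account #11695-30009")
```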

There are several potential difficulties associated with real-world propositions that do not arise when working with the propositions of a formal system such as first-order logic. The truth or falsity of a real-world proposition can vary with time unless it contains explicit qualifications explaining when it is true. Another potential difficulty with real-world propositions is that they are typically only approximately correct unless a degree of precision is included in the proposition itself or in its context of interpretation. For example, the following usefully simple assertion is unlikely to be exactly correct because most events do not happen on minute boundaries.

system: The sun will rise at 5:47 AM today.

Another potential difficulty is that real-world propositions tend to rely heavily on context for their meaning through the use of complex semantic conventions.

We account for the potential problems just mentioned by introducing the notion of a stable proposition. A stable proposition is one whose truth is independent of when the proposition is issued and whose precision is specified within the proposition itself. Thus, for modeling purposes, the assertion about the sun rising might be replaced with the following assertion.

system: At Logan Airport on 23 September 1992, the sun rises between 5:46:59 AM EST and 5:47:01 AM EST.

In this form, the truth or falsity of the assertion depends only on the language used to express it, English in this case, and on facts of astronomy, but not on contextual information such as the time or place it was issued or the intent of its author.

For the sake of a simple model, we view a computing system through a stabilizing filter that maps each proposition given to or received from the system to a corresponding stable proposition. For a particular system, the stabilizing filter is determined by system and application-software documentation and, possibly, by the user community. This stabilizing filter is part of the model interpretation and is not explicitly involved in the construction of the system, although particular integrity mechanisms may include implementations of stabilizing filters.
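A stabilizing filter can be sketched as a function that folds the issue time and an explicit precision bound into the proposition itself. This is a minimal illustration of the idea; the type and function names are our own, and a real filter would also have to resolve context-dependent phrases such as "my checking account".

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class StableProposition:
    """A proposition whose truth is independent of when it is issued:
    the time of reference and the precision bound are part of the
    proposition itself rather than left to context."""
    body: str                 # e.g. "the sun rises at Logan Airport"
    reference_time: datetime  # when the described event holds
    precision: timedelta      # explicit error bound

def stabilize(raw_text: str, issued_at: datetime,
              precision: timedelta) -> StableProposition:
    """Map a context-dependent sentence to a stable proposition by
    attaching the issue time and a stated precision."""
    return StableProposition(body=raw_text,
                             reference_time=issued_at,
                             precision=precision)

# The sunrise assertion, recast in stable form:
p = stabilize("the sun rises at Logan Airport",
              datetime(1992, 9, 23, 5, 47, 0),
              timedelta(seconds=1))
```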

Finally, with each user-supplied sentence, there is an associated direct observation to the effect that that input occurred. This direct observation includes the uninterpreted, literal input as well as related facts about the input device used and the value of the system clock at the time of the input. Other direct observations might be associated with output: the system can observe its own actions. Still other observations arise from internal events. For example, input events might happen at too fast a pace, so that the system hardware directly observes buffer overflow and takes appropriate action. One's view of direct observations will depend on the level of abstraction used for modeling. At a concrete level, a direct observation might consist of a hardware interrupt with accompanying transfer of data from an input device to an internal register. A more abstract interpretation might view a parsed input sentence accompanied by a user identification as a direct observation. In this case, the modeler sufficiently trusts the system's elementary parsing and authentication mechanisms to place them beneath the level of the model. In any case, direct observations must be correct because they are the foundation upon which the integrity of the system is based.

2.2 External-Consistency Objective

The external-consistency objective is that each output assertion or request be real-world correct. We motivate the more detailed explanation of what it means to be "real-world correct" by examining the meaning of "consistency". Informally, consistency is agreement or harmony of parts or features, specifically, the ability to be asserted together without contradiction (Merriam-Webster, 1991). This informal meaning suggests its technical uses in formal logic. One of these is that of consistency between a formal system and a particular semantic interpretation — each theorem of the system must be true in the given interpretation. This is the use given in (Woodcock, 1988):

If P denotes a set of sentences, then a formal system is consistent with respect to an interpretation if whenever P |– W then P |= W — that is, if we can derive something formally then we can reason about the meanings of the sentences concerned and arrive at the same conclusion [where |– is the syntactic turnstile and |= is the semantic turnstile for the interpretation at hand].

In our models, the syntactic turnstile could represent an informal deduction of the system, such as “there are 3 widgets on shelf C-54b in warehouse 17-R [stabilization component: 1993, July 15, 7:43 AM]”. The semantic turnstile could represent a conclusion arrived at by inspection of the warehouse, such as “there are 2 widgets on shelf C-54b in warehouse 17-R [stabilization component: 1993, July 15, 2:54 PM]”. We have intentionally used the quantities “3” and “2” to highlight the point that "consistency" does not imply "equality". One can imagine, for example, that the “formal system” is a modal logic and that in the time period from 7:43 AM to 2:54 PM one widget was picked for shipment. Thus, the meaning suggested by “consistency” as sometimes used for formal logics seems appropriate for our model. In case the inspection of the warehouse should produce the conclusion “there are 2 widgets on shelf C-54b in warehouse 17-R [stabilization component: 1993, July 15, 7:43 AM]”, we would have an inconsistency — exactly what our model intends. Thus, the sentences “there are 3 widgets” and “there are 2 widgets” may validly correspond to or agree with each other, with formal consistency being achieved via stabilization.

We have said that the external-consistency objective is that each output assertion or request be real-world correct. For an assertion, correctness means correspondence to reality. For a request, correctness means legitimacy according to some preassigned criteria. Informally, legitimacy means consistency with some predetermined goals of the enterprise employing the automated system. In our models, questions are treated as requests for information. For questions, legitimacy normally entails relevancy to the predetermined goals of the enterprise.

We thus have two slightly different but closely related objectives. The external-consistency objective for assertions is that each assertion received from the system and recast in stable form be a true description of reality. The external-consistency objective for requests is that each request received from the system be legitimate according to some enterprise-specific or application-specific criteria. We assume for modeling purposes that the legitimacy criteria for requests are stable propositions. We also assume that each enterprise or application provides criteria for judging the legitimacy of requests, that is, for judging their consistency with, or relevance to, the goals of the enterprise employing the system. In the case of specific applications, these criteria will generally be suggested by the application itself. The objective is that these associated legitimacy criteria be satisfied at the time a request is made.

For convenience, we combine these closely related objectives into a single objective for modeling.

External-Consistency Objective

Each assertion or request received from the system and recast in stable form is correct: an assertion is a true description of reality; a request is legitimate according to some enterprise-specific or application-specific criteria for judging its consistency with the goals of the enterprise employing the system.

This objective is relevant to all assertions made by the system, such as account status reports, financial transactions, and purported facts about the behavior of programs. It is relevant to all requests for action made by the system, such as missile launch orders, stock transactions, or requests for maintenance. And it is relevant to all requests for information issued by the system, such as requests for status information, requests for corroborating input, requests for users' opinions of value of alternatives, and requests for estimates of cost or duration for project activities.

In many applications, the legitimacy criteria for a request will include the constraints that the request be both satisfiable and not yet satisfied. The legitimacy of an input request may depend on which I/O device is used and when, so that the legitimacy of the request is derived from the direct observation, not just its contained request.
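As a hypothetical illustration, legitimacy criteria for the withdrawal request of section 2.1 might combine satisfiability with facts drawn from the direct observation. The predicate, device identifiers, and business-hours rule below are our own assumptions, not criteria given by the paper.

```python
from datetime import time

TRUSTED_DEVICES = {"teller-3", "atm-17"}  # hypothetical device ids

def legitimate_withdrawal(amount: float, balance: float,
                          device: str, issued_at: time) -> bool:
    """Judge the direct observation, not just the contained request:
    the withdrawal must be satisfiable (covered by the balance) and
    must arrive from a trusted device during business hours."""
    return (amount <= balance
            and device in TRUSTED_DEVICES
            and time(9, 0) <= issued_at <= time(17, 0))

ok = legitimate_withdrawal(150.00, 500.00, "atm-17", time(11, 45))  # True
```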

In some contexts, questions (that is, requests for information) must make sense or be not immediately answerable from known information in order to be legitimate. In others, they must not attempt to violate privacy constraints. In a classroom, virtually any question might be considered legitimate. Classroom questions illustrate the fact that there can be large classes of requests that are always legitimate.

2.3 Relationship to the Clark-Wilson Model

Clark and Wilson (Clark, 1987) identify a single main objective for a commercial integrity policy, namely "to ensure integrity of data to prevent fraud and errors." They point out two mechanisms at the heart of fraud and error control: the well-formed transaction and segregation of duty among employees. The idea behind the well-formed transaction is that a user should not manipulate data arbitrarily, but only in constrained ways that help preserve integrity of the data. As an example, Clark and Wilson mention the principle of double entry bookkeeping. The second mechanism, segregation of duty, attempts to ensure the external consistency of data objects: the correspondence between data objects and the real world objects they represent.

Our general notion of external consistency, the ability of a computing system to give correct information about its external environment, states the higher-level objective from which the Clark-Wilson objectives derive. We further explicate this relationship in the framework of a taxonomy that we have developed for characterizing security policies, models, or requirements.

This taxonomy, first presented at the 4th Annual Computer Security Foundations Workshop (LaPadula, 1991), identifies 5 layers representing stages in the elaboration of security requirements. The first stage in this taxonomy is that of high-level objectives that are to be met through the combined efforts of a computing system and its users. The second stage identifies user responsibilities and system requirements sufficient to achieve the objective in the first stage. Later stages identify internal system structure and mechanisms that will guarantee satisfaction of the requirements.

Table 1. Taxonomy for Elaboration of Requirements

Stage 1: Trust Objectives. A trust objective specifies what is to be achieved by an information-processing enterprise, an important component of which is a computing system. A nondisclosure objective, for example, might state that there should be no unauthorized viewing of classified data. Our external-consistency objective is another example. A trust objective implicitly constrains the relationship between the computing system and its environment.

Stage 2: External-Interface Requirements Model. An external-interface requirements model describes the behaviors of the computing system, its users, and other entities in the system's environment in such a way as to allocate responsibilities for achieving the identified trust objectives. In particular, the external model shows how the system supports (i.e., contributes to) the identified trust objectives. The external model treats the system as an atomic unit, constraining its behavior without unduly constraining its internal structure. This treatment might be accomplished by identifying requirements which must hold at the system's external interface.

Stage 3: Internal Requirements Model. An internal requirements model describes, in an abstract manner, how the system responsibilities given in the external model are met within the system. It may do this by specifying constraints on the relationships among system components and, in the case of a TCB-oriented design, among controlled entities.

Stage 4: Rules of Operation. Rules of operation explain how the internal requirements developed in the internal model are enforced.

Stage 5: Functional Design. A functional design, like the rules of operation, specifies the behavior of system components and controlled entities, but is a complete functional description.

The first step in elaborating integrity objectives is to allocate responsibility between a system and its users, identified as stage 2 in the taxonomy above. The second step one might take is to construct a simplified model of the system's internal workings to ensure that the system's responsibilities are met. Our work concentrates on the first step of elaboration, whereas that of Clark and Wilson focuses on the second. Not surprisingly, however, some of their internal mechanisms are analogous to some of our interface requirements. In section 4 we will introduce integrity validators, which will be seen to be similar to specialized Transformation Procedures (TPs) that are used to convert unconstrained user inputs to Constrained Data Items (CDIs).

3 Warranties on Correctness of System Output

In this section we develop a first model aimed at realizing the external-consistency objective for a computer-based enterprise. This first model specifies an allocation of responsibilities between the vendor and the system's users that allows the vendor to actively share responsibility for achieving the external-consistency objective. The fundamental responsibility of the users is to provide correct inputs to the automated system. This model may be naive for many applications because users cannot always be trusted to provide correct inputs. Nevertheless, this first simple model explicates the basic modeling approach and provides a common basis for the more useful model developed later in the paper.

The formal models that we present can be interpreted as models of "turn-key" systems produced by system integrators for end users. Such systems, normally dedicated to supporting a single enterprise's activities, generally consist of many components, possibly sited throughout the geographical space of the enterprise. Many of the examples of external consistency that we have cited are immediately relevant to such systems. Equally, though, our models are relevant to other, more basic systems such as general-purpose computing platforms produced by operating-system and DBMS vendors. When interpreted in this way, later integration contractors are among the users of the provided computing system, and application software is installed via inputs to the basic system. We invite the reader to interpret the models we present in either or both ways.

In devising requirements for the automated system, we consider those interface properties whose correct realization is the responsibility of the vendor. We want to determine what the vendor-supplied hardware/software configuration should do to support the external-consistency objective. Consequently, we are primarily interested in properties that are satisfied whenever the system is installed properly and has not been tampered with or otherwise inappropriately modified.

There are some output assertions that a properly installed system, and its vendor, might accept full responsibility for. When the system is turned on, it might report that the system clock has gained three days since the system was turned off. This report is based solely on readings of the system clock, a trusted input device that directly observes the passage of time. The correctness of most output assertions, however, depends not only on correct system functioning, but on the correctness of previous relevant inputs. In the case where an operating system is being modeled, these inputs would encompass the installation and maintenance of relevant application software. A program reporting sunrise times is a very simple example, but the correctness of its output still depends on both the correctness of its software and the correct setting of the system clock.

A more realistic example is the stock-market sell order. In issuing the order, the system may be implicitly asserting that the sell order was approved by an authorized user other than the one initiating it, because its controlling organization requires that assertion to be true for a sell order to be legitimate. However, the computer system cannot force this assertion to be true; it can, at best, assert that it has received a sell request from one user and an approval from another user who was authenticated as different from the one making the request.

3.1 Warranted Outputs and Bases

For the stock-market sell order, we want the sell order issued by the computer system to be a correct representation of the will of the enterprise it works for. In the terms of the external-consistency objective, the sell order, recast in stable form, is to be a legitimate request. What the vendor and the computer system can do is to warrant the sell order: they can guarantee that if the user-supplied information on which the sell order is based is correct, then so is the sell order. For reasons of audit and to serve other, related enterprise activities such as error correction, we would like the computer system to maintain an accurate record of the particular information that supports the correctness of the sell order. We call this set of information an input/output (I/O) basis.

For reasons of efficiency, we assume that not all system outputs must meet the external-consistency objective. Those that should can be marked as warranted by the system. Warranted outputs in our models might be produced by certified transformation procedures in the sense of Clark and Wilson (Clark, 1987), or they might be produced by the underlying operating system.

For each warranted output, there is, in principle, an associated I/O basis, consisting of previous inputs and outputs used in the computation of that output. The motivation for including previous outputs in an I/O basis may not be immediately obvious; they are included to allow a more natural correspondence with typical business practices. For example, the I/O basis for this month's bank statement includes the closing balance from last month's statement, which is appropriate, if it wasn't contested. The I/O basis for this month's inventory includes the previous inventory report, which is applicable if it was accepted by the management of the enterprise.

3.2 Automated System Requirements

In the above examples, the system can support external consistency for a warrantable output by guaranteeing that if the user-supplied information on which that output is based is correct, then so is that output. We thus have the following output-warranty[1] requirement.

Output-Warranty Requirement

The system's vendor shall guarantee that every output marked as warranted is warrantable in the following sense: the output is correct whenever every sentence in its I/O basis is correct, and that I/O basis is itself warrantable.

An I/O basis is warrantable if, and only if, all of its outputs are warrantable in the above sense.

In effect, the vendor supplies a limited warranty, where the limitations for a given output are given by its I/O basis. The above notions of warrantable output and warrantable I/O basis are defined by mutual recursion; circularity is avoided provided all I/O histories are finite. It is conceivable that the vendor's system produces an initial "seed" output which is a compiled constant, such as a version number. Such an output depends for its correctness solely on the fact that it is produced by the vendor's system; it is an output for which the vendor accepts full responsibility. We note that the output-warranty requirement implies a need for some form of automated deduction, since a machine is unlikely to have a way of making semantic judgments directly.
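The mutual recursion in these definitions can be sketched directly. Here the vendor's semantic guarantee is supplied as an opaque predicate, since a real system would discharge it by automated deduction; the names and representation are our own illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Input:
    text: str

@dataclass
class Output:
    text: str
    basis: List["Sentence"] = field(default_factory=list)  # its I/O basis

Sentence = Union[Input, Output]

def warrantable(out: Output,
                implied_by_basis: Callable[[Output], bool]) -> bool:
    """An output is warrantable iff the vendor's guarantee holds for it
    (correctness of its basis implies its own correctness, supplied
    here as a predicate) and its I/O basis is warrantable."""
    return implied_by_basis(out) and basis_warrantable(out.basis,
                                                       implied_by_basis)

def basis_warrantable(basis: List[Sentence],
                      implied_by_basis: Callable[[Output], bool]) -> bool:
    """An I/O basis is warrantable iff all outputs in it are warrantable.
    Finite I/O histories keep this mutual recursion well-founded."""
    return all(warrantable(s, implied_by_basis)
               for s in basis if isinstance(s, Output))

# A compiled-constant "seed" output has an empty I/O basis, so it is
# warrantable whenever the vendor's guarantee holds for it:
seed = Output("version 1.0")
assert warrantable(seed, lambda o: True)
```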

For some warranted outputs, the relevant I/O basis will include one or more direct observations made by the automated system. For example, a stock-market sell order is likely to be issued at a time specified by an authorized user, so that the I/O basis for the sell order includes, at least implicitly, an observation of the system clock. Thus, we have the following correct-observation requirement.

Correct-Observation Requirement

Each direct observation is a correct, stable assertion.

The correct-observation requirement is the only direct link between the system and the users' semantic environment. This requirement captures the idea that direct observations behave like axioms of a theory.

3.3 User Responsibilities

External consistency depends on correct inputs. In this first admittedly naive model we put the full burden for correct inputs on every user of the automated system.

Input-Correctness Requirement

All assertions and requests made to the system by its users must be correct.


This user requirement presupposes another that should be acknowledged. Since each assertion or request made to the system must be semantically the same for all interested parties, users must agree on a common language for describing real-world situations. This description language may be partially constrained by the vendor of the system. For example, the vendor may supply basic constructs with which propositions in the users' description language may be expressed.

To satisfy the input-correctness requirement, someone must restrict physical access to the system so that only those individuals who will meet the responsibilities of this requirement are allowed to provide input.

In the case where the model is applied to an operating system, the inputs to the system will usually include software installations. In this case, the installation of an application program is considered an assertion made by the user installing the program and is subject to the input-correctness requirement. Correctness of output from the system depends partly on the correctness of these software-installation inputs.

There is one more responsibility that we mention but do not formalize because it is motivated by availability concerns. Users should be allowed to designate specified classes of output as "warrantable", implying that these outputs are to be marked as warranted by the system. We refer to this as the "output-marking requirement". The distinction between warrantable and other outputs allows users to make tradeoffs between resource consumption and external consistency.

A useful way of meeting the output-marking requirement is to reserve certain devices exclusively for warranted outputs. This "device-restriction" requirement may help remind users which outputs are warranted. For example, a check-writing device is less likely to be used for writing bogus checks if it handles only warranted outputs.

3.4 Meeting the Objective

The following result tells us basically what we want to know. This result is clearly sharp because its proof makes essential use of all user responsibilities and system requirements in the model.

Theorem 1 If the user input-correctness requirement and the above system requirements are met, then the external-consistency objective is satisfied for warranted outputs.

Proof We need to prove that each warranted output is correct. The output-warranty requirement guarantees that each warranted output is warrantable. This means that each warranted output has an I/O basis that is warrantable, and that each warranted output is correct if each sentence in its I/O basis is correct. Thus, to establish that each warranted output is correct, we must show that each sentence in its I/O basis is correct. We prove this by inductive reasoning.

The I/O basis for a first warranted output must be either empty or comprised only of inputs. If this I/O basis is empty, we appeal to our earlier remarks about the correctness of the vendor-supplied system. If the I/O basis consists only of inputs, we note that each sentence is either a direct observation or a user input, so that it is correct by the correct-observation requirement or the input-correctness requirement, respectively. For the induction step, we consider a given warrantable output and assume that all previous warrantable outputs are correct. The inputs in its I/O basis are correct for the reasons just mentioned. It remains to show that each output in this I/O basis is correct. Each such output is warrantable, because the I/O basis is warrantable by the output-warranty requirement. Thus, by the induction hypothesis, each output in this I/O basis is correct. Since each sentence in its I/O basis is correct, the given warranted output is correct. □

In an actual system, there will be sentences that are warrantable in the sense of having warrantable I/O bases, even though they are not warranted outputs. All such warrantable sentences are correct, as can be seen from the above proof.
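Theorem 1 can be exercised on a toy I/O history in which each output asserts the sum of the values in its basis, so that "correct" has a checkable meaning. This is a sketch under our own toy semantics, not the formalization of Appendix A.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Input:
    value: int          # all inputs correct: input-correctness holds

@dataclass
class Output:
    value: int
    basis: List["Sentence"] = field(default_factory=list)
    warranted: bool = False

Sentence = Union[Input, Output]

def correct(s: Sentence) -> bool:
    """Toy semantics: an output is correct iff it really is the sum of
    its basis values; inputs are correct by assumption."""
    if isinstance(s, Input):
        return True
    return s.value == sum(b.value for b in s.basis)

# A small I/O history obeying the output-warranty requirement (each
# warranted output is computed as the sum of its basis values); note
# that o2's basis includes a previous output, as in the bank-statement
# example:
i1, i2 = Input(3), Input(4)
o1 = Output(i1.value + i2.value, basis=[i1, i2], warranted=True)
o2 = Output(o1.value + 5, basis=[o1, Input(5)], warranted=True)

# Theorem 1, on this history: every warranted output is correct.
assert all(correct(o) for o in (o1, o2) if o.warranted)
```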

3.5 Appropriateness of the Model

Our view of output correctness is that it has two parts. The first part is the correctness of the vendor-supplied system; our understanding of it rests on an evaluation of system hardware and critical system software. The second part is the correctness of any other information needed to properly qualify the system's warranty for the correctness of an output; we refer to this other information as the basis for the output. Thus, the basis for an output must be such that if the output is incorrect, then the error is traceable either to some erroneous assertion in the output's basis or to imperfections in the development and evaluation of the vendor's system.

External consistency of an enterprise's information is seen to rest, then, on more than I/O bases — it depends also on the integrity of the system. The integrity of the automated system rests, in turn, on some fundamental assumptions and these should, perhaps, be made explicit to users by the automated system in initial dialogues with new users. For example, when a system that supports label-based confidentiality gives an output warranty for label correctness, the warranty may necessarily be conditional on the assumption that users have not exploited vulnerabilities relating to aggregation, inference, or covert channels. According to our model, this assumption cannot be included in a basis unless it is an I/O event. The automated system can accommodate the model by presenting this assumption in an opening dialogue with a new user, thereby allowing the assumption to be included implicitly in relevant future I/O bases.

The assertion that an algorithm is correct is not necessarily an input. By requiring that such an assertion be an input so that it can be included in an I/O basis for an output, we are strongly predisposing ourselves towards systems in which a program-certification mechanism is included as a basic part of the security architecture.

We have not explicitly recognized compound outputs such as DBMS reports consisting of some number of selected fields from a database. To deal with such compound objects, one can interpret the model, with respect to warranted outputs, to some appropriate level of granularity, such as to the field level for a DBMS application. Then, the system could warrant a subset of the elements of a compound output. Alternatively a version of our model might take explicit account of compound outputs, as suggested in (Motro, 1989). Motro uses the word "certified" in place of "warranted".

Our model, when interpreted to an operating system, strongly suggests the notion of an integrity trusted computing base (ITCB) as an implementation strategy to realize the model's objectives. The ITCB is that part of the system which has to be right for the vendor to meet his responsibility to implement the advertised integrity policy. To see this, consider that, in an analogic sense, the system must in essence perform deductions. The "logic system" for these deductions must be embodied in some part of the system that is "trusted" and protected from tampering — thus, the ITCB.

For the system to conclude that an output is a consequence of its I/O basis, the system, relying on its ITCB, must produce an informal proof showing that the output is a consequence of the I/O basis. The role of axioms or postulates for the underlying "logic system" is played by assertions with empty I/O bases. These are assertions whose truth is established through direct observation and processing by trusted software, in the ITCB.

Finally, we note that an output can have more than one basis. The usual basis for accepting a monthly bank statement relies on a previous output, namely the beginning balance for the month. An alternate basis would consist of all transactions on the account, together with the assertion that the bank's computer processed them with a correct algorithm, one carrying out the bank's announced policy for maintaining checking accounts. In general, there are many possible bases, depending on how many months one has to go back to find a balance one is willing to accept. But in the model an output has just one I/O basis — the one actually used in its computation.

4 Identifying Correct Inputs

We next consider the case where there is a mix of naive and experienced users, in which experienced users are able to avoid mistakes and catch errors made by others, and are responsible for doing so. We have two motives for this distinction among users. One is that some users are more likely to have correct input resulting from their roles in the enterprise. The other is that some users have more responsibility and will figure more prominently in future work in terms of accountability and their ability to perform error correction.

Central to the case at hand is the notion that the automated system will be receiving both correct and incorrect inputs. What is needed are procedures and mechanisms by which both users and the automated system can distinguish outputs based on correct inputs from outputs that may have been contaminated by incorrect inputs. We begin with some examples and observations.

An inventory might be maintained as a large database defined by an underlying database schema. The schema might rule out negative inventories and might specify maximum inventory levels for certain items. Traditionally, updates that are not consistent with database schemas are simply disallowed. Apparently, the schemas of the database administrator are assumed to be correct, and subsequent attempted updates of a contrary nature are not. Database schemas provide a means of making integrity-validation checks on user inputs that, in principle, may be arbitrarily complex.

A missile system may require launch orders from two different operators before going into action. A bank may require tellers to get approval from bank officers for certain classes of transactions. A brokerage may require senior approval for large stock transactions. In these examples, an input is considered correct when it is appropriately corroborated by two different qualified users. An input that is endorsed by other qualified users is corroborated. In some situations, only corroborated inputs might be regarded as correct.
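The corroboration rule in these examples can be sketched as a simple check over an endorsement log. The user names, log format, and two-endorser threshold below are illustrative assumptions, not part of the model:

```python
def is_corroborated(sentence, endorsements, required=2):
    """An input is corroborated when at least `required` distinct
    qualified users have endorsed the same sentence."""
    endorsers = {user for user, s in endorsements if s == sentence}
    return len(endorsers) >= required

# Hypothetical endorsement log of (user, sentence) pairs.
log = [("operator1", "launch"), ("operator2", "launch"), ("operator1", "abort")]
print(is_corroborated("launch", log))   # True: two distinct endorsers
print(is_corroborated("abort", log))    # False: only one endorser
```

Counting distinct endorsers (a set, not a list) matters: the same operator repeating an input must not count as corroboration.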

4.1 External System Interface

To begin with, we require that some "experienced" users tell the automated system which kinds of inputs they certify as being correct, in contrast to other kinds of inputs that might not be correct because, for example, they are submitted by "inexperienced" users. The automated system must then internally keep track of data derived from or based on certifiably correct inputs so that it can tell which of its outputs it can warrant. For the system to do this, there must be a way for experienced users to identify certifiably correct inputs to the system.

We call the entering of an input to the automated system an input event. Descriptions of correct inputs can be based on attributes associated with input events, called input-event attributes. Examples of such attributes are type of data, the input device or communications path used, whether the input passes specified integrity-validation checks, and, for user inputs, user identity, user role, and explicit evidence of correctness provided by the user. Requiring a user input to be corroborated by previous inputs with specified input attributes is a kind of integrity-validation check whose success or failure may be recorded in an input-event attribute. For an input from another system, an input-event attribute might indicate that the input is a warranted output from that system. The attributes associated with an input event are collectively referred to as its input-event profile. Those input-event profiles which are believed by experienced "administrative" users to ensure correctness of input are correctness-ensuring profiles.
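A minimal sketch of input events and profiles, assuming a Python frozenset of attribute pairs stands in for an input-event profile and a hard-coded set stands in for the administrators' declared correctness-ensuring profiles (all names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class InputEvent:
    sentence: str                 # the input sentence itself
    attributes: frozenset = field(default_factory=frozenset)

    def profile(self) -> frozenset:
        """The input-event profile: the event's attributes, collectively."""
        return self.attributes

# Profiles that "administrative" users have declared correctness ensuring.
CORRECTNESS_ENSURING = {
    frozenset({("role", "teller"), ("corroborated", True)}),
    frozenset({("role", "officer")}),
}

def is_certified(event: InputEvent) -> bool:
    """Simplification: an input counts as certified when its profile is
    one of the declared correctness-ensuring profiles."""
    return event.profile() in CORRECTNESS_ENSURING

e = InputEvent("transfer $500",
               frozenset({("role", "teller"), ("corroborated", True)}))
print(is_certified(e))   # True
```

In the model proper, certification requires that the claim "this profile is correctness ensuring" be warrantable; the set-membership test above collapses that into a lookup for illustration.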

For reasons of both external consistency and availability, the system needs to be informed as to which input-event profiles are correctness ensuring. Given this information, it can warrant outputs that are presumably based on correct inputs and are thus also correct. It can also refuse to use inputs when their input-event profiles are not correctness ensuring and their use would produce nonuseful output, thereby heading off denial-of-service attacks.

The system-defined language for specifying input-event profiles may provide for the specification of user-supplied or built-in integrity validators to catch incorrect inputs. Integrity validators are conceptually similar to those transformation procedures of Clark and Wilson that are used to transform inputs into constrained data items. In the case of a database system, for example, integrity validators might ensure referential integrity (that is, ensure that all referenced objects exist) and thereby avoid ill-formed input, because sentences that do not make sense cannot be correct.

As explained more carefully below, the system accepts an input as being certifiably correct if it can warrantably claim that its input-event profile is correctness ensuring. This normally involves running integrity-validation checks as part of determining the input's input-event profile. That is, one or more of the attributes of a correctness-ensuring profile may result from integrity validators that have been applied to the input.
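One way to picture integrity validators feeding an input-event profile is as predicates whose pass/fail results become profile attributes. The referential-integrity check and the record format below are toy assumptions:

```python
# Hypothetical set of existing object identifiers for the toy check.
EXISTING_ACCOUNTS = {"acct-1", "acct-2"}

def referential_integrity(record) -> bool:
    """Ensure all referenced objects exist, so ill-formed input is caught."""
    return record["account"] in EXISTING_ACCOUNTS

def profile_of(record, validators) -> frozenset:
    """Run each validator; record a ("passed", name) attribute on success."""
    return frozenset(("passed", v.__name__) for v in validators if v(record))

rec = {"account": "acct-1", "amount": 50}
print(profile_of(rec, [referential_integrity]))
```

An input referencing a nonexistent account would yield an empty profile here, and so could never match a correctness-ensuring profile that demands the validation attribute.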

In the Clark-Wilson model, this assessment of input-event profiles and the associated checking of input by integrity validators would be done by transformation procedures responsible for gathering user input. The resulting certifiably correct inputs would then be stored in CDIs. In Biba's integrity models (Biba, 1977), inputs with correctness-ensuring profiles might be labeled with mandatory integrity values that encoded their input-event profiles and the associated integrity checks they had passed.

The well-formed transaction requirement in the Clark-Wilson model is needed partly to avoid ill-formed sentences. When our model is interpreted to an operating system, the certification of TPs is actively modeled in terms of certifiably correct user input, and the well-formed transaction requirement becomes a key tenet of correctness-ensuring profiles for application software.

4.2 Automated-System Requirements

The system must be able to catch inputs that fail to pass examination by certified integrity validators. It must also be able to associate input-event profiles with inputs that are not rejected by these integrity validators. Correct association may rely on previous certified inputs. For example, the input-event profile of a corroborated input will necessitate identification of previous corroborating inputs by one or more other users. Finally, for these requirements to serve their intended purpose, the system must be able to tell users which outputs are based on certified inputs, because the correctness of other outputs cannot be assumed.

Output-Warranty Requirement (Extended)

The system's vendor shall guarantee that every output marked as warranted is warrantable in the following sense:

(a) it has a credible I/O basis, and

(b) it is correct provided each sentence in its I/O basis is correct.

An I/O basis is credible if, and only if, its user inputs are certified and its outputs are warrantable — that is, it is a warrantable I/O basis whose user inputs are certified.

A user input is certified if, and only if, the assertion that it has a correctness-ensuring profile is warrantable.[2]
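The recursion in the definition of a credible basis can be sketched directly. The Input/Output classes and the bank-statement example are illustrative, and certification is simply taken as a boolean here rather than derived from a profile:

```python
class Input:
    def __init__(self, sentence, certified):
        self.sentence = sentence
        self.certified = certified   # taken as given in this sketch

class Output:
    def __init__(self, sentence, io_basis):
        self.sentence = sentence
        self.io_basis = io_basis     # the inputs and outputs actually used

def has_credible_basis(out) -> bool:
    """Credible iff every user input in the basis is certified and every
    output in the basis is, recursively, warrantable."""
    for s in out.io_basis:
        if isinstance(s, Input):
            if not s.certified:
                return False
        elif not has_credible_basis(s):
            return False
    return True

# The monthly-statement example: the closing balance rests on the
# opening balance, which in turn rests on a certified deposit input.
balance = Output("opening balance is $100", [Input("deposit $100", True)])
stmt = Output("closing balance is $100", [balance])
print(has_credible_basis(stmt))   # True
```

The recursion terminates because each output's basis precedes it in time, mirroring the inductive structure used in the proof of Theorem 2.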


Integrity validators are normally required to be certified for availability reasons. If, for example, an integrity validator is too restrictive, it may prevent legitimate inputs from being used, thereby causing loss of availability. Worse, it might place an inappropriate bias on inputs. Consider a neural-network program being used as an integrity validator to screen loan applications. If the program's reasons for rejecting a loan application are unknown, they may be illegal as well. For example, the program might be keying on surnames of a certain ethnicity as a basis for rejection. In this example, the certification of the integrity validator must be based, in part, on its training data and the fact that the data are not racially biased. There is nothing in this model, however, to prevent a certified integrity validator from basing its decision partly on uncertified data.

In practice, it may be advisable for the system to inform users about certified inputs. Using warrantable user prompts, it may be beneficial for the system to tell a user providing an input that the input is supposed to be certifiable and whether it has successfully been certified by the system, through its associated correctness-ensuring profile. This could help users to avoid making false assumptions about the inputs they provide.

The system may perform predefined integrity-validation checks on administrative inputs for availability reasons. For example, it might ensure that the system cannot get into a state where there are no more administrative users left to provide administrative inputs.

4.3 User Responsibilities

Explicit indications of correctness allow us to relax the user's input-correctness requirement of section 3 by applying it only to certain inputs, as the following user requirement and proposition illustrate. In exchange, however, some new availability requirements arise.

Input-Correctness Requirement (Revised)

All certified inputs to the system must be correct.


Certified inputs used to explain which profiles are correctness-ensuring are examples of "administrative"[3] inputs. Users capable of providing certified administrative inputs are referred to as "administrative" users. Notice that administrative inputs do not necessarily have to be certified. An administrator can supply an uncertified input that defines a correctness-ensuring profile in terms of a user identity, user role, and other attributes. This profile might become certified when a second administrator corroborates its correctness. Alternatively, there might be a user-accepts-responsibility attribute required for certification, so that the profile becomes certified when the profile's user asserts his willingness to accept the responsibilities associated with its role.

The profile-administration role is not explicitly modeled because the only integrity requirement on this role is that certified inputs used to define correctness-ensuring profiles must actually be correct, and the above requirement already captures this. However, there are obvious availability requirements for correct, certified profile-defining inputs. If there are several profile administrators, then there must be some convention for sharing profile administration.

The specification of correctness-ensuring profiles involves some subtleties imposed by the fact that all propositions in the model must be stable. The model distinguishes among a profile's being (a) always correctness ensuring, (b) correctness ensuring until a specified time, and (c) correctness ensuring until a specified invalidating input event. We think these distinctions will become less critical after the addition of a retraction mechanism, which we are currently working on.

The output-marking user requirement mentioned in Section 3 required users to explain which kinds of outputs were warrantable. In this model, the system can determine which outputs are warrantable, based partly on administrative inputs about correctness-ensuring profiles. Thus, the output-marking requirement is now subsumed by the new user requirement and the system requirement of determining which outputs are warrantable.

4.4 Achieving the Objective

The following Theorem 2 is sharp in that all responsibilities and requirements of the model are needed in order to get the proof. Theorem 3 shows that the present model is a nontrivial improvement over the model of section 3 in that it really does allow users to make mistakes. Finally, Theorem 4 points out that the input-correctness requirement leans implicitly on a vendor's trusted-delivery responsibility.

Theorem 2 If the revised input-correctness requirement and the extended output-warranty and correct-observation[4] requirements are met, then the external-consistency objective[5] is achieved for warranted outputs; moreover, every warrantable sentence is correct.

Proof Since warranted outputs are warrantable, by the output-warranty requirement, it suffices to prove that all warrantable sentences are correct. By the output-warranty requirement, a warrantable output is correct provided each sentence in its I/O basis is correct, so that it suffices to show each sentence in the I/O basis of a warrantable sentence is correct. Let s be any sentence of the basis. If s is an output, then, by the output-warranty requirement, it is warrantable and, thus, correct by induction. If s is an input, it is either a direct observation or was a sentence provided by a user. If s is a direct observation, then, by the correct-observation requirement, it is correct. If s was provided by a user, then, by the output-warranty requirement, the assertion that s has a correctness-ensuring profile is warrantable. Thus, s is a certified input, and, therefore, is correct by the input-correctness requirement. □

Theorem 3 The input-correctness requirement allows users to make the following kinds of mistakes:

• Any user can provide erroneous inputs with profiles that are not correctness ensuring.

• Any user can make mistakes that are caught by profile-specified integrity validators.

• If all correctness-ensuring profiles require corroboration of their associated inputs and if only one user provides erroneous input, then that user can make an unlimited number of errors.

Proof The input-correctness requirement is that all certified inputs be correct.

• Any certified input has a correctness-ensuring profile, by the definition of certified input, so that inputs that do not have correctness-ensuring profiles are not certified and, therefore, are not covered by the input-correctness requirement.

• A profile-specified integrity validator is one that the automated system is to consider relevant in determining whether an input has a correctness-ensuring profile. Thus, if an error is caught by a profile-specified integrity validator, the associated input will not have a correctness-ensuring profile; therefore, as already shown, it is not covered by the input-correctness requirement.

• Finally, if all correctness-ensuring profiles require corroboration of their associated inputs, then all certified inputs require corroboration, by the definition of certified input. In this case, if only one user makes an error, his erroneous input cannot corroborate a previous incorrect input and thus cannot have a correctness-ensuring profile, so that it is not certified and need not be correct. □

A profile-administration input is a certified user input used by the system to conclude that an input-event profile is correctness-ensuring. When the automated system accepts an input as certified, and the input is not a direct observation, the system can warrantably claim that the certified input has a correctness-ensuring profile. Normally, we expect this claim to remain an implicit assertion of the system. The profile-administration input for the correctness-ensuring profile is in the I/O basis for this implicit assertion. In any system that will allow profile-administration inputs there will be a first such input. This input itself has to have a correctness-ensuring profile. The I/O basis for the implicit assertion that this input has a correctness-ensuring profile is either empty or consists solely of direct observations. In either case, the criteria for accepting this input as certified are hard-wired into the system. We have just shown the following:

Theorem 4 Any system that allows profile-administration inputs allows at least some of them on the basis of predefined criteria.

The input-correctness requirement applies, in particular, to these inputs whose certification criteria are hardwired. Users cannot be sure of meeting such a requirement without a trusted-delivery mechanism that prevents unauthorized access to the system.

5 Summary and Conclusions

Clark and Wilson’s notions of certified transformation procedures and integrity verification procedures broke new ground in computer security by illuminating an area of integrity in automated processing that had formerly been seen only dimly because of the spotlight that had been put on internal access controls. What we have done so far highlights the importance of Clark and Wilson’s new view of a computing enterprise as consisting of more than automated internal access controls. Starting from the general notion that an information processing enterprise depends for its integrity on people and application programs as much as it does on automated controls, we have delineated a model that can accommodate various user responsibilities, including correct inputs, endorsement, and corroboration. In developing the model we have identified a new, basic automated system requirement—the output warranty. These fundamental ideas, expressed in a general, formalized framework, can provide a foundation for the design and development of automated systems that can effectively contribute to the overall integrity of an enterprise.

At the outset in developing the ideas in this paper, we had in mind to develop a formal model. We began, naturally enough, with an informal model describing user responsibilities and automated system requirements. This was subjected to several rounds of intense review, scrutiny, and refinement. When we had a model that was at least understandable to us, we then attempted the formal model, with good results, we believe, as well as some surprises.

Putting the objectives, responsibilities, and requirements into a form that could enable proofs of assertions had a strong, immediate impact. Having stated a plausible conjecture, we found that some system requirements were user responsibilities in disguise. For example, users must usually judge which input-event profiles are correctness-ensuring (some crucial exceptions were noted in Theorem 4). We also discovered that some user responsibilities could easily be replaced with corresponding system requirements, thereby making the system more user-friendly and robust. For example, the system is best able to determine which outputs are warrantable.

Several iterations in which we compared various versions of the models led to simplification and strengthening of requirements. Now the main system requirement in each model is essentially that each warranted output be warrantable, with a stronger definition of warrantability in the second model. Critical distinctions were clarified, including the distinction between user inputs and direct observations.

The theorem-proving effort held additional surprises. Initially, the propositions were easy to prove, but they were not sharp. Requirements that seemed to be needed on the basis of experience did not show up in the proofs. Further consideration showed that

• Some requirements were needed for availability reasons — requiring integrity validators to be certified is an example.

• Some requirements could be implicit — the constraints on when administrative duties are performed are an example.

• Some requirements became necessary when the formalization became less simplistic — the correct-observation requirement is an example.

Considerations of sharpness thus had a strong impact on the modeling effort.

Before the work reported in this paper, there were few, if any, formal models of how a security objective decomposes into user responsibilities and system requirements. We believe that our modeling shows how this can be done and provides a basis for designing automated systems to support enterprise integrity. By staying at the system interface, we have allowed flexibility in system design. For example, support for well-formed transactions is essential in any implementation, but our model avoids saying how this might be achieved. Support could be achieved entirely by software certification. Alternatively, we allow the possibility that some support for well-formed transactions is built directly into the functioning of the operating system, so that it need not complicate the input profiles for application software.

Acknowledgments

The authors wish to thank Terry Mayfield for many valuable suggestions. A preliminary version of this work appeared in (Williams, 1993). The current paper is a new presentation of the material with significant refinements and extensions of the content; we are grateful for the opportunity to have "tested" the early version of the work at the 1993 Computer Security Foundations Workshop.

References

Bell, D. E., October 1991, "Putting Policy Commonalities to Work," Proceedings of the 14th National Computer Security Conference, pp 456-471.

Biba, K. J., April 1977, "Integrity Considerations for Secure Computing Systems," ESD-TR-76-372, NTIS# AD-A039324, Electronic Systems Division, Air Force Systems Command.

Clark, David D., and D. R. Wilson, April 1987, "A Comparison of Commercial and Military Computer Security Policies," Proceedings of the 1987 Symposium on Security and Privacy, IEEE.

__________, January 1989, "Evolution of a Model for Computer Integrity," in Report of the Invitational Workshop on Data Integrity, Special Publication 500-168, pp. A.2.1-A.2.13, NIST.

LaPadula, Leonard J., and James G. Williams, June 1991, "Toward a Universal Integrity Model," Proceedings of the Computer Security Foundations Workshop IV, IEEE.

Merriam-Webster, 1991, Webster's Ninth New Collegiate Dictionary.

Motro, A., December 1989, "Integrity = Validity + Completeness," ACM Transactions on Database Systems, Vol. 14, No. 4, pp. 480-502.

National Computer Security Center, December 1985, Department of Defense Trusted Computer System Evaluation Criteria, DOD 5200.28-STD.

Williams, James G., and Leonard J. LaPadula, June 1993, "Automated Support for External Consistency," Proceedings of the Computer Security Foundations Workshop VI, IEEE.

Woodcock, Jim, and Martin Loomes, 1988, Software Engineering Mathematics, Addison-Wesley.

Appendix A: Formal Theorems and Proofs

Section A.1 provides background notation for discussing events and the semantics of languages. Sections A.2, A.3, and A.4 formalize material contained in Sections 2, 3, and 4, respectively, and are completely parallel to the main body of this paper.

A.1 Introduction

In this subsection, we introduce notions that are shared between users and the computing system but are independent of the system design. Throughout this appendix, we use “iff” as an abbreviation for “if, and only if.”

A.1.1 Notation For Time

For any function f, domain(f) is the domain of f; we write f|X to mean the restriction of f to values in the set X. Thus, domain(f|X) is a subset of X, and (f|X)(y) = f(y), provided y is in X.

We shall model time as the set of all real numbers. That is, a particular time is a real number. The variables i, j, k will vary over all possible times. A history is a finite partial function from time to arbitrary values. That is, its domain is a finite set of real numbers. The function last‑time(h) identifies the largest element of domain(h). For any history h, we define h before j to be h | {k : k < j}. For any histories h and h', we say h' extends h iff h = h' before j, for some j > last-time(h). In general, we shall follow the convention that whenever a function contains a “current” history argument, it is the last argument.
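Under the assumption that a Python dict keyed by (real-number) times can stand in for a finite partial function, the history operations read almost verbatim; extends below follows the initial-segment reading of the definition:

```python
def last_time(h):
    """last-time(h): the largest element of domain(h)."""
    return max(h)

def before(h, j):
    """h before j = h | {k : k < j}."""
    return {k: v for k, v in h.items() if k < j}

def extends(h2, h1):
    """h2 extends h1: h1 agrees with h2 on all times up to last-time(h1)."""
    return h1 == {k: v for k, v in h2.items() if k <= last_time(h1)}

# Hypothetical I/O histories: times mapped to event labels.
h1 = {1.0: "login", 2.0: "query"}
h2 = {1.0: "login", 2.0: "query", 3.5: "logout"}
print(before(h2, 3.5) == h1, extends(h2, h1))   # True True
```

Since histories are finite, "h2 extends h1" is equivalent to h1 being an initial segment of h2, which is the property the precedes relation on environments preserves in Section A.2.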

A.1.2 Notation for Language

Our model of external consistency rests partly on a simple theory of language. In this theory, the words symbol, proposition, declarative, imperative, interrogative, and entity are treated as undefined primitives. Informally, a symbol is a word or other graphic symbol used to build terms, phrases, propositions, and other syntactic objects. In the model, propositions will be combined with the three form markers, declarative, imperative, and interrogative, to build assertions, requests, and questions as follows: A sentence consists of a proposition and a form marker. An assertion is a sentence that contains the declarative marker; a question contains the interrogative marker; and a request contains either the imperative or the interrogative marker. Treating a question as a request for information allows us to simplify our investigation by explicitly considering only two kinds of sentences.

By an entity, we mean any conceptual or real‑world entity. We refer to conceptual entities as values. Real‑world entities include temporal entities that change over time (e.g., a clock, a computer), as well as activities that embody change over time (e.g., flight of a bumble bee, I/O history of a computer).

By a semantic environment (or more simply, just an environment) we mean any partial function from symbols to entities. A symbol fails to belong to the domain of a semantic environment iff it has no meaning in that environment. Variables are examples of symbols that are not ordinarily assigned meanings by semantic environments, as are nonvariables for which no meaning has yet been assigned. A question or request may contain variables or similar undefined symbols that must be assigned values in the process of answering the question or fulfilling the request. For example, in "The basket contains how many eggs?" the term "how many" has no preassigned value and is much like a mathematical variable in need of instantiation.

For any semantic environment E, symbol s, and entity x, we define E with [s => x] to be the semantic environment obtained from E by assigning the value x to the symbol s. That is, E with [s => x] is the semantic environment F such that F(s) = x, and F(s') = E(s'), if s' ≠ s. Notice that s belongs to the domain of E with [s => x] whether or not s belonged to the domain of E. As indicated below, the with operator can be used to explain quantification.

A satisfaction relation is a binary relation between semantic environments and propositions. Its purpose is to explain which (variable‑free) propositions are true in a given environment. Satisfaction relations are written in an infix notation. For a given satisfaction relation, satisfies, the intended meaning of E satisfies f is that f is true in the semantic environment E. We will sometimes apply satisfaction relations to sentences, intending application to the proposition contained in the sentence. Questions containing variables are answered by assigning values which allow the question to be satisfied. For example, if E is an environment in which there is a basket containing 3 eggs, then

E with ["how many" => 3] satisfies "the basket contains how many eggs?"

This example may at first seem strange because both the extended environment and the proposition belong not to English but to an extension invented to explicate the semantics of questions.
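A sketch of semantic environments as dicts, with the with [s => x] operator and a toy satisfaction relation that handles only the egg-basket question (the dict keys are illustrative assumptions):

```python
def env_with(E, s, x):
    """E with [s => x]: assign x to s, leaving E unchanged elsewhere."""
    F = dict(E)
    F[s] = x
    return F

def satisfies(E, proposition):
    """Toy satisfaction relation handling only the egg-basket question."""
    if proposition == "the basket contains how many eggs?":
        return E.get("how many") == E.get("eggs in basket")
    return False

E = {"eggs in basket": 3}
print(satisfies(env_with(E, "how many", 3),
                "the basket contains how many eggs?"))   # True
```

Note that env_with builds a new environment rather than mutating E, matching the model's treatment of E with [s => x] as a distinct semantic environment.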

Finally, a language consists of the following:

a set S of symbols

a set ENV of anticipated semantic environments that give meanings to some of the symbols in S

a precedes relation on ENV such that E precedes F iff F is a possible future environment of E

a set P of propositions

a satisfaction relation, satisfies, that shows for any proposition f in P and semantic environment E in ENV, whether f is true in E

a deduction mechanism, |-, such that if |- f, then E satisfies f, for every anticipated environment E. (This condition is referred to as soundness)

The intent here is that ENV shows how people are willing to use symbols. Different semantic environments in ENV represent potentially different real‑world situations. A given semantic environment reflects past, present, and future circumstances. The precedes relation is constrained both by how people allow the meanings of symbols to change and by how reality itself can evolve. The satisfaction relation shows how people determine correctness of propositions; a proposition is correct iff it is true in semantic environments that reflect the currently evolving real‑world situation.

Any language can be extended to include quantifiers. The sentence “exists s, s² - 8 = 0,” for example, can be interpreted, provided equations can. If E is an environment in ENV, we define this use of the existential quantifier as follows:

E satisfies “exists s, s² - 8 = 0” iff
E with [s => x] satisfies “s² - 8 = 0”, for some value of x.

If the language has a type mechanism, we can also introduce bounded quantifiers. For example, if E assigns a meaning to the symbol “integer” (e.g., E(“integer”) is the set of integers), then

E satisfies “exists s: integer, s² - 8 = 0” iff
E with [s => x] satisfies “s² - 8 = 0”, for some x in E(“integer”)
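The bounded quantifier can be evaluated by trying each value in E("integer"). Note that over the integers the example sentence "exists s: integer, s² - 8 = 0" is actually false (its real solutions are ±2√2), so the sketch reports False for it and True for a solvable variant; env_with and satisfies are illustrative stand-ins for the model's notions:

```python
def env_with(E, s, x):
    """E with [s => x]: extend the environment with a binding for s."""
    F = dict(E)
    F[s] = x
    return F

def satisfies(E, prop):
    """Toy relation interpreting two variable-containing equations."""
    if prop == "s2 - 8 = 0":
        return E["s"] ** 2 - 8 == 0
    if prop == "s2 - 4 = 0":
        return E["s"] ** 2 - 4 == 0
    return False

def exists_bounded(E, s, type_symbol, body):
    """E satisfies 'exists s: T, body' iff some x in E(T) satisfies body."""
    return any(satisfies(env_with(E, s, x), body) for x in E[type_symbol])

E = {"integer": range(-10, 11)}
print(exists_bounded(E, "s", "integer", "s2 - 8 = 0"))   # False
print(exists_bounded(E, "s", "integer", "s2 - 4 = 0"))   # True: s = ±2
```

The bound E("integer") is what makes the quantifier decidable here; the unbounded form of the definition quantifies over arbitrary values and cannot in general be evaluated by enumeration.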

A.2 External Consistency

We describe the external system interface by giving a model of system behavior relative to a vendor‑supplied, user‑extensible description language, and then we describe this language.

A.2.1 External System Interface

We model the external system interface by modeling the system and the user description language by which it communicates with users. The system itself is modeled using histories of I/O events.

An input event is a direct observation that exactly describes an input sentence and relevant circumstances under which the input occurred, e.g., the physical device used and the value of the system clock when that input occurred.

An output event is a direct observation (made by an ideal observer of the system's behavior) to the effect that a sentence was output under given circumstances, including, possibly, whether the output was accompanied by a "credibility marker" providing information about the credibility of the output.

An I/O event is an input event or an output event. An input history, output history, or I/O history is a finite partial function from time to input events, output events, or I/O events, respectively. For simplicity of modeling, we assume that each I/O event e records the actual time of the I/O event, and that no two I/O events occur at exactly the same time. These assumptions imply that, for any I/O history h and j in domain(h), j is recoverable from h(j). (Without these assumptions, we would often be obliged to use the pair (j, h(j)) where we now write simply h(j); other aspects of the model would remain unchanged.)

Finally, an I/O system is a set of I/O histories. The I/O system for a vendor's computer consists of all of its possible I/O histories.

User description languages are defined jointly by the vendor and the system's users. We assume that a user description language has an implication symbol, →, such that if y is a proposition and F is a set of propositions, then F → y is a proposition such that for any anticipated environment U, U satisfies (F → y) iff either U satisfies y or U fails to satisfy some f in F. Formally, → is a proposition‑valued constructor. We also require that the language have a negation symbol; more accurately, we assume that there is a function, ¬, from propositions to propositions such that, for any anticipated environment U, if U satisfies ¬f, then U fails to satisfy f.

We require that the description language have conventions for discussing the behavior of the system. Specifically, we require that the domain of each anticipated semantic environment U for a description language contain a distinguished history symbol, hist, where U(hist) is the actual I/O history of the vendor‑supplied system. Thus, the I/O system for the vendor's computer is precisely the set of all histories of the form U(hist), where U varies over anticipated semantic environments. If f is a proposition about the system's actual I/O history at a particular moment, and U is the actual semantic environment at that moment, then U satisfies f iff f is a correct statement about the system’s actual I/O history, U(hist).

We define an anticipated I/O history to be one of the form U(hist), for some anticipated environment U. We let the variable h vary over arbitrary I/O histories and let the symbols hU, hV abbreviate the terms U(hist), V(hist), respectively. We require that if U precedes V, then hV extends hU.

We have assumed that input events are direct observations by the system and that output events are direct observations by an ideal observer. With each such I/O event e, there is an intended portion, intent(e), obtained by discarding auxiliary aspects of e, such as when the event occurred and whether it is an input or an output. For example, consider the direct observation “someone authenticated as user abc entered the string ‘birds fly’ from terminal tty0 when the system clock read 1 pm, September 13, 1994.” The intended portion of this direct observation is just the quoted assertion, “birds fly.” This distinction between direct observations and the intent behind whatever caused them was glossed over in the main body of the paper, but is crucial to a precise, formal model.

If e is a user input or system output, define is‑assertion(e) to mean that intent(e) contains the declarative form marker, and define is‑request(e) to mean that intent(e) contains either the imperative or the interrogative form marker. We define prop(e) to be the proposition in the sentence intent(e).

We require that the user description language define, for each I/O event e such that is‑request(e), a predicate is‑legitimate(e). Informally, the meaning of is‑legitimate is conveyed to the system by assertions telling the system which kinds of requests are legitimate. Formally, is‑legitimate is just a partial mapping from requests to propositions. The semantics of a user description language must be such that a request made at time i is legitimate in a given user environment U iff U satisfies the proposition of the form is‑legitimate(hU(i)). Notice that is‑legitimate quotes its argument in the sense that the truth of is‑legitimate(e) depends on the actual I/O event e, not just on the meaning of user‑sentence(e) in the environment U.

A proposition f is stable for U iff V satisfies f whenever U precedes V. In the formalization we consider stability for particular environments to allow for the fact that real languages evolve over time. Clearly, if f is stable for U, then f is stable for any later environment V.

The automated system is able to affirm stability for U by proving theorems of the form

|- ((hist extends h) ⇒ f), where h is known to be a subhistory of hU

This allows for the possibility of language extensibility in which h specifies new terminology that occurs in f.

A proposition is stable iff it is stable for every anticipated environment. (This doesn’t quite imply that it has the same interpretation in every environment because it might have different interpretations in different initial environments.)
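Stability can be sketched the same way (my simplification: "precedes" is modeled as position in a list of anticipated environments ordered by precedence):

```python
from typing import Callable, Dict, List

Env = Dict[str, object]
Prop = Callable[[Env], bool]

def stable_for(f: Prop, chain: List[Env], i: int) -> bool:
    """f is stable for chain[i]: every environment that chain[i]
    precedes (here, every element from position i on) satisfies f."""
    return all(f(V) for V in chain[i:])

def stable(f: Prop, chain: List[Env]) -> bool:
    """Stable simpliciter: stable for every anticipated environment."""
    return stable_for(f, chain, 0)
```

Since chain[j:] is contained in chain[i:] whenever j >= i, stability for U immediately yields stability for any later V, matching the remark above.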

A.2.2 The External‑Consistency Objective

We define is‑correct(e) to be prop(e) if e contains an assertion, or is‑legitimate(e) if e contains a question or other request. The external‑consistency objective for a particular output e in U(hist) can be stated simply as follows:

external‑consistency(e, U) iff V satisfies is‑correct(e) whenever U precedes V.

In this form, external consistency implies that is‑correct(e) is stable for U, but not that e is or, in the case of a request, that content(e) is. These distinctions were not apparent in the main body of the text.

A.3 Warranties on Correctness of System Output

A.3.1 Warranted Outputs and Bases

For any output event o, let is‑warranted(o) indicate whether o contains a credibility marker indicating that it is a warranted output.

We assume, for simplicity, that the direct observations in I/O bases are stored in a format that allows the system to distinguish whether a given observation is an output or an input and, in the latter case, whether the input is user supplied. We assume that these distinctions can be tested via the functions is-output(e) and has-user-input(e), respectively. (A more accurate but awkward formalization would explicitly tag sentences in an I/O basis with “source” markers indicating whether they are inputs or outputs and whether they are user-contributed.)

In what follows, we let w‑basis(j, h) be the actual I/O basis used to compute and justify the warranted output h(j), assuming is-warranted(h(j)). In defining w-basis in this way, we have implicitly assumed that the system is deterministic.

A.3.2 Automated System Requirements

In formalizing the output-warranty requirement, it is convenient to talk not simply about whether an output is warrantable, but about whether a given I/O basis warrants it in the following sense:

warrants(B, f, h) iff B is a warrantable I/O basis on h and |- (B+ ⇒ is‑correct(f)), where B+ = B union {is-correct(e) | e in B and intent(e) is defined}[6].

B is a warrantable basis on h iff B is an I/O basis, and whenever h(j) is an output in B, warrants(w‑basis(j, h), h(j), h).

This definition of a warrantable basis makes sense because we have assumed that j is recoverable from h(j), as is the fact that h(j) is an output. By using B+ instead of B, we have explicitly assumed that a direct observation in the I/O basis may be used to justify an output either as is or through the inferred intent of users and the system. In effect, we have hardwired the following axiom schema, where e is any user input or previous warranted system output:

|- (e ⇒ is‑correct(e)).

In this form, it is easy to see that the correctness of each user input must be stable, at least for the current and future environments.
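The warrants definition can be sketched as follows (toy names of mine; the deduction mechanism |- and the warrantable-basis check are left abstract in the model, so both are passed in as callbacks):

```python
from typing import Callable, Optional, Set

Event = str  # stand-in: an event is identified by its recorded sentence

def b_plus(B: Set[Event], intent: Callable[[Event], Optional[str]]) -> Set[str]:
    """B+ = B union {is-correct(e) | e in B and intent(e) is defined}."""
    extra = {f"is-correct({e})" for e in B if intent(e) is not None}
    return set(B) | extra

def warrants(B: Set[Event], f: Event,
             proves: Callable[[Set[str], str], bool],
             warrantable: Callable[[Set[Event]], bool],
             intent: Callable[[Event], Optional[str]]) -> bool:
    """warrants(B, f, h): B is a warrantable I/O basis on h and
    |- (B+ => is-correct(f)). The history h is implicit here, folded
    into the warrantable and proves callbacks."""
    return warrantable(B) and proves(b_plus(B, intent), f"is-correct({f})")
```

Using B+ rather than B is exactly the hardwired axiom schema above: each element of the basis may be used either as is or through the inferred intent of whoever produced it.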

The output-warranty and correct-observation requirements are now defined as follows:

output‑warranty(j, h) iff

if h(j) is an output event, and is‑warranted(h(j)),

then warrants(w‑basis(j, h), h(j), h).

correct‑observation(i, h) iff

i in domain(h) implies |- ((hist extends h) ⇒ h(i)).

The above correct-observation requirement really does capture the fact that each I/O event is a true proposition that is stable for the current environment: if correct-observation(i, hU), and U precedes V, then V satisfies (hist extends hU) by assumption, and V satisfies ((hist extends hU) ⇒ hU(i)) by soundness of the deduction mechanism, so that V satisfies hU(i). This form of the requirement is at least superficially stronger than what was presented in the main body of the paper because an I/O event not only has to be true and stable, but the system must be able to understand this fact, which is why we used |- in preference to the satisfaction relation.

A.3.3 User Responsibilities

For any user input of the form hU(i), define

input‑correctness(i, U) iff V satisfies is‑correct(hU(i)), whenever U precedes V

In contrast to the system requirements, the input-correctness responsibility relies directly on the actual semantic environment, most of which is probably not comprehensible to the automated system. This input-correctness requirement is really just the external consistency objective applied to user inputs.

A.3.4 Meeting the Objective

Theorem 1

If output‑warranty(j, hU), for each output hU(j),

correct-observation(i, hU), for each input hU(i), and

input-correctness(i, U), for each i such that hU(i) contains a user input,

then external‑consistency(hU(j), U), for each output hU(j).

Proof We use strong induction on I/O histories. Suppose hU(j) is a warranted output event. Then w‑basis(j, hU) is a warrantable basis. Each direct observation in w‑basis(j, hU) is true by the correct-observation requirement. If hU(i) is an output in w‑basis(j, hU), then hU(i) is warrantable and thus correct by induction. Each user input in w‑basis(j, hU) is also correct by input‑correctness. Consequently, hU(j) is correct by the output-warranty requirement and the definition of warrants. □

A.4 Identifying Correct Inputs

A.4.1 External System Interface

We let has-correctness-ensuring-profile(i, h) be the assertion that the input h(i) has a correctness-ensuring profile. In other words, U satisfies has-correctness-ensuring-profile(i, hU) iff hU contains the specification of an input profile p for hU(i), and moreover, hU specifies that p is correctness ensuring. Validation of this assertion normally entails execution of any integrity validators specified in the associated profile p.

By assumption, some inputs warrantably have correctness-ensuring profiles. If h(i) is such an input, we let c‑basis(i, h) be the I/O basis for the assertion,

has-correctness-ensuring-profile(i, h).

A.4.2 Automated-System Requirements

x-warrants[7](B, f, h) iff B is a credible I/O basis on h and |- (B+ ⇒ is‑correct(f)),

where B+ = B union {is-correct(e) | e in B and intent(e) is defined}, as before.

B is a credible basis on h iff B is an I/O basis,

whenever h(j) is an output in B, x-warrants(w‑basis(j, h), h(j), h), and

whenever h(i) is an element of B that contains a user input, is‑certified(i, h).

is‑certified(i, h) iff x-warrants(c‑basis(i, h), has-correctness-ensuring-profile(i, h), h)
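The mutual recursion among credible bases, x-warrants, and is-certified can be sketched on a toy history (all names and events below are hypothetical; the deduction step |- is stubbed out, and the recursion bottoms out at inputs whose c-basis is empty, i.e. inputs certified a priori):

```python
from typing import Dict, Set

# Hypothetical toy data: 'out:*' events are outputs, 'in:*' events are
# user inputs; w_basis and c_basis mirror w-basis(j, h) and c-basis(i, h).
w_basis: Dict[str, Set[str]] = {
    "out:report": {"in:reading1", "out:subtotal"},
    "out:subtotal": {"in:reading1"},
}
c_basis: Dict[str, Set[str]] = {"in:reading1": set()}  # certified a priori

def credible(B: Set[str]) -> bool:
    # Every output in B is x-warranted from its own w-basis, and every
    # user input in B is certified. (Assumes the basis graph is acyclic;
    # otherwise this recursion would not terminate.)
    return all(x_warrants(w_basis[e], e) for e in B if e.startswith("out:")) \
       and all(is_certified(e) for e in B if e.startswith("in:"))

def x_warrants(B: Set[str], f: str) -> bool:
    # x-warrants(B, f, h): B is a credible basis and |- (B+ => is-correct(f));
    # the deduction |- is stubbed here: a credible basis is taken to prove f.
    return credible(B)

def is_certified(e: str) -> bool:
    # is-certified(i, h): x-warrants(c-basis(i, h),
    # has-correctness-ensuring-profile(i, h), h).
    return e in c_basis and x_warrants(c_basis[e], e)
```

In this toy, "in:reading1" has an empty c-basis and so is certified without prior authorization, which is exactly the situation Theorem 4 describes.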

The extended output-warranty requirement is now defined as follows:

x-output‑warranty(j, h) iff

if h(j) is an output event, and is‑warranted(h(j)),

then x-warrants(w‑basis(j, h), h(j), h).

A.4.3 User Responsibilities

For any user input of the form hU(i), define

revised-input‑correctness(i, U) iff

V satisfies is‑correct(hU(i)), whenever U precedes V and is‑certified(i, hU)

A.4.4 Achieving the Objective

Theorem 2

If x-output‑warranty(j, hU), for each output hU(j),

correct-observation(i, hU), for each input hU(i), and

revised-input-correctness(i, U), for each i such that hU(i) contains a user input,

then external‑consistency(hU(j), U), for each output hU(j).

Proof We again use strong induction on I/O histories. Suppose hU(j) is a warranted output event. Then w‑basis(j, hU) is a credible basis. Each direct observation in w‑basis(j, hU) is true by the correct-observation requirement. If hU(i) is an output in w‑basis(j, hU), then hU(i) is x-warranted and thus correct by induction. Each user input in w‑basis(j, hU) is also correct by revised-input‑correctness. Consequently, hU(j) is correct by the x-output-warranty requirement and the definition of x-warrants. □

We formalize only the first part of Theorem 3, the latter two parts being special cases of the first. The formalization says that if all certified inputs before i are correct and the input at time i is incorrect but not certified, then all certified inputs up through time i are correct.

Theorem 3

If hU = (hV before i),

hV(i) is an incorrect user input and not is‑certified(i, hV), and

revised‑input‑correctness(i, U),

then revised‑input‑correctness(i, U with [i => hV(i)]).

Proof This follows directly from the definitions of revised-input-correctness and is‑certified. □

Theorem 4 Suppose that some inputs have nonempty c‑bases (so that certifiability is not wholly predefined). Then

(a) some inputs are administrative (by definition),

(b) some users are able to make administrative inputs without prior authorization by other users.

Proof If c‑bases are always empty, then what counts as a certified input is predefined, which establishes the parenthetical remark. If some c‑basis is nonempty, the inputs in this credible basis are administrative, by definition. Let e be the first input event whose c‑basis is nonempty. Either the c‑basis contains no user inputs, or else it contains certified inputs that have empty c‑bases. In the former case, certification of e is recognized on the basis of direct observations, without prior authorization. In the latter case, there are certified inputs that are believed a priori. □

Appendix B: Glossary

Administrative User An experienced user who interprets for the automated system the interests of the enterprise that employs the automated system; administrative users are trusted to establish operating parameters, to install software such as integrity validators, and to specify constraints, user roles, user permissions, correctness-ensuring profiles, and such to the automated system.

Assertion A sentence that states a proposition claimed as true by its author.

Certifiably Correct Input The automated system accepts an input as being certifiably correct if the system can warrantably claim that the input's input-event profile is correctness ensuring.

Certified User Input A user input that has an input-event profile which the system can warrantably assert is a correctness-ensuring profile.

Certified Integrity Validator An integrity validator that was input to the system as a certified input.

Consistency Agreement or harmony of parts or features, specifically, the ability to be asserted together without contradiction.

Correctness of an Assertion Correspondence to reality.

Correctness of a Request Legitimacy according to preassigned criteria.

Correctness-Ensuring Profile An input-event profile believed by an administrative user to ensure correctness of input.

Corroborated Input An input that is endorsed by other qualified users.

Credible I/O Basis A warrantable I/O basis whose user inputs are certified.

Description Language A common language for describing real-world situations, agreed upon among users of a system, and, possibly, constrained by the vendor of the system, who may, for example, supply basic constructs for expressing propositions in the users' description language.

Direct Observation Information directly acquired by the automated system from its own internal workings or from its interface to its environment. Hardware interrupts, internal clock readings, and uninterpreted literal data from an input device are examples of direct observations.

Correct-Observation Requirement This requirement is that each direct observation be a correct, stable assertion.

External Consistency The ability of an automated system to give correct information about its external environment.

Input-Correctness Requirement This requirement is that all assertions and requests made to the system by its users be correct.

Input-Correctness Requirement (revised) This requirement is that all certified inputs to the system be correct.

Input Event We call the entering of an input to the automated system an input event.

Input-Event Attribute An attribute associated with an input event, used to specify a description of a correct input.

Input-Event Profile The attributes associated with an input event.

Input/Output Basis An accurate record of the particular information that supports the correctness of an output of the automated system; this record may consist of direct observations, user-supplied inputs, previous outputs, or any combination of the three.

Integrity Trusted Computing Base (ITCB) That part of an automated system that has to be right for the vendor to meet his responsibility to implement an advertised integrity policy.

Integrity Validator User-supplied or built-in tests for the automated system to apply to inputs in order to identify incorrect inputs; conceptually similar to those transformation procedures of Clark and Wilson that are used to transform inputs into constrained data items.

Legitimacy of a Request Consistency with predetermined goals of the enterprise employing the automated system.

Output-Marking Requirement This requirement is that the automated system should provide a capability for users to designate specified classes of output as warrantable, implying that these outputs are to be marked as warranted by the system.

Output-Warranty Requirement (basic and extended) This requirement is that the automated system's vendor guarantee that every output marked as warranted is warrantable. See Warrantable Output definitions for distinctions between the basic requirement and the extended requirement.

Profile-Administration Input A certified user input used by the system to conclude that an input-event profile is correctness-ensuring.

Question A sentence that states a proposition and indicates that its author wishes to know if the proposition is true or, more generally, wishes to find a true instantiation of it.

Request A sentence that gives a command, authorization, suggestion, or, generally, any information intended to effect changes in the issuer's environment; thus, a request states a proposition that is to become true.

Sentence A unit of information exchange, in the form of an assertion, request, or question, that discusses conceptual and real-world situations including, possibly, the state of the automated system involved.

Stable Proposition A proposition whose truth is independent of when the proposition is issued and whose precision is specified within the proposition itself.

Warrant (verb) Guarantee that if the user-supplied information on which an output is based is correct, then so is the output.

Warrantable I/O Basis An I/O basis all of whose outputs are warrantable (according to the definition of warrantable output for the relevant definition of output-warranty requirement).

Warrantable Output (for basic definition of output-warranty requirement) An output that has a warrantable I/O basis and is correct provided each sentence in its I/O basis is correct.

Warrantable Output (for extended definition of output-warranty requirement) An output that has a credible I/O basis and is correct provided each sentence in its I/O basis is correct.

Warranted Output An output that can meet the external-consistency objective.



[1] As generally used, the word "warranty" implies not only a guarantee of something but also responsibility for compensatory action should the warranty be violated. This latter aspect of vendor responsibility is not addressed in our model of external consistency.

[2] For the system to validate the assertion that an input has a correctness-ensuring profile, it must actually validate two assertions, namely that it has correctly determined the input-event profile for that input, and that this profile is correctness ensuring.

[3] The most sensitive administrative inputs are those used to explain which inputs are administrative inputs. The formal model allows but does not require separate treatment for these most sensitive inputs.

[4] From section 3: each direct observation is a correct, stable assertion.

[5] From section 2: each assertion or request output by the system is correct.

[6] We could build the stabilizing filter directly into the model at this point by defining B+ to contain correctness assertions that had been stabilized according to some algorithm that depended, in general, on h.

[7] The term “x-warrants” (for “extended warranty”) is needed to avoid overloading this formalization of “warrants” with that of the previous section.


