Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

A Slightly Skeptical View on CMM

(Capability Maturity Model) as an example Of Cargo Cult Software Engineering

At worst, the CMM is a whitewash that obscures the true dynamics of software engineering and suppresses alternative models. If an organization follows it for its own sake, rather than simply as a requirement mandated by a particular government contract, it may very well lead to the collapse of that company's competitive potential.

James Bach


"In times of universal deceit, telling the truth will be a revolutionary act."

-- George Orwell
[Eric Arthur Blair] (1903-1950) British author

The Software Capability Maturity Model (CMM) is a software development methodology that is as close to a scam as ISO 9000. The current version was released in December 2001 by the Software Engineering Institute and is often called version 1.1 of the Capability Maturity Model Integration (CMMI).

More politically inclined authors would claim that this is a variant of "Brezhnev socialism" applied to software engineering (or, worse, a variant of Lysenkoism, as there is some government pressure to get the certification), but that's another story. Labels aside, the fact that some organization is CMM-certified (and it does not matter to what level -- see below) should, in the current environment, probably be viewed as a slick marketing trick (especially useful for outsourcers).

The initial development of CMM is attributed to Watts Humphrey, who founded the Software Process Program of the Software Engineering Institute (SEI) at Carnegie Mellon University. From 1959 to 1986 he worked for IBM. He holds a bachelor's degree in physics from the University of Chicago, a master's degree in physics from the Illinois Institute of Technology, and a master's degree in business administration from the University of Chicago. It looks like CMM originated from a 1987 document written by Watts S. Humphrey (Evolution of CMM and CMMI from SEI):

This was a 40-page document containing a list of questions to be used as an assessment tool. Each question was mapped to the five levels, still present today. To achieve a level, an organization had to demonstrate they could answer "Yes" to 90% of the "starred" questions and 80% of all questions for that level.

However, there was also a second "Technology" dimension, with two levels A and B, which was displayed vertically (with the five levels horizontal)! The technology dimension assessed the level of automation present. Organizations "matured" from 1A to 5B.

As is often the case with questionable doctrines, Humphrey's own views of CMM are more realistic than those of many of his followers, and he had second thoughts about the effectiveness of his creation (Sidebar: Watts Humphrey on Software Quality):

Is CMM the only quality tool software developers need?

The CMM framework is essentially aimed at how do you establish a good management environment for doing engineering work. It's about the planning you need, the configuration management, the practices, the policies -- all that stuff. It doesn't talk about how you do things.

When I looked at organizations that were at high [CMM] levels, I discovered that the engineering practices hadn't changed that much. I had naively assumed that when we put good practices in place, like planning and measurement and quality management, that it would seep down to the engineers [programmers], and they'd start to use it in their personal work. It didn't happen.

The author probably never read Brooks' famous book The Mythical Man-Month. In his later (1987) essay "No Silver Bullet," Frederick P. Brooks wrote:

"The essence of a software entity is a construct of interlocking concepts ... I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation ... If this is true, building software will always be hard. There is inherently no silver bullet."

The road to hell is paved with good intentions. Good work can be done under any software development model. But excessive bureaucratization encourages bad work by redirecting energy from worthy goals to "raising the flag and marching with the banner" activities. If excessive bureaucratization is in place, it does not really matter what software development methodology is used: you are screwed anyway.

What is really bad in all this CMM junk is that it shifts focus from improving the real capabilities of a software organization to creating useless, excessive, expensive, and time-consuming bureaucratic perversions. Excessive, I would say obsessive, as the focus is on formal procedures as well as the elusive goal of "process improvement".

The latter is the most detrimental and dangerous feature of CMM. This naive (or crooked) approach directly encourages excessive bureaucratization and mandates wasteful paperwork in the best "mature socialism" style. Some problems associated with the CMM are similar to the problems of the traditional waterfall approach to software development: the software life-cycle model most detached from reality. In many ways CMM's activity-based measurement approach mimics the sequential paradigm inherent in the waterfall software development model. Here is a relevant quote from CMM vs. CMMI: from Conventional to Modern Software Management, an article originally published in The Rational Edge, February 2002:

Is the CMM Obsolete?

Some issues associated with the practice of the CMM are also recurring symptoms of traditional waterfall approaches and overly process-based management. The CMM's activity-based measurement approach is very much in alignment with the sequential, activity-based management paradigm of the waterfall process (i.e., do requirements activities, then design activities, then coding activities, then unit testing activities, then integration activities, then system acceptance testing). This probably explains why many organizations' perspectives on the CMM are anchored in the waterfall mentality.

Alternatively, iterative development techniques, software industry best practices, and economic motivations drive organizations to take a more results-based approach: Develop the business case, vision, and prototype solution; elaborate into a baseline architecture; elaborate into usable releases; and then finalize into fieldable releases. Although the CMMI remains an activity-based approach (and this is a fundamental flaw), it does integrate many of the industry's modern best practices, and it discourages much of the default alignment with the waterfall mentality.

One way to analyze CMM and CMMI alignment with the waterfall model and iterative development, respectively, is to look at whether each model's KPAs motivate sound software management principles for these two different development approaches. First, we will define those software management principles. Over the last ten years, I have compiled two sets: one for succeeding with the conventional, waterfall approach and one for succeeding with a modern, iterative approach. Admittedly, these "Top Ten Principles" have no scientific basis and provide only a coarse description of patterns for success with their respective management approaches. Nevertheless, they do provide a suitable framework for my view that the CMM is aligned with the waterfall mentality, whereas the CMMI is more aligned with an iterative mentality.

If you look at official and semi-official documents, the amount of Dilbert-type "management-speak" and acronyms is really staggering; see for example CMM-Tutorial and phillips2004. Some aspects of CMM might make some sense for software maintenance, but hardly for software development.

CMM defines five "maturity" levels; organizations can be certified at levels 2 through 5 (level 1 is the default):

  1. Initial. Ad hoc process (chaotic, ad hoc, heroic)
  2. Repeatable. Basic project management (project management, process discipline)
  3. Defined level. Process definition is used (institutionalized)
  4. Managed level. Process measurement is used (quantified)
  5. Optimizing level. Process control is used (process improvement)

As one critic of CMM aptly noted, the initial certifiable level of CMM (Level 2, or "repeatable") is actually a certification "of the ability to stand upright and make fire" applied to software development. This is so basic (and fuzzy) that any software development organization can legitimately claim CMM level 2 readiness.

The most insightful critique of CMM was provided by James Bach in his article The Immaturity of CMM, originally published in the September 1994 issue of American Programmer. Here is one relevant quote from James Bach's paper (I strongly encourage you to read it) which dispels the "institutionalization" myth that is the cornerstone of CMM:

The idea that process makes up for mediocrity is a pillar of the CMM, wherein humans are apparently subordinated to defined processes. But, where is the justification for this? To render excellence less important the problem solving tasks would somehow have to be embodied in the process itself. I've never seen such a process, but if one exists, it would have to be quite complex. Imagine a process definition for playing a repeatably good chess game. Such a process exists, but is useful only to computers; a process useful to humans has neither been documented nor taught as a series of unambiguous steps. Aren't software problems at least as complex as chess problems?  

The CMM reveres institutionalization of process for its own sake. Since the CMM is principally concerned with an organization's ability to commit, such a bias is understandable. But, an organization's ability to commit is merely an expression of a project team's ability to execute. Even if necessary processes are not institutionalized formally, they may very well be in place, informally, by virtue of the skill of the team members.

Institutionalization guarantees nothing, and efforts to institutionalize often lead to a bifurcation between an oversimplified public process and a rich private process that must be practiced undercover. Even if institutionalization is useful, why not instead institutionalize a system for identifying and keeping key contributors in the organization, and leave processes up to them? The CMM contains very little information on process dynamics.

In other words, the right organizational processes can improve the output of a group of talented software developers, but they do not create one. By ignoring this critical, fundamental fact, the CMM completely loses credibility with anyone experienced with a wide range of software development projects.

In his review of Bach's groundbreaking and courageous paper, Kelly Nehowig observed:

The author describes six basic problem areas that he has identified with the CMM:

  1. The CMM has no formal theoretical basis and in fact is based on the experience “of very knowledgeable people”. Because of this lack of theoretical proof, any other model based on experiences of other experts would have equal veracity.

  2. The CMM does not have good empirical support, and this same empirical support could also be construed to support other models. Without a comparison of alternative process models under a controlled study, an empirical case cannot be built to substantiate the SEI’s claims regarding the CMM. Primarily, the model is based on the experiences of large government contractors and on Watts Humphrey’s own experience in the mainframe world. It does not represent the successful experiences of many shrink-wrap companies that are judged to be “level 1” organizations by the CMM.

  3. The CMM ignores the importance of people involved with the software process by assuming that processes can somehow render individual excellence less important. In order for this to be the case, problem-solving tasks would somehow have to be included in the process itself, which the CMM does not begin to address.

  4. The CMM reveres the institutionalization of process for its own sake. This guarantees nothing and in some cases, the institutionalization of processes may lead to oversimplified public processes, ignoring the actual successful practice of the organization.

  5. The CMM does not effectively describe any information on process dynamics, which confuses the study of the relationships between practices and levels within the CMM. The CMM does not perceive or adapt to the conditions of the client organization. Arguably, most and perhaps all of the key practices of the CMM at its various levels could be performed usefully at level 1, depending on the particular dynamics of an organization. Instead of modeling these process dynamics, the CMM merely stratifies them.

  6. The CMM encourages the achievement of a higher maturity level in some cases by displacing the true mission, which is improving the process and overall software quality. This may effectively “blind” an organization to the most effective use of its resources.

The author’s most compelling argument against the CMM is the existence of many successful software companies that, according to the CMM, should not exist. Many software companies that provide “shrink wrap” software such as Microsoft, Symantec, and Lotus would definitely be classified by the CMM as level 1 companies. In these companies, innovation reigns supreme, and it is from the perspective of the innovator that the CMM seems lost.

The author claims that innovation per se does not appear in the CMM at all, and is only suggested by level 5. Preoccupied with predictability, the CMM is ignorant of the dynamics of innovation. In fact, where innovators advise companies to be flexible, to push authority down into the organization, and to recommend constant constructive innovation, the CMM mistakes all of these attributes for the chaos that it sees in level 1 companies. Because the CMM is distrustful of personal contributions, ignorant of the environment needed to nurture innovative thinking, and content to bury organizations under an ineffective superstructure, achieving level 2 on the CMM scale may actually destroy the very thing that caused the company to be successful in the first place.

The highest level (Level 5) has nothing to do with software quality: what it really means is the ability to keep a double set of books and produce a lot of bogus paperwork in the English language. As such, it has tremendous marketing value, especially if the other side is represented by PHBs. It creates a really nice opening for outsourcers (and outsourcers play an important, maybe critical, role in keeping CMM afloat), who can claim that being certified at level 5 is the best thing in software development since sliced bread. For that reason this level of CMM certification is simply loved by outsourcers. Nine Indian firms claim level 5 certification, and not without a reason :-).

If you look at Usenet discussions of CMM hype, the strongest defenders of this marketing trick are people connected to outsourcers. The level of argumentation reminds me of the USSR Communist Party Congresses, with their long applause turning into standing ovations for each monstrous stupidity invented by the Politburo jerks ;-).

Sometimes CMM-compliance is mandated. In this case the less effort spent on obtaining it, the better. In fact, anyone can proclaim themselves to be at whatever CMM level they want without any significant changes in the actual software development process. This is a "paper tiger" type of certification: all that is needed is (bogus) paperwork.

At the same time, too much zeal in achieving CMM-compliance can be very destructive for the organization. The CMM absolutizes the value of formal processes but ignores people. And it is people, the software developers, who are the key to success. This is readily apparent to anyone who is familiar with the work of Gerald Weinberg on programming psychology. The net result of excessive zeal in achieving CMM compliance can be the proliferation of a dangerous and clueless "software development bureaucracy" and of micromanagers. If possible, I would recommend that the CIO find volunteers to work on CMM-compliance, create an appropriate organizational unit, and, once compliance is achieved, dismantle or outsource the unit and let go the people who were most enthusiastic about the whole process ;-). They were extremely dangerous for the organization's health anyway.

All in all, obtaining CMM certification is by and large a waste of organizational resources, but you might need to do it in order to participate in government contracts. If you do, please take it easy and understand that this is a pretty much useless exercise. Still, the world is not perfect, and sometimes you need to play the game. The most constructive way to play the CMM game is to concentrate on the introduction of automation tools such as bug-tracking software (for example, Bugzilla), test automation tools (for example, DejaGnu), and compilation and linkage automation software (for example, OMU can be adapted for this purpose).
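Note that "introducing automation tools" can start very small. Below is a hypothetical sketch (the commands and the test table are invented placeholders, not tied to any real project or to DejaGnu itself) of the kind of minimal regression-test driver that delivers a concrete process improvement regardless of any certification level:

```python
#!/usr/bin/env python3
"""Minimal regression-test driver: runs each command and compares its
stdout to an expected string. The TESTS table below is a hypothetical
placeholder; a real project would list its own build artifacts here."""
import subprocess

# Hypothetical test table: (command to run, exact expected stdout).
TESTS = [
    (["echo", "hello"], "hello\n"),
    (["sh", "-c", "expr 2 + 2"], "4\n"),
]

def run_tests(tests):
    """Run each command; a test passes if it exits 0 and its stdout
    matches the expectation exactly. Returns (name, passed) pairs."""
    results = []
    for cmd, expected in tests:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        ok = (proc.returncode == 0) and (proc.stdout == expected)
        results.append((" ".join(cmd), ok))
    return results

if __name__ == "__main__":
    for name, ok in run_tests(TESTS):
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
```

A table like TESTS grows with the product; the point is that the check is executed by a machine on every build, rather than documented in a binder.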

But the key issue here is to block the promotion into management ranks of a special category of people who thrive in the organizational atmosphere of "software development socialism". Those people are the most dangerous and destructive for any software development organization, and CMM can serve as a litmus test for exposing them. If the CMM process helps them grow in the management ranks, everything is lost; if the opposite is true (as reflected by the shrewd suggestion above that the CMM-compliance unit should first be created and then outsourced :-), then the CMM process can even be useful.

Remember the danger of "software development socialism", folks ;-). Such side effects of typical CMM adoption as bureaucratization, micromanagement, and promotion of the wrong type of people should never be overlooked, as they kill software developers' creativity and any innovation capability within the organization. Everything becomes way too predictable, as in "predictable failure". And you know what happened to the USSR with its "mature socialism", don't you?

Software companies which try to push the technology envelope would be better off ignoring CMM. As Bach noted:

 "Studies alleging that the CMM is valuable don't consider alternatives, and leave out critical data that would allow a full analysis of what's going on in companies that claim to have moved up in CMM levels and to have benefited for that reason."

All in all, CMM is just yet another variant of cargo cult science. Nothing more, nothing less.


NEWS CONTENTS

Old News ;-)

[Jul 03, 2021] Mission creep

Highly recommended!
Jul 03, 2021 | en.wikipedia.org

Mission creep is the gradual or incremental expansion of an intervention, project or mission beyond its original scope, focus or goals, a ratchet effect spawned by initial success.[1] Mission creep is usually considered undesirable due to how each success breeds more ambitious interventions until a final failure happens, stopping the intervention entirely.

The term was originally applied exclusively to military operations, but has recently been applied to many different fields. The phrase first appeared in 1993, in articles published in the Washington Post and in the New York Times concerning the United Nations peacekeeping mission during the Somali Civil War.

...

[Jul 25, 2017] Knuth Computer Programming as an Art

Jul 25, 2017 | www.paulgraham.com

CACM, December 1974

When Communications of the ACM began publication in 1959, the members of ACM's Editorial Board made the following remark as they described the purposes of ACM's periodicals [2]:

"If computer programming is to become an important part of computer research and development, a transition of programming from an art to a disciplined science must be effected."

Such a goal has been a continually recurring theme during the ensuing years; for example, we read in 1970 of the "first steps toward transforming the art of programming into a science" [26]. Meanwhile we have actually succeeded in making our discipline a science, and in a remarkably simple way: merely by deciding to call it "computer science."

Implicit in these remarks is the notion that there is something undesirable about an area of human activity that is classified as an "art"; it has to be a Science before it has any real stature. On the other hand, I have been working for more than 12 years on a series of books called "The Art of Computer Programming." People frequently ask me why I picked such a title; and in fact some people apparently don't believe that I really did so, since I've seen at least one bibliographic reference to some books called "The Act of Computer Programming."

In this talk I shall try to explain why I think "Art" is the appropriate word. I will discuss what it means for something to be an art, in contrast to being a science; I will try to examine whether arts are good things or bad things; and I will try to show that a proper viewpoint of the subject will help us all to improve the quality of what we are now doing.

One of the first times I was ever asked about the title of my books was in 1966, during the last previous ACM national meeting held in Southern California. This was before any of the books were published, and I recall having lunch with a friend at the convention hotel. He knew how conceited I was, already at that time, so he asked if I was going to call my books "An Introduction to Don Knuth." I replied that, on the contrary, I was naming the books after him. His name: Art Evans. (The Art of Computer Programming, in person.)

From this story we can conclude that the word "art" has more than one meaning. In fact, one of the nicest things about the word is that it is used in many different senses, each of which is quite appropriate in connection with computer programming. While preparing this talk, I went to the library to find out what people have written about the word "art" through the years; and after spending several fascinating days in the stacks, I came to the conclusion that "art" must be one of the most interesting words in the English language.

The Arts of Old

If we go back to Latin roots, we find ars, artis meaning "skill." It is perhaps significant that the corresponding Greek word was τεχνη, the root of both "technology" and "technique."

Nowadays when someone speaks of "art" you probably think first of "fine arts" such as painting and sculpture, but before the twentieth century the word was generally used in quite a different sense. Since this older meaning of "art" still survives in many idioms, especially when we are contrasting art with science, I would like to spend the next few minutes talking about art in its classical sense.

In medieval times, the first universities were established to teach the seven so-called "liberal arts," namely grammar, rhetoric, logic, arithmetic, geometry, music, and astronomy. Note that this is quite different from the curriculum of today's liberal arts colleges, and that at least three of the original seven liberal arts are important components of computer science. At that time, an "art" meant something devised by man's intellect, as opposed to activities derived from nature or instinct; "liberal" arts were liberated or free, in contrast to manual arts such as plowing (cf. [6]). During the middle ages the word "art" by itself usually meant logic [4], which usually meant the study of syllogisms.

Science vs. Art

The word "science" seems to have been used for many years in about the same sense as "art"; for example, people spoke also of the seven liberal sciences, which were the same as the seven liberal arts [1]. Duns Scotus in the thirteenth century called logic "the Science of Sciences, and the Art of Arts" (cf. [12, p. 34f]). As civilization and learning developed, the words took on more and more independent meanings, "science" being used to stand for knowledge, and "art" for the application of knowledge. Thus, the science of astronomy was the basis for the art of navigation. The situation was almost exactly like the way in which we now distinguish between "science" and "engineering."

Many authors wrote about the relationship between art and science in the nineteenth century, and I believe the best discussion was given by John Stuart Mill. He said the following things, among others, in 1843 [28]:

Several sciences are often necessary to form the groundwork of a single art. Such is the complication of human affairs, that to enable one thing to be done, it is often requisite to know the nature and properties of many things... Art in general consists of the truths of Science, arranged in the most convenient order for practice, instead of the order which is the most convenient for thought. Science groups and arranges its truths so as to enable us to take in at one view as much as possible of the general order of the universe. Art... brings together from parts of the field of science most remote from one another, the truths relating to the production of the different and heterogeneous conditions necessary to each effect which the exigencies of practical life require.

As I was looking up these things about the meanings of "art," I found that authors have been calling for a transition from art to science for at least two centuries. For example, the preface to a textbook on mineralogy, written in 1784, said the following [17]: "Previous to the year 1780, mineralogy, though tolerably understood by many as an Art, could scarce be deemed a Science."

According to most dictionaries "science" means knowledge that has been logically arranged and systematized in the form of general "laws." The advantage of science is that it saves us from the need to think things through in each individual case; we can turn our thoughts to higher-level concepts. As John Ruskin wrote in 1853 [32]: "The work of science is to substitute facts for appearances, and demonstrations for impressions."

It seems to me that if the authors I studied were writing today, they would agree with the following characterization: Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it. Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.

Artificial intelligence has been making significant progress, yet there is a huge gap between what computers can do in the foreseeable future and what ordinary people can do. The mysterious insights that people have when speaking, listening, creating, and even when they are programming, are still beyond the reach of science; nearly everything we do is still an art.

From this standpoint it is certainly desirable to make computer programming a science, and we have indeed come a long way in the 15 years since the publication of the remarks I quoted at the beginning of this talk. Fifteen years ago computer programming was so badly understood that hardly anyone even thought about proving programs correct; we just fiddled with a program until we "knew" it worked. At that time we didn't even know how to express the concept that a program was correct, in any rigorous way. It is only in recent years that we have been learning about the processes of abstraction by which programs are written and understood; and this new knowledge about programming is currently producing great payoffs in practice, even though few programs are actually proved correct with complete rigor, since we are beginning to understand the principles of program structure. The point is that when we write programs today, we know that we could in principle construct formal proofs of their correctness if we really wanted to, now that we understand how such proofs are formulated. This scientific basis is resulting in programs that are significantly more reliable than those we wrote in former days when intuition was the only basis of correctness.

The field of "automatic programming" is one of the major areas of artificial intelligence research today. Its proponents would love to be able to give a lecture entitled "Computer Programming as an Artifact" (meaning that programming has become merely a relic of bygone days), because their aim is to create machines that write programs better than we can, given only the problem specification. Personally I don't think such a goal will ever be completely attained, but I do think that their research is extremely important, because everything we learn about programming helps us to improve our own artistry. In this sense we should continually be striving to transform every art into a science: in the process, we advance the art.

Science and Art

Our discussion indicates that computer programming is by now both a science and an art, and that the two aspects nicely complement each other. Apparently most authors who examine such a question come to this same conclusion, that their subject is both a science and an art, whatever their subject is (cf. [25]). I found a book about elementary photography, written in 1893, which stated that "the development of the photographic image is both an art and a science" [13]. In fact, when I first picked up a dictionary in order to study the words "art" and "science," I happened to glance at the editor's preface, which began by saying, "The making of a dictionary is both a science and an art." The editor of Funk & Wagnall's dictionary [27] observed that the painstaking accumulation and classification of data about words has a scientific character, while a well-chosen phrasing of definitions demands the ability to write with economy and precision: "The science without the art is likely to be ineffective; the art without the science is certain to be inaccurate."

When preparing this talk I looked through the card catalog at Stanford library to see how other people have been using the words "art" and "science" in the titles of their books. This turned out to be quite interesting.

For example, I found two books entitled The Art of Playing the Piano [5, 15], and others called The Science of Pianoforte Technique [10], The Science of Pianoforte Practice [30]. There is also a book called The Art of Piano Playing: A Scientific Approach [22].

Then I found a nice little book entitled The Gentle Art of Mathematics [31], which made me somewhat sad that I can't honestly describe computer programming as a "gentle art." I had known for several years about a book called The Art of Computation , published in San Francisco, 1879, by a man named C. Frusher Howard [14]. This was a book on practical business arithmetic that had sold over 400,000 copies in various editions by 1890. I was amused to read the preface, since it shows that Howard's philosophy and the intent of his title were quite different from mine; he wrote: "A knowledge of the Science of Number is of minor importance; skill in the Art of Reckoning is absolutely indispensible."

Several books mention both science and art in their titles, notably The Science of Being and Art of Living by Maharishi Mahesh Yogi [24]. There is also a book called The Art of Scientific Discovery [11], which analyzes how some of the great discoveries of science were made.

So much for the word "art" in its classical meaning. Actually when I chose the title of my books, I wasn't thinking primarily of art in this sense, I was thinking more of its current connotations. Probably the most interesting book which turned up in my search was a fairly recent work by Robert E. Mueller called The Science of Art [29]. Of all the books I've mentioned, Mueller's comes closest to expressing what I want to make the central theme of my talk today, in terms of real artistry as we now understand the term. He observes: "It was once thought that the imaginative outlook of the artist was death for the scientist. And the logic of science seemed to spell doom to all possible artistic flights of fancy." He goes on to explore the advantages which actually do result from a synthesis of science and art.

A scientific approach is generally characterized by the words logical, systematic, impersonal, calm, rational, while an artistic approach is characterized by the words aesthetic, creative, humanitarian, anxious, irrational. It seems to me that both of these apparently contradictory approaches have great value with respect to computer programming.

Emma Lehmer wrote in 1956 that she had found coding to be "an exacting science as well as an intriguing art" [23]. H.S.M. Coxeter remarked in 1957 that he sometimes felt "more like an artist than a scientist" [7]. This was at the time C.P. Snow was beginning to voice his alarm at the growing polarization between "two cultures" of educated people [34, 35]. He pointed out that we need to combine scientific and artistic values if we are to make real progress.

Works of Art

When I'm sitting in an audience listening to a long lecture, my attention usually starts to wane at about this point in the hour. So I wonder, are you getting a little tired of my harangue about "science" and "art"? I really hope that you'll be able to listen carefully to the rest of this, anyway, because now comes the part about which I feel most deeply.

When I speak about computer programming as an art, I am thinking primarily of it as an art form, in an aesthetic sense. The chief goal of my work as educator and author is to help people learn how to write beautiful programs. It is for this reason I was especially pleased to learn recently [32] that my books actually appear in the Fine Arts Library at Cornell University. (However, the three volumes apparently sit there neatly on the shelf, without being used, so I'm afraid the librarians may have made a mistake by interpreting my title literally.)

My feeling is that when we prepare a program, it can be like composing poetry or music; as Andrei Ershov has said [9], programming can give us both intellectual and emotional satisfaction, because it is a real achievement to master complexity and to establish a system of consistent rules.

Furthermore when we read other people's programs, we can recognize some of them as genuine works of art. I can still remember the great thrill it was for me to read the listing of Stan Poley's SOAP II assembly program in 1958; you probably think I'm crazy, and styles have certainly changed greatly since then, but at the time it meant a great deal to me to see how elegant a system program could be, especially by comparison with the heavy-handed coding found in other listings I had been studying at the same time. The possibility of writing beautiful programs, even in assembly language, is what got me hooked on programming in the first place.

Some programs are elegant, some are exquisite, some are sparkling. My claim is that it is possible to write grand programs, noble programs, truly magnificent ones!

Taste and Style

The idea of style in programming is now coming to the forefront at last, and I hope that most of you have seen the excellent little book on Elements of Programming Style by Kernighan and Plauger [16]. In this connection it is most important for us all to remember that there is no one "best" style; everybody has his own preferences, and it is a mistake to try to force people into an unnatural mold. We often hear the saying, "I don't know anything about art, but I know what I like." The important thing is that you really like the style you are using; it should be the best way you prefer to express yourself.

Edsger Dijkstra stressed this point in the preface to his Short Introduction to the Art of Programming [8]:

It is my purpose to transmit the importance of good taste and style in programming, [but] the specific elements of style presented serve only to illustrate what benefits can be derived from "style" in general. In this respect I feel akin to the teacher of composition at a conservatory: He does not teach his pupils how to compose a particular symphony, he must help his pupils to find their own style and must explain to them what is implied by this. (It has been this analogy that made me talk about "The Art of Programming.")
Now we must ask ourselves, What is good style, and what is bad style? We should not be too rigid about this in judging other people's work. The early nineteenth-century philosopher Jeremy Bentham put it this way [3, Bk. 3, Ch. 1]:
Judges of elegance and taste consider themselves as benefactors to the human race, whilst they are really only the interrupters of their pleasure... There is no taste which deserves the epithet good, unless it be the taste for such employments which, to the pleasure actually produced by them, conjoin some contingent or future utility: there is no taste which deserves to be characterized as bad, unless it be a taste for some occupation which has a mischievous tendency.
When we apply our own prejudices to "reform" someone else's taste, we may be unconsciously denying him some entirely legitimate pleasure. That's why I don't condemn a lot of things programmers do, even though I would never enjoy doing them myself. The important thing is that they are creating something they feel is beautiful.

In the passage I just quoted, Bentham does give us some advice about certain principles of aesthetics which are better than others, namely the "utility" of the result. We have some freedom in setting up our personal standards of beauty, but it is especially nice when the things we regard as beautiful are also regarded by other people as useful. I must confess that I really enjoy writing computer programs; and I especially enjoy writing programs which do the greatest good, in some sense.

There are many senses in which a program can be "good," of course. In the first place, it's especially good to have a program that works correctly. Secondly it is often good to have a program that won't be hard to change, when the time for adaptation arises. Both of these goals are achieved when the program is easily readable and understandable to a person who knows the appropriate language.

Another important way for a production program to be good is for it to interact gracefully with its users, especially when recovering from human errors in the input data. It's a real art to compose meaningful error messages or to design flexible input formats which are not error-prone.

Another important aspect of program quality is the efficiency with which the computer's resources are actually being used. I am sorry to say that many people nowadays are condemning program efficiency, telling us that it is in bad taste. The reason for this is that we are now experiencing a reaction from the time when efficiency was the only reputable criterion of goodness, and programmers in the past have tended to be so preoccupied with efficiency that they have produced needlessly complicated code; the result of this unnecessary complexity has been that net efficiency has gone down, due to difficulties of debugging and maintenance.

The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.

We shouldn't be penny wise and pound foolish, nor should we always think of efficiency in terms of so many percent gained or lost in total running time or space. When we buy a car, many of us are almost oblivious to a difference of $50 or $100 in its price, while we might make a special trip to a particular store in order to buy a 50 cent item for only 25 cents. My point is that there is a time and place for efficiency; I have discussed its proper role in my paper on structured programming, which appears in the current issue of Computing Surveys [21].

Less Facilities: More Enjoyment

One rather curious thing I've noticed about aesthetic satisfaction is that our pleasure is significantly enhanced when we accomplish something with limited tools. For example, the program of which I personally am most pleased and proud is a compiler I once wrote for a primitive minicomputer which had only 4096 words of memory, 16 bits per word. It makes a person feel like a real virtuoso to achieve something under such severe restrictions.

A similar phenomenon occurs in many other contexts. For example, people often seem to fall in love with their Volkswagens but rarely with their Lincoln Continentals (which presumably run much better). When I learned programming, it was a popular pastime to do as much as possible with programs that fit on only a single punched card. I suppose it's this same phenomenon that makes APL enthusiasts relish their "one-liners." When we teach programming nowadays, it is a curious fact that we rarely capture the heart of a student for computer science until he has taken a course which allows "hands on" experience with a minicomputer. The use of our large-scale machines with their fancy operating systems and languages doesn't really seem to engender any love for programming, at least not at first.

It's not obvious how to apply this principle to increase programmers' enjoyment of their work. Surely programmers would groan if their manager suddenly announced that the new machine will have only half as much memory as the old. And I don't think anybody, even the most dedicated "programming artists," can be expected to welcome such a prospect, since nobody likes to lose facilities unnecessarily. Another example may help to clarify the situation: Film-makers strongly resisted the introduction of talking pictures in the 1920's because they were justly proud of the way they could convey words without sound. Similarly, a true programming artist might well resent the introduction of more powerful equipment; today's mass storage devices tend to spoil much of the beauty of our old tape sorting methods. But today's film makers don't want to go back to silent films, not because they're lazy but because they know it is quite possible to make beautiful movies using the improved technology. The form of their art has changed, but there is still plenty of room for artistry.

How did they develop their skill? The best film makers through the years usually seem to have learned their art in comparatively primitive circumstances, often in other countries with a limited movie industry. And in recent years the most important things we have been learning about programming seem to have originated with people who did not have access to very large computers. The moral of this story, it seems to me, is that we should make use of the idea of limited resources in our own education. We can all benefit by doing occasional "toy" programs, when artificial restrictions are set up, so that we are forced to push our abilities to the limit. We shouldn't live in the lap of luxury all the time, since that tends to make us lethargic. The art of tackling miniproblems with all our energy will sharpen our talents for the real problems, and the experience will help us to get more pleasure from our accomplishments on less restricted equipment.

In a similar vein, we shouldn't shy away from "art for art's sake"; we shouldn't feel guilty about programs that are just for fun. I once got a great kick out of writing a one-statement ALGOL program that invoked an innerproduct procedure in such an unusual way that it calculated the mth prime number, instead of an innerproduct [19]. Some years ago the students at Stanford were excited about finding the shortest FORTRAN program which prints itself out, in the sense that the program's output is identical to its own source text. The same problem was considered for many other languages. I don't think it was a waste of time for them to work on this; nor would Jeremy Bentham, whom I quoted earlier, deny the "utility" of such pastimes [3, Bk. 3, Ch. 1]. "On the contrary," he wrote, "there is nothing, the utility of which is more incontestable. To what shall the character of utility be ascribed, if not to that which is a source of pleasure?"

Providing Beautiful Tools

Another characteristic of modern art is its emphasis on creativity. It seems that many artists these days couldn't care less about creating beautiful things; only the novelty of an idea is important. I'm not recommending that computer programming should be like modern art in this sense, but it does lead me to an observation that I think is important. Sometimes we are assigned to a programming task which is almost hopelessly dull, giving us no outlet whatsoever for any creativity; and at such times a person might well come to me and say, "So programming is beautiful? It's all very well for you to declaim that I should take pleasure in creating elegant and charming programs, but how am I supposed to make this mess into a work of art?"

Well, it's true, not all programming tasks are going to be fun. Consider the "trapped housewife," who has to clean off the same table every day: there's not room for creativity or artistry in every situation. But even in such cases, there is a way to make a big improvement: it is still a pleasure to do routine jobs if we have beautiful things to work with. For example, a person will really enjoy wiping off the dining room table, day after day, if it is a beautifully designed table made from some fine quality hardwood.

Therefore I want to address my closing remarks to the system programmers and the machine designers who produce the systems that the rest of us must work with. Please, give us tools that are a pleasure to use, especially for our routine assignments, instead of providing something we have to fight with. Please, give us tools that encourage us to write better programs, by enhancing our pleasure when we do so.

It's very hard for me to convince college freshmen that programming is beautiful, when the first thing I have to tell them is how to punch "slash slash JOB equals so-and-so." Even job control languages can be designed so that they are a pleasure to use, instead of being strictly functional.

Computer hardware designers can make their machines much more pleasant to use, for example by providing floating-point arithmetic which satisfies simple mathematical laws. The facilities presently available on most machines make the job of rigorous error analysis hopelessly difficult, but properly designed operations would encourage numerical analysts to provide better subroutines which have certified accuracy (cf. [20, p. 204]).
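The kind of broken mathematical law at issue here is easy to demonstrate. This small sketch (in Python, on IEEE-754 doubles) shows that floating-point addition is not even associative, since each operation rounds its result:

```python
# On IEEE-754 doubles, each addition rounds, so the grouping of a
# sum changes the answer: addition is not associative.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

A decade after this lecture, the IEEE 754 standard (1985) specified exactly the kind of well-behaved, precisely defined rounding Knuth asks for, which is what made rigorous error analysis of library routines practical.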

Let's consider also what software designers can do. One of the best ways to keep up the spirits of a system user is to provide routines that he can interact with. We shouldn't make systems too automatic, so that the action always goes on behind the scenes; we ought to give the programmer-user a chance to direct his creativity into useful channels. One thing all programmers have in common is that they enjoy working with machines; so let's keep them in the loop. Some tasks are best done by machine, while others are best done by human insight; and a properly designed system will find the right balance. (I have been trying to avoid misdirected automation for many years, cf. [18].)

Program measurement tools make a good case in point. For years, programmers have been unaware of how the real costs of computing are distributed in their programs. Experience indicates that nearly everybody has the wrong idea about the real bottlenecks in his programs; it is no wonder that attempts at efficiency go awry so often, when a programmer is never given a breakdown of costs according to the lines of code he has written. His job is something like that of a newly married couple who try to plan a balanced budget without knowing how much the individual items like food, shelter, and clothing will cost. All that we have been giving programmers is an optimizing compiler, which mysteriously does something to the programs it translates but which never explains what it does. Fortunately we are now finally seeing the appearance of systems which give the user credit for some intelligence; they automatically provide instrumentation of programs and appropriate feedback about the real costs. These experimental systems have been a huge success, because they produce measurable improvements, and especially because they are fun to use, so I am confident that it is only a matter of time before the use of such systems is standard operating procedure. My paper in Computing Surveys [21] discusses this further, and presents some ideas for other ways in which an appropriate interactive routine can enhance the satisfaction of user programmers.
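The instrumentation Knuth describes is now standard equipment. A minimal sketch using Python's built-in profiler (the workload functions here are invented for illustration) shows the kind of per-function cost breakdown he has in mind:

```python
# Instrument a made-up workload and report where the time actually
# goes, per function, instead of leaving the programmer to guess.
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop: a plausible "real bottleneck".
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum(n):
    return sum(i * i for i in range(n))

def workload():
    slow_sum(200_000)
    fast_sum(200_000)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
print(report.getvalue())
```

The report attributes cumulative time to each function, which is precisely the "breakdown of costs according to the lines of code he has written" that Knuth says programmers had been denied.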

Language designers also have an obligation to provide languages that encourage good style, since we all know that style is strongly influenced by the language in which it is expressed. The present surge of interest in structured programming has revealed that none of our existing languages is really ideal for dealing with program and data structure, nor is it clear what an ideal language should be. Therefore I look forward to many careful experiments in language design during the next few years.

Summary

To summarize: We have seen that computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better. Therefore we can be glad that people who lecture at computer conferences speak about the state of the Art.

References

1. Bailey, Nathan. The Universal Etymological English Dictionary. T. Cox, London, 1727. See "Art," "Liberal," and "Science."

2. Bauer, Walter F., Juncosa, Mario L., and Perlis, Alan J. ACM publication policies and plans. J. ACM 6 (Apr. 1959), 121-122.

3. Bentham, Jeremy. The Rationale of Reward. Trans. from Theorie des peines et des recompenses, 1811, by Richard Smith, J. & H. L. Hunt, London, 1825.

4. The Century Dictionary and Cyclopedia 1. The Century Co., New York, 1889.

5. Clementi, Muzio. The Art of Playing the Piano. Trans. from L'art de jouer le pianoforte by Max Vogrich. Schirmer, New York, 1898.

6. Colvin, Sidney. "Art." Encyclopaedia Britannica, eds 9, 11, 12, 13, 1875-1926.

7. Coxeter, H. S. M. Convocation address, Proc. 4th Canadian Math. Congress, 1957, pp. 8-10.

8. Dijkstra, Edsger W. EWD316: A Short Introduction to the Art of Programming. T. H. Eindhoven, The Netherlands, Aug. 1971.

9. Ershov, A. P. Aesthetics and the human factor in programming. Comm. ACM 15 (July 1972), 501-505.

10. Fielden, Thomas. The Science of Pianoforte Technique. Macmillan, London, 1927.

11. Gore, George. The Art of Scientific Discovery. Longmans, Green, London, 1878.

12. Hamilton, William. Lectures on Logic 1. Wm. Blackwood, Edinburgh, 1874.

13. Hodges, John A. Elementary Photography: The "Amateur Photographer" Library 7. London, 1893. Sixth ed, revised and enlarged, 1907, p. 58.

14. Howard, C. Frusher. Howard's Art of Computation and golden rule for equation of payments for schools, business colleges and self-culture .... C.F. Howard, San Francisco, 1879.

15. Hummel, J.N. The Art of Playing the Piano Forte. Boosey, London, 1827.

16. Kernighan B.W., and Plauger, P.J. The Elements of Programming Style. McGraw-Hill, New York, 1974.

17. Kirwan, Richard. Elements of Mineralogy. Elmsly, London, 1784.

18. Knuth, Donald E. Minimizing drum latency time. J. ACM 8 (Apr. 1961), 119-150.

19. Knuth, Donald E., and Merner, J.N. ALGOL 60 confidential. Comm. ACM 4 (June 1961), 268-272.

20. Knuth, Donald E. Seminumerical Algorithms: The Art of Computer Programming 2. Addison-Wesley, Reading, Mass., 1969.

21. Knuth, Donald E. Structured programming with go to statements. Computing Surveys 6 (Dec. 1974), pages in makeup.

22. Kochevitsky, George. The Art of Piano Playing: A Scientific Approach. Summy-Birchard, Evanston, Ill., 1967.

23. Lehmer, Emma. Number theory on the SWAC. Proc. Symp. Applied Math. 6, Amer. Math. Soc. (1956), 103-108.

24. Mahesh Yogi, Maharishi. The Science of Being and Art of Living. Allen & Unwin, London, 1963.

25. Malevinsky, Moses L. The Science of Playwriting. Brentano's, New York, 1925.

26. Manna, Zohar, and Pnueli, Amir. Formalization of properties of functional programs. J. ACM 17 (July 1970), 555-569.

27. Marckwardt, Albert H, Preface to Funk and Wagnall's Standard College Dictionary. Harcourt, Brace & World, New York, 1963, vii.

28. Mill, John Stuart. A System of Logic, Ratiocinative and Inductive. London, 1843. The quotations are from the introduction, § 2, and from Book 6, Chap. 11 (12 in later editions), § 5.

29. Mueller, Robert E. The Science of Art. John Day, New York, 1967.

30. Parsons, Albert Ross. The Science of Pianoforte Practice. Schirmer, New York, 1886.

31. Pedoe, Daniel. The Gentle Art of Mathematics. English U. Press, London, 1953.

32. Ruskin, John. The Stones of Venice 3. London, 1853.

33. Salton, G.A. Personal communication, June 21, 1974.

34. Snow, C.P. The two cultures. The New Statesman and Nation 52 (Oct. 6, 1956), 413-414.

35. Snow, C.P. The Two Cultures: and a Second Look. Cambridge University Press, 1964.

Copyright 1974, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted provided that ACM's copyright notice is given and that reference is made to the publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery.

[May 05, 2017] William Binney - The Government is Profiling You (The NSA is Spying on You)

A very interesting discussion of how the project of mass surveillance of internet traffic started and what the major challenges were. That is probably where the idea of collecting "envelopes" and correlating them to build a social network originated, similar to what was done during the Civil War.
The idea of preventing corruption of the medical establishment in order to stop Medicare fraud is also very interesting.
Notable quotes:
"... I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity. ..."
"... 500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it. ..."
"... People are so worried about NSA don't be fooled that private companies are doing the same thing. ..."
"... In communism the people learned quick they were being watched. The reaction was not to go to protest. ..."
"... Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause ..."
Apr 20, 2017 | www.youtube.com
Chad 2 years ago

"People who believe in these rights very much are forced into compromising their integrity"

I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity.

Agent76 1 year ago (edited)
January 9, 2014

500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it.

http://www.washingtonsblog.com/2014/01/government-spying-citizens-always-focuses-crushing-dissent-keeping-us-safe.html

Homa Monfared 7 months ago

I am wondering how much damage your spying did to the Foreign Countries, I am wondering how you changed regimes around the world, how many refugees you helped to create around the world.

Don Kantner, 2 weeks ago

People are so worried about NSA don't be fooled that private companies are doing the same thing. Plus, the truth is if the NSA wasn't watching any fool with a computer could potentially cause an worldwide economic crisis.

Bettor in Vegas 1 year ago

In communism the people learned quick they were being watched. The reaction was not to go to protest.

Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause......

[Dec 26, 2016] Does Code Reuse Endanger Secure Software Development?

Dec 26, 2016 | it.slashdot.org
(threatpost.com) 148 Posted by EditorDavid on Saturday December 17, 2016 @07:34PM from the does-code-reuse-endanger-secure-software-development dept. msm1267 quotes ThreatPost: The amount of insecure software tied to reused third-party libraries and lingering in applications long after patches have been deployed is staggering. It's a habitual problem perpetuated by developers failing to vet third-party code for vulnerabilities, and some repositories taking a hands-off approach with the code they host. This scenario allows attackers to target one overlooked component flaw used in millions of applications instead of focusing on a single application security vulnerability.

The real-world consequences have been demonstrated in the past few years with the Heartbleed vulnerability in OpenSSL , Shellshock in GNU Bash , and a deserialization vulnerability exploited in a recent high-profile attack against the San Francisco Municipal Transportation Agency . These are three instances where developers reuse libraries and frameworks that contain unpatched flaws in production applications... According to security experts, the problem is two-fold. On one hand, developers use reliable code that at a later date is found to have a vulnerability. Second, insecure code is used by a developer who doesn't exercise due diligence on the software libraries used in their project.
That seems like a one-sided take, so I'm curious what Slashdot readers think. Does code reuse endanger secure software development?
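Whatever one's answer, vetting has to start from an exact inventory of what is actually installed. This minimal Python sketch enumerates third-party distribution versions using only the standard library; checking each pin against a vulnerability database is assumed to be a separate, external step:

```python
# A first step toward vetting third-party code: list the exact
# versions of every installed distribution, the inventory that any
# audit against an advisory database has to start from.
from importlib import metadata

def installed_versions():
    """Map each installed distribution's name to its version string."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
            if dist.metadata["Name"]}

for name, version in sorted(installed_versions().items()):
    print(f"{name}=={version}")
```

Pinning and recording versions this way is what makes it possible to answer, months later, "were we running the vulnerable release?", the question that Heartbleed and Shellshock forced on so many teams.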

[Dec 26, 2016] Ask Slashdot: Has Your Team Ever Succumbed To Hype Driven Development?

Dec 26, 2016 | ask.slashdot.org
(daftcode.pl) 332 Posted by EditorDavid on Sunday November 27, 2016 @11:30PM from the TDD-vs-HDD dept. marekkirejczyk, the VP of Engineering at development shop Daftcode, shares a warning about hype-driven development: Someone reads a blog post, it's trending on Twitter, and we just came back from a conference where there was a great talk about it. Soon after, the team starts using this new shiny technology (or software architecture design paradigm), but instead of going faster (as promised) and building a better product, they get into trouble. They slow down, get demotivated, have problems delivering the next working version to production.
Describing behind-schedule teams that "just need a few more days to sort it all out," he blames all the hype surrounding React.js, microservices, NoSQL, and that "Test-Driven Development Is Dead" blog post by Ruby on Rails creator David Heinemeier Hansson. ("The list goes on and on... The root of all evil seems to be social media.")

Does all this sound familiar to any Slashdot readers? Has your team ever succumbed to hype-driven development?

[Sep 14, 2011] The Immaturity of CMM by James Bach

"My thesis, in this essay, is that the CMM is a particular mythology of software process evolution that cannot legitimately claim to be a natural or essential representation of software processes."

This article was originally published in the September 94 issue of American Programmer.

The Software Engineering Institute's (SEI) Capability Maturity Model (CMM) gets a lot of publicity. Given that the institute is funded by the US Department of Defense to the tune of tens of millions of dollars each year [1], this should come as no surprise: the folks at the SEI are the official process mavens of the military, and have the resources to spread the word about what they do. But, given also that the CMM is a broad, and increasingly deep, set of assertions as to what constitutes good software development practice, it's reasonable to ask where those assertions come from, and whether they are in fact complete and correct.

My thesis, in this essay, is that the CMM is a particular mythology of software process evolution that cannot legitimately claim to be a natural or essential representation of software processes.

The CMM is at best a consensus among a particular group of software engineering theorists and practitioners concerning a collection of effective practices grouped according to a simple model of organizational evolution. As such, it is potentially valuable for those companies that completely lack software savvy, or for those who have a lot of it and thus can avoid its pitfalls.

At worst, the CMM is a whitewash that obscures the true dynamics of software engineering, suppresses alternative models. If an organization follows it for its own sake, rather than simply as a requirement mandated by a particular government contract, it may very well lead to the collapse of that company's competitive potential. For these reasons, the CMM is unpopular among many of the highly competitive and innovative companies producing commercial shrink-wrap software.

A short description of the CMM

The CMM [7] was conceived by Watts Humphrey, who based it on the earlier work of Phil Crosby. Active development of the model by the SEI began in 1986.

It consists of a group of "key practices", neither new nor unique to CMM, which are divided into five levels representing the stages that organizations should go through on the way to becoming "mature". The SEI has defined a rigorous process assessment method to appraise how well an organization satisfies the goals associated with each level. The assessment is supposed to be led by an authorized lead assessor.

The maturity levels are:

1. Initial (chaotic, ad hoc, heroic)

2. Repeatable (project management, process discipline)

3. Defined (institutionalized)

4. Managed (quantified)

5. Optimizing (process improvement)

One way companies are supposed to use the model is first to assess their maturity level and then form a specific plan to get to the next level. Skipping levels is not allowed.

The CMM was originally meant as a tool to evaluate the ability of government contractors to perform a contracted software project. It may be suited for that purpose; I don't know. My concern is that it is also touted as a general model for software process improvement. In that application, the CMM has serious weaknesses.

Shrink-wrap companies, which have also been called commercial off-the-shelf firms or software package firms, include Borland, Claris, Apple, Symantec, Microsoft, and Lotus, among others. Many such companies rarely if ever manage their requirements documents as formally as the CMM describes. This is a requirement to achieve level 2, and so all of these companies would probably fall into level 1 of the model.

Criticism of the CMM

A comprehensive survey of criticism of the CMM is outside the scope of this article. However, Capers Jones and Gerald Weinberg are two noteworthy critics.

In his book Assessment & Control of Software Risks [6], Jones discusses his own model, Software Productivity Research (SPR), which was developed independently from CMM at around the same time and competes with it today. Jones devotes a chapter to outlining the weaknesses of the CMM. SPR accounts for many factors that the CMM currently ignores, such as those contributing to the productivity of individual engineers.

In the two volumes of his Quality Software Management series [12,13], Weinberg takes issue with the very concept of maturity as applied to software processes, and instead suggests a paradigm based on patterns of behavior. Weinberg models software processes as interactions between humans, rather than between formal constructs. His approach suggests an evolution of "problem-solving leadership" rather than canned processes.

General problems with CMM

I don't have the space to expand fully on all the problems I see in the CMM. Here are the biggest ones from my point of view as a process specialist in the shrink-wrap world:

Feet of clay: The CMM's fundamental misunderstanding of level 1 Organizations

The world of technology thrives best when individuals are left alone to be different, creative, and disobedient. -- Don Valentine, Silicon Valley Venture Capitalist [8]

Apart from the concerns mentioned above, the most powerful argument against the CMM as an effective prescription for software processes is the many successful companies that, according to the CMM, should not exist. This point is most easily made against the backdrop of Silicon Valley.

Tom Peters's Thriving on Chaos [9] amounts to a manifesto for Silicon Valley. It places innovation, non-linearity, and ongoing revolution at the center of its world view. Here in the Valley, innovation reigns supreme, and it is from the vantage point of the innovator that the CMM seems most lost. Personal experience at Apple and Borland, and contact with many others in the decade I've spent here, support this view.

Proponents of the CMM commonly characterize its critics as anti-process, and some of us are. But a lot of us, including me, are process specialists. We believe in the kinds of processes that support innovation. Our emphasis is on systematic problem-solving leadership to enable innovation, rather than mere process control to enable cookie-cutter solutions.

Innovation per se does not appear in the CMM at all, and it is only suggested by level 5. This is shocking, in that the most innovative firms in the software industry (e.g., General Magic, a pioneer in personal digital communication technology) operate at level 1, according to the model. This includes Microsoft, too, and certainly Borland [2]. Yet, in terms of the CMM, these companies are considered no different from any failed startup or paralyzed steel company. By contrast, companies like IBM, which by all accounts has made a real mess of the Federal Aviation Administration's Advanced Automation Project, score high in terms of maturity (according to a member of a government audit team with whom I spoke).

Now, the SEI argues that innovation is outside of its scope, and that the CMM merely establishes a framework within which innovation may more freely occur. According to the literature of innovation, however, nothing could be further from the truth. Preoccupied with predictability, the CMM is profoundly ignorant of the dynamics of innovation.

Such dynamics are documented in Thriving on Chaos, Reengineering the Corporation [4], and The Fifth Discipline [10], three well known books on business innovation. Where innovators advise companies to get flexible, the CMM advises them to get predictable. Where the innovators suggest pushing authority down in the organization, the CMM pushes it upward. Where the innovators recommend constant constructive innovation, the CMM mistakes it for chaos at level 1. Where the innovators depend on a trail of learning experiences, the CMM depends on a trail of paper.

Nowhere is the schism between these opposing world-views more apparent than on the matter of heroism. The SEI regards heroism as an unsustainable sacrifice on the part of particular individuals who have special gifts. It considers heroism the sole reason that level 1 companies succeed, when they succeed at all.

The heroism more commonly practiced in successful level 1 companies is something much less mystical. Our heroism means taking initiative to solve ambiguous problems. This does not mean burning people up and tossing them out, as the SEI claims. Heroism is a definable and teachable set of behaviors that enhance and honor creativity (as a unit of United Technologies Microelectronics Center has shown [3]). It is communication, and mutual respect. It means the selective deployment of processes, not according to management mandate, but according to the skills of the team.

Personal mastery is at the center of heroism, yet it too has no place in the CMM, except through the institution of a formal training program. Peter Senge [10] has this to say about mastery:

"There are obvious reasons why companies resist encouraging personal mastery. It is 'soft', based in part on unquantifiable concepts such as intuition and personal vision. No one will ever be able to measure to three decimal places how much personal mastery contributes to productivity and the bottom line. In a materialistic culture such as ours, it is difficult even to discuss some of the premises of personal mastery. 'Why do people even need to talk about this stuff?' someone may ask. 'Isn't it obvious? Don't we already know it?'"

This is, I believe, the heart of the problem, and the reason why CMM is dangerous to any company founded upon innovation. Because the CMM is distrustful of personal contributions, ignorant of the conditions needed to nurture non-linear ideas, and content to bury them beneath a constraining superstructure, achieving level 2 on the CMM scale may very well stamp out the only flame that lit the company to begin with.

I don't doubt that such companies become more predictable, in the way that life becomes predictable if we resolve never to leave our beds. I do doubt that such companies can succeed for long in a dynamic world if they work in their pajamas.

An alternative to CMM

If not the maturity model, then by what framework can we guide genuine process improvement?

Alternative frameworks can be found in generic form in Thriving on Chaos, which contains 45 "prescriptions", or The Fifth Discipline, which presents--not surprisingly--five disciplines. The prescriptions of Thriving on Chaos are embodied in an organizational tool called The Excellence Audit, and The Fifth Discipline Fieldbook [11], which provides additional guidance in creating learning organizations, is now available.

An advantage of these models is that they provide direction, without mandating a particular shape to the organization. They actually provide guidance in creating organizational change.

Specific to software engineering, I'm working on a process model at Borland that consists of a seven-dimensional framework for analyzing problems and identifying necessary processes. These dimensions are: business factors, market factors, project deliverables, four primary processes (commitment, planning, implementation, convergence), teams, project infrastructure, and milestones. The framework connects to a set of scalable "process cycles". The process cycles are repeatable, step-by-step recipes for performing certain common tasks.

The framework is essentially a situational repository of heuristics for conducting successful projects. It is meant to be a quick reference to aid experienced practitioners in deciding the best course of action.

The key to this model is that the process cycles are subordinated to the heuristic framework. The whole thing is an aid to judgment, not a prescription for institutional formalisms. The structure of the framework, as a set of two-dimensional grids, assists in process tailoring and asking "what if...?"

In terms of this model, maturity means recognizing problems (through the analysis of experience and use of metrics) and solving them (through selective definition and deployment of formal and informal processes), and that means developing judgment and cooperation within teams. Unlike the CMM, there is no a priori declaration either of the problems, or the solutions. That determination remains firmly in the hands of the team.

The disadvantage of this alternative model is that it's more complex, and therefore less marketable. There are no easy answers, and our progress cannot be plotted on the fingers of one hand. But we must resist the temptation to turn away from the unmeasurable and sometimes ineffable reality of software innovation.

After all, that would be immature.

Postscript 02/99

In the five years since I wrote this article, neither the CMM situation, nor my assessment of it, has changed much. The defense industry continues to support the CMM. Some commercial IT organizations follow it, many others don't. Software companies pursuing the great technological goldrush of our time, the Internet, are ignoring it in droves. Studies alleging that the CMM is valuable don't consider alternatives, and leave out critical data that would allow a full analysis of what's going on in companies that claim to have moved up in CMM levels and to have benefited for that reason.

One thing about my opinion has shifted. I've become more comfortable with the distinction between the CMM philosophy, and the CMM issue list. As a list of issues worth addressing in the course of software process improvement, the CMM is useful and benign. I would argue that it's incomplete and confusing in places, but that's no big deal. The problem begins when the CMM is adopted as a philosophy for good software engineering.

Still, it has become a lot clearer to me why the CMM philosophy is so much more popular than it deserves to be. It gives hope, and an illusion of control, to management. Faced with the depressing reality that software development success is contingent upon so many subtle and dynamic factors and judgments, the CMM provides a step-by-step plan to do something unsubtle and create something solid. The sad part is that this step-by-step plan usually becomes a substitute for genuine education in engineering management, and genuine process improvement.

Over the last few years, I've been through Jerry Weinberg's classes on management and change artistry: Problem Solving Leadership, and the Change Shop. I've become a part of his Software Engineering Management Development Group program, and the SHAPE forum. Information about all of these is available at http://www.geraldmweinberg.com. In my view, Jerry's work continues to offer an excellent alternative to the whole paradigm of the CMM: managers must first learn to see, hear, and think about human systems before they can hope to control them. Software projects are human systems; deal with it.

One last plug. Add to your reading list The Logic of Failure, by Dietrich Dorner. Dorner analyzes how people cope with managing complex systems. Without mentioning software development or capability maturity, it's as eloquent an argument against CMM philosophy as you'll find.

References

1. Berti, Pat, "Four Pennsylvania schools await defense cuts.", Pittsburgh Business Times, Jan 22, 1990 v9 n24

2. Coplien, James, "Borland Software Craftsmanship: a New Look at Process, Quality and Productivity", Proceedings of the 5th Borland International Conference, 1994

3. Couger, J. Daniel; McIntyre, Scott C.; Higgins, Lexis F.; Snow, Terry A., "Using a bottom-up approach to creativity improvement in IS development.", Journal of Systems Management, Sept 1991 v42 n9 p23(6)

4. Hammer, Michael; Champy, James, Reengineering the Corporation, HarperCollins, 1993

5. Humphrey, Watts, Managing the Software Process, ch. 2, Addison-Wesley, 1989

6. Jones, Capers, Assessment & Control of Software Risks, Prentice-Hall, 1994

7. Paulk, Mark, et al, Capability Maturity Model 1.1 (CMU/SEI-93-TR-24)

8. Peters, Tom, The Tom Peters Seminar: Crazy Times Call for Crazy Organizations, Random House, 1994

9. Peters, Tom, Thriving on Chaos: Handbook for a Management Revolution, HarperCollins, 1987

10. Senge, Peter, The Fifth Discipline, Doubleday, 1990

11. Senge, Peter, The Fifth Discipline Fieldbook, Doubleday, 1994

12. Weinberg, Gerald M., Quality Software Management, v. 1 Systems Thinking, Dorset House, 1991

13. Weinberg, Gerald M., Quality Software Management, v. 2 First-order measurement, Dorset House, 1993

Review of "The Immaturity of CMM" by James Bach. Published in American Programmer, Sept. 1994. Reviewed by Kelly Nehowig, Applied Logic Engineering.

Introduction

The paper being reviewed was written to support the thesis that the Software Engineering Institute's Capability Maturity Model (SEI CMM) is a collection of software engineering practices, organized according to a simple model of process evolution, that is not completely effective in every software organization. The author makes his case by describing six areas in which he has general problems with the CMM. This is followed by a section arguing that "level 1" organizations are completely misunderstood by the SEI and that effective software can be (and actually is) created by many level 1 organizations. Finally, the author briefly describes an alternative to the CMM that can be used as a framework for process improvement.

For the most part, I believe that the author has accurately critiqued the CMM and, from my experience, I would agree with the problems he discusses. In my mind, the CMM is a good theoretical guideline for establishing a basic understanding of the characteristics of a good software development organization, but an organization that stringently follows its processes and procedures to the letter is not guaranteed to be successful. The CMM does not deal effectively with innovation issues and people issues. Nor does it reconcile the fact that many successful software organizations can claim attributes associated with four (or sometimes all five) of the CMM levels but, under the rules established by the CMM, would officially be designated Level 1 organizations, which unfairly describes their capabilities.

Summary of the Reviewed Article

The article is broken into several sections that describe the CMM in general, the problems that the author has with the CMM, an alternative to the CMM, and a postscript that was added to the original paper in February of 1999.

Brief description of the CMM

The author describes the CMM as a group of key practices that are divided into five levels representing various maturity levels that organizations should go through on their way to becoming "mature".

The author lists the CMM levels as follows:

  1. Initial (chaotic, ad hoc, heroic)

  2. Repeatable (project management, process discipline)

  3. Defined (institutionalized)

  4. Managed (quantified)

  5. Optimizing (process improvement)

The author states that the original intent of the CMM was that of a tool to evaluate the ability of government contractors to perform a contracted software project. His primary concern is that many tout the CMM as a general model for process improvement and he believes that in this area, it has many weaknesses.

General Problems with the CMM

The author describes six basic problem areas that he has identified with the CMM:

  1. The CMM has no formal theoretical basis; it is based on the experience of "very knowledgeable people". Because of this lack of theoretical grounding, any other model based on the experiences of other experts would have equal validity.

  2. The CMM does not have good empirical support, and the same empirical evidence could also be construed to support other models. Without a comparison of alternative process models under a controlled study, an empirical case cannot be built to substantiate the SEI's claims regarding the CMM. The model is based primarily on the experiences of large government contractors and on Watts Humphrey's own experience in the mainframe world. It does not represent the successful experiences of many shrink-wrap companies that would be judged "level 1" organizations by the CMM.

  3. The CMM ignores the importance of people involved with the software process by assuming that processes can somehow render individual excellence less important. In order for this to be the case, problem-solving tasks would somehow have to be included in the process itself, which the CMM does not begin to address.

  4. The CMM reveres the institutionalization of process for its own sake. This guarantees nothing and in some cases, the institutionalization of processes may lead to oversimplified public processes, ignoring the actual successful practice of the organization.

  5. The CMM does not effectively describe any information on process dynamics, which confuses the study of the relationships between practices and levels within the CMM. The CMM does not perceive or adapt to the conditions of the client organization. Arguably, most and perhaps all of the key practices of the CMM at its various levels could be performed usefully at level 1, depending on the particular dynamics of an organization. Instead of modeling these process dynamics, the CMM merely stratifies them.

  6. The CMM encourages the achievement of a higher maturity level in some cases by displacing the true mission, which is improving the process and overall software quality. This may effectively "blind" an organization to the most effective use of its resources.

The author's most compelling argument against the CMM is the many successful software companies that, according to the CMM, should not exist. Many companies that provide "shrink-wrap" software, such as Microsoft, Symantec, and Lotus, would definitely be classified by the CMM as level 1 companies. In these companies, innovation reigns supreme, and it is from the perspective of the innovator that the CMM seems lost.

The author claims that innovation per se does not appear in the CMM at all, and is only suggested by level 5. Preoccupied with predictability, the CMM is ignorant of the dynamics of innovation. In fact, where innovators advise companies to be flexible, to push authority down into the organization, and to pursue constant constructive innovation, the CMM mistakes all of these attributes for the chaos it associates with level 1 companies. Because the CMM is distrustful of personal contributions, ignorant of the environment needed to nurture innovative thinking, and content to bury organizations under an ineffective superstructure, achieving level 2 on the CMM scale may actually destroy the very thing that made the company successful in the first place.

The author discusses the issue of "heroism", defined as individual effort beyond the call of duty to make a project successful. The SEI regards heroism as a negative: an unsustainable sacrifice demanded of people with special gifts. It considers heroism the sole reason that Level 1 companies can survive. The author offers a different definition of heroism: taking initiative to solve ambiguous problems. He claims that this is a definable and teachable set of behaviors that enhance creativity, leading to personal mastery of the subject matter. In his opinion, heroism is not a negative; it is a requirement of most successful organizations.

As an alternative to the CMM, the author introduces the idea of a framework based on heuristics for conducting successful projects. The key to this model is that it is an aid for judgment, not a prescription for institutional formalisms. In this model, maturity means recognizing problems through the analysis of experience and the use of metrics and to solve them through selective definition and deployment of processes.

This process model consists of a seven-dimensional framework for analyzing problems and identifying the correct processes. These dimensions include: business factors, market factors, project deliverables, four primary processes (commitment, planning, implementation, and convergence), teams, project infrastructure, and milestones. This framework connects to a set of processes that are repeatable for performing certain common tasks.

In an addendum to the original thesis, the author comments that not much has changed in his opinion on the CMM in the five years since originally writing his paper. Some software companies are successfully using the CMM in their organizations, but many, including most of the newer Internet-based software companies, are not using the CMM.

The author does comment on one shift in his thinking: he has become more comfortable with the distinction between the CMM philosophy and the CMM issue list. He now believes that using the CMM to identify a list of issues worth addressing in the course of overall software improvement may be useful, but that it should not be adopted as a philosophy for good software engineering.

For the most part, I agree with the author's assessment of the CMM. Some of his arguments seem weaker than others, but I believe they are valid.

Because the CMM has no theoretical basis and no empirical proof, it loses value from an academic point of view. Although this (in my opinion) is one of the author's weaker arguments, it is important for substantiating the claims made by the SEI. Without theoretical grounding, and lacking empirical support based on a comparison of alternative models under a controlled study, the SEI's case for promoting the CMM as the optimal model for software development is weakened.

The author's implication that the CMM institutionalizes process for its own sake without regard to current practices is an accurate assessment in my view. I have seen organizations that implement policies without regard to current organizational practices (some of which are quite successful). The result is a confused development group that gets a series of mixed messages from "management" which do not necessarily improve the development process.

Another key fault in the CMM described by the author is the overriding pressure to move to the next maturity level, sometimes at the cost of ignoring the true mission, which is the quality of the software product. The factors important in moving up to the next level may not benefit a particular organization and its products, because all subjectivity is removed: what benefits one organization may not have the same effect in another.

The author's claims about heroism are interesting, but I differ slightly with the conclusions that he draws. I agree that heroism, as defined by taking the initiative to solve ambiguous problems, is critical to the success of an organization. However, in my experience, heroism is a trait that is difficult to teach. I believe it is inherent within the individual for the most part and, without a good process model, can be abused by the organization in order to accomplish its goals.

Probably one of the more important points that the author touches on is the CMM's implied claim of the importance of process over people. It has been my experience that processes are not direct substitutes for the quality of the development team personnel. In other words, the right organizational processes can improve the output of a group of talented software developers, but they do not create one. By ignoring this critical item, the CMM loses credibility with anyone experienced with a wide range of development teams.

A striking example that runs throughout the author's thesis is the number of software companies that probably exist at CMM Level 1 but are incredibly successful. Microsoft is a prime example: although it does not model its organization in a manner that the SEI considers "mature", its products mostly meet or exceed customers' needs, and this makes it a very successful company.

In considering the alternatives to the CMM, I believe that the author is correct in his assertion that a model based on past experience and the use of metrics is probably more effective in practice than the CMM. The implementation of such a model is based on selective definition of problems and selective deployment of specific processes.

... ... ...

Laser Summer School - Lectures

Title: Lean software development

Speaker: Mary Poppendieck

Description: As global competitiveness comes to the software development industry, the search is on for a better way to create first-class software rapidly, repeatedly, and reliably. Lean initiatives in manufacturing, logistics, and services have led to dramatic improvements in cost, quality and delivery time; can they do the same for software development? The short answer is "Absolutely!"
Of the many methods that have arisen to improve software development, Lean is emerging as one that is grounded in decades of work on understanding how to make processes better. Lean thinking focuses on giving customers what they want, when and where they want it, without a wasted motion or a wasted minute.

You will learn how to:

Outline:

Lecture 1 – Overview

IRIS 29

Software Process Improvement centers on three goals: productivity, quality, and predictability. Productivity we normally understand as a measure of the effort required to deliver a product. Quality is related to meeting requirements; defects are deviations from requirements. Predictability concerns our ability to predict process performance using historical data and the principles of statistical process control. In CMMI we thus see process performance as a measure of the actual results achieved by following a process. As examples of process measures, CMMI identifies effort, cycle time, and defect removal efficiency, while product measures could be reliability, defect density, or response time. Obviously this means that traditional SPI is about establishing a known base of established practices, and improved process performance is about discrete variations from this base in otherwise repeated practices. Planning, stability, and repetition are cornerstones of professional software development according to this view.
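To make one of the CMMI process measures named above concrete, defect removal efficiency (DRE) is conventionally computed as the fraction of all eventually known defects that were removed before release. A minimal sketch; the function name and the sample numbers are invented for illustration:

```python
def defect_removal_efficiency(pre_release_defects: int, field_defects: int) -> float:
    """DRE = defects removed before release / total defects eventually known.

    `field_defects` are those that escaped to customers and were found later.
    """
    total = pre_release_defects + field_defects
    if total == 0:
        return 1.0  # no defects found anywhere; treat removal as perfect
    return pre_release_defects / total

# Hypothetical project: 450 defects caught in review/test, 50 escaped to the field.
print(defect_removal_efficiency(450, 50))  # 0.9, i.e., 90% DRE
```

Note that DRE can only be computed retrospectively, once field data has accumulated, which is one reason such process measures favor long-running, repeated projects.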

My questions are simple, perhaps even naive:

Lasse's weblog - CMM critique in CIO.com

I know this is ancient news in the Internet timescale, but...

Without commenting on the fraud aspect of the article, it still hurts to realize that CXOs believe a CMM certification has anything to do with the quality of the software process... As someone mentioned in a past CMM assessment recap meeting, CMM (as well as ISO 900X) is about making sure you're following the process you say you're following. Nothing more, nothing less.

Posted by lasse on February 1, 2005 9:34:29 AM EET

Re: CMM critique in CIO.com

Yeah, at my previous job we were bought out by a very large defense contractor with a CMM 3 rating. So, in order to make sure we also operated at CMM level 3, they gave everyone in the company two days of training. Then they declared that we were a CMM level 3 shop. Woohoo!

Re: CMM critique in CIO.com

I worked with a level 5 certified vendor. After a little while we realized the certification does not matter (at least with that vendor), and we had to rewrite a lot of stuff due to their thousand internal process problems, turnover of employees, etc.

Comment from Kishore Dandu on February 1, 2005 11:11:13 PM EET

Re: CMM critique in CIO.com

It's easy to regress back to Tayloristic thinking once "your team" becomes "your IT organization" or "your software factory" or "your offshore center". If you're holding a CMM level 5 assessment, you're definitely not the smallest headcount in town and you probably don't know even half of your coworkers by face, let alone by name. In that setting, it's difficult to keep in mind that software development is about people first and foremost.

Comment from Lasse on February 2, 2005 12:09:57 AM EET

[Feb 9, 2007] The Business of Software - CMM Level 5

You can have a defined, repeatable, optimized process in place, but still have bad programmers.

In India, there is a fashion: companies like to advertise themselves as CMM level 5. I wonder if it's a scam (either it is easy to get such certification, or these companies are lying). I had joined such a company last year, and I was shocked at the way they worked. I was assigned to write a proof-of-concept for an Ajax library, which was nothing but code taken from a website (I was told their team had developed it). There were other documents that I had to prepare.
I wonder if you find such companies in the US as well. How good are they?

jam
Friday, February 09, 2007

====

India has a huge drive for companies to get CMM level 5 accreditation, because it is seen as a requirement in the West. The Dilbert managers may not have a clue about the benefits (and losses) involved in outsourcing, but they are easily won over with Gartner reports and CMM accreditation. So while it may not be easy to get CMM 5, I think overall it has very little bearing on the internal messiness of a company.

Simon@AutoUpdate+
Saturday, February 10, 2007

====

"So while it may not be easy to get CMM5, I think overall it has very little bearing on the internal messiness of a company. "

Actually, it has MUCH bearing on their internal organization... the development process is what is judged. Yes, it is very difficult. I work for an upper Fortune 50 company and our local dev shop worked very, very hard to get CMM level 3. Either all those shops are lying that they have level 5, or the CMM judges in India are easily bought. I am not saying that there are no CMM L5 companies in India... just that if there are, they are very few and far between.

DH
Saturday, February 10, 2007

====

When I was working at a large company as a contractor, we outsourced some GUI components to a CMM 5 company in India. I was not impressed with their quality. There was a lot of turnover in the company, both developers and managers. Code reviews showed very junior coding. Project wizards did not function the way they should.

To their credit, they did EXACTLY what the specifications said, even if they did not make sense to a developer. Our management was just bad at writing the specs ;-)

Remember CMM is about the process of making software, making it repeatable and optimizing it. You can have a defined, repeatable, optimized process in place, but still have bad programmers. They just have to be able to follow the process that is in place.

SteveM Send private email
Saturday, February 10, 2007

====

The cio.com article is pretty good, but the softpanorama.org article is pretty selective about its quotes. He quotes:

"In fact, the study found that Level 5 companies on average had higher defect rates than anyone else."

The full quote says:

"In fact, the study found that Level 5 companies on average had higher defect rates than anyone else. But Reasoning did see a difference when it sent the code back to the developers for repairs and then tested it again. The second time around, the code from CMM companies improved, while the code from the non-CMM companies showed no improvement."

Of course, both articles fail to mention that the reason CMM-5 companies show more defects per line of code than CMM-1 companies is that the CMM-5 companies actually know how many bugs they have, because they have an actual repeatable and accurate quality control process, and the CMM-1 company doesn't. What would you rather have? More defects from a company that actually measures how many defects it has, or 'fewer defects' from a company that has absolutely no idea how many defects it has, so it is just making up a random number?

Meghraj Reddy
Sunday, February 11, 2007

Bursting the CMM Hype - Software Quality - CIO Magazine, Mar 1, 2004

Nice quote: "They said they were Level 4, but in fact they had never been assessed"

Truth in Advertising

Stories about false claims abound. Ron Radice, a longtime lead appraiser and former official with the SEI, worked with a Chicago company that was duped in 2003 by an offshore service provider that falsely claimed to have a CMM rating. "They said they were Level 4, but in fact they had never been assessed," says Radice, who declined to name the guilty provider.

... ... ...

How Much for That Certification?

Appraisers continue to cheat too, according to their colleagues. The pressure on appraisers, in fact, is higher than ever today, especially with offshore providers competing in the outsourcing market. Frank Koch, a lead appraiser with Process Strategies Inc., another software services consultancy, says some Chinese consulting companies he dealt with promised a certain CMM level to clients and then expected him to give it to them. "We don't do work for certain [consultancies in China] because their motives are a whole lot less than wholesome," he says. "They'd say we're sure [certain clients] are a Level 2 or 3 and that's unreasonable, to say nothing of unethical. The term is called selling a rating."

... ... ...

A quick Nexis search revealed four companies - Cognizant, Patni, Satyam and Zensar - claiming "enterprise CMM 5," with no explanation of where the assessments were conducted, how many projects were assessed, or by whom. Dozens more companies trumpet their CMM levels with little or no explanation.

Indeed, all of the services companies we interviewed for this story claimed that their CMM assessments applied across the company when in fact only 10 percent to 30 percent of their projects were assessed.

cmm fraud - Reader Comment - CIO

" They then got a CMM level 4 rating ... all fraudulently. They create a database of training records and classes... populate it and show the auditor LOOK at all the classes... after they get the level they cancel ALL training and proceed with biddness as usual. Pathetic. "
I am a subcontractor at a large defense corp. They claimed to be CMM Level 3 when I started 5 years ago. Yet they didn't do training or peer reviews. They then got a CMM Level 4 rating ... all fraudulently. They create a database of training records and classes... populate it and show the auditor LOOK at all the classes... after they get the level they cancel ALL training and proceed with biddness as usual. Pathetic. They are now trying for Level 5, despite not having ANY business need for any products beyond Level 3. Our quality has not changed one bit in 5 years. The workers are more miserable now and spend less time on the PRODUCT than they do on CMM bullshat. It's only dedicated WORKERS that ensure the customer still gets a quality product. Management is insane with this CMM quest even though it has NO ADDED VALUE whatsoever, and none of our current customers care about paying for anything other than Level 3.

InformIT Comments on the Article The Future of Outsourcing: September 11, 2011 by Alan Gore

" CMM cert process is itself subject to manipulation and fraud by the fact that anybody can submit any project (even one they didn't do) for review to the people at Carnegie Mellon. "

If it's not clear, I meant to say that the CMM cert process is itself subject to manipulation and fraud by the fact that anybody can submit any project (even one they didn't do) for review to the people at Carnegie Mellon.

The "true believers" refers to those at CM and elsewhere who continue to preach "Software Engineering" when the vast majority of its adherents cannot reliably or even consistently produce success from project to project. No one who has far more failures than successes when using his own methods is in a position to lecture others on the "right way" to make successful software. Once again, the emperor has no software-project magic fix, and processes which demand innate skill cannot be mass-produced in a population without that innate skill. Get over it.

Durba, your idiotic generalization will make you nice fodder for the next c by markusbaccus OCT 09, 2003 02:23:05 AM

The CMM is a cert in that it rates a company's adoption of an apparently unquestionable methodology which has a 2/3 rate of failure. It is the logical equivalent of saying, "If you don't blow on those dice three times before you roll them, you only have a one in six chance of rolling a six." Umm... prove it.

Do me a favor: learn how to recognize logically fallacious arguments like an "appeal to authority" or a "non sequitur". ("Why isn't the SEI doing something about it?" == the fallacious belief that the SEI is in a position to adequately identify fraud merely because it is a recognizable authority, or that it would even have an incentive to do so; e.g., "He is an expert in physics, so he would never lie to protect his project's funding.") Oh, and since we're on it, you implicitly made an error of misplaced deduction when you missed my point. (E.g., "I lit one match, so all matches will light." I.e., it may be true that ONE project met the standards of Capability Maturity Model Level 5, but that is not an indicator of whether that company really lives up to those standards, which rests upon a statistically insignificant sampling of people: one guy who is self-selected to be non-technical, or else they would have no need to offshore their work to your company, now would they??? Duh!)

Here's a clue Durba: Offshoring is not due to a shortage of American talent, it's due to a shortage of American talent who could afford to live in America on $10 per hour. Now, drawing upon my many years of experience with teams from many nationalities, it may surprise you to know that I would estimate that about one in ten IT workers are worth their pay, the other nine are worthless or a menace, and this ratio holds true regardless of their nationality (Although Eastern Europeans do seem to do much better than 10%). Since you guys merely adopted our IT training and introduced no new methods (unlike the communist bloc countries), I would suggest that this should surprise no one who thought about it.

Continuation for Durba so he can catch the clue train.

by markusbaccus OCT 09, 2003 02:26:21 AM

If you want to go down the road of idiotic generalizations about particular nationalities, I could tell many stories of *real* one-dimensional thinking by Indian techs which led to far more catastrophic results than inconveniencing you with a non-consequential question. If such a trivial issue is your idea of bad, it makes me wonder if you even know what bad is. Since you're using a web browser (undoubtedly IE) as your FTP client, I can only imagine how lost your team would be if you Windows-jockeys had to rely upon a command line FTP client, which of course would never have such a problem and would have superior performance to IE's lame-ass implementation. Maybe the guy didn't know to look in his browser settings because he actually is used to using a different and better tool for the job than you are?

That wouldn't surprise me, because I've met many Indians who seem to have a special gift for assuming they know better than people with many times their experience and for ignoring what they are told until after the predictable disaster strikes, at which time they usually act like they have discovered something remarkable all by themselves or become strangely silent as they scramble to fix their opus to fuckology. People like that will almost inevitably need to rely upon protectionism, nationalistic prejudice, and nepotism if they want to keep their jobs in the face of global competition.

Which, since we're on the topic, Durba, let me ask you a simple question: How are you going to keep your job when you have to compete with people who will work for $3.00 USD per hour, or worse, $7 a day? What worth will your four year degree be then, genius? Get it yet? Think about it. Wipro is already working the Vietnam angle for when you guys get uppity. Given that little reality, your heyday won't last for four decades like ours did. Maybe an American will bail you out when someone finally convinces a critical mass of managers that development quality, not cost, is what leads to better ROI. Then only the truly skilled will do well.

Past history supports Alan's view

by gerbilinheat OCT 06, 2003 09:58:51 AM

Most of us recall the flight of aircraft engineers / aerospace technicians in the late 1980's after the meltdown of the Reagan Perpetual War Budget that resulted in the Reagan and Bush tax increases on the middle class.

Ultimately, we wound up with Lockheed retiring from the commercial aircraft business entirely, and with McDonnell Douglas and Boeing both suffering in worldwide sales from the British-French consortium Aérospatiale and its world-class Airbus series.

Currently, China, Thailand, Burma, Peru and several U.S. carriers are going Airbus.
All these steps, and these identical results, occurred in the steel, aluminum, automobile, shipbuilding and textile industries. NONE have returned to significant and lasting profitability to date.

Simply, if you let go of your expertise, you let go of your market.

The economy!

by Harley OCT 06, 2003 02:58:20 PM

Ignoring the issue of religion (we really don't need to travel down that rabbit hole), the real issue that no one has talked about here is the impact on the economy. Simple math: replace a 100K software job with a 30K job, and the baker, butcher, laundry, auto, home repair, etc., that the 100K software job supported are gone also. This is simple trickle-down poverty for America! For heaven's sake, the US government is sending contract software jobs overseas while millions of unemployed Americans are capable of doing the work. Overseas outsourcing needs to be controlled now! Whether you believe Wall Street or not, the economy has not hit bottom yet, and I believe it is just taking a breath before it plunges much further. Sometimes people need to hear the radical extreme to open their eyes to what could happen.

Recommended Links

[PDF] The Immaturity of CMM by James Bach (formerly of Borland). A classic critique of the CMM.

Bursting the CMM Hype - Software Quality - CIO Magazine, Mar 1, 2004, BY CHRISTOPHER KOCH. U.S. CIOs want to do business with offshore companies with high CMM ratings, but some outsourcers exaggerate and even lie about their Capability Maturity Model scores. Here is why CIOs should never take CMM ratings at face value: only by asking tough questions will CIOs be able to distinguish between the companies that are exaggerating their CMM claims and those that are focused on real improvement.

Software Capability Maturity Model Level Two (SW-CMM)

Review of "The Immaturity of CMM" by James Bach. Published in American Programmer, Sept. 1994. Reviewed by Kelly Nehowig, Applied Logic Engineering.

Introduction

The paper being reviewed was written to support the thesis that the Software Engineering Institute's Capability Maturity Model (SEI CMM) is a collection of software engineering practices, organized according to a simple model of process evolution, that is not completely effective in every software organization. The author makes his case by describing six areas in which he has general problems with the CMM. This is followed by a section outlining the author's claim that a "level 1" organization is completely misunderstood by the SEI and that effective software can be (and actually is) created by many level 1 organizations. Finally, the author briefly describes an alternative to the CMM that can be used as a framework for process improvement.

For the most part, I believe that the author has accurately critiqued the CMM and, from my experience, I would agree with the problems he discusses. In my mind, the CMM is a good theoretical guideline for establishing a basic understanding of the characteristics of a good software development organization, but an organization that stringently follows its processes and procedures to the letter is not guaranteed to be successful. The CMM does not deal effectively with innovation issues and people issues. It also does not reconcile the fact that many successful software organizations can claim various attributes associated with four (or sometimes all five) of the CMM levels but, due to the rules established by the CMM, would officially be designated a Level 1 organization, which unfairly understates those organizations' capabilities.

Summary of the Reviewed Article

The article is broken into several sections that describe the CMM in general, the problems that the author has with the CMM, an alternative to the CMM, and a postscript that was added to the original paper in February of 1999.

Brief description of the CMM

The author describes the CMM as a group of key practices that are divided into five levels representing various maturity levels that organizations should go through on their way to becoming "mature".

The author lists the CMM levels as follows:

  1. Initial (chaotic, ad hoc, heroic)

  2. Repeatable (project management, process discipline)

  3. Defined (institutionalized)

  4. Managed (quantified)

  5. Optimizing (process improvement)

The author states that the original intent of the CMM was that of a tool to evaluate the ability of government contractors to perform a contracted software project. His primary concern is that many tout the CMM as a general model for process improvement and he believes that in this area, it has many weaknesses.

General Problems with the CMM

The author describes six basic problem areas that he has identified with the CMM:

  1. The CMM has no formal theoretical basis and in fact is based on the experience "of very knowledgeable people". Because of this lack of theoretical proof, any other model based on experiences of other experts would have equal veracity.

  2. The CMM does not have good empirical support, and this same empirical support could also be construed to support other models. Without a comparison of alternative process models under a controlled study, an empirical case cannot be built to substantiate the SEI's claims regarding the CMM. Primarily, the model is based on the experiences of large government contractors and on Watts Humphrey's own experience in the mainframe world. It does not represent the successful experiences of many shrink-wrap companies that are judged to be "level 1" organizations by the CMM.

  3. The CMM ignores the importance of people involved with the software process by assuming that processes can somehow render individual excellence less important. In order for this to be the case, problem-solving tasks would somehow have to be included in the process itself, which the CMM does not begin to address.

  4. The CMM reveres the institutionalization of process for its own sake. This guarantees nothing and in some cases, the institutionalization of processes may lead to oversimplified public processes, ignoring the actual successful practice of the organization.

  5. The CMM does not effectively describe any information on process dynamics, which confuses the study of the relationships between practices and levels within the CMM. The CMM does not perceive or adapt to the conditions of the client organization. Arguably, most and perhaps all of the key practices of the CMM at its various levels could be performed usefully at level 1, depending on the particular dynamics of an organization. Instead of modeling these process dynamics, the CMM merely stratifies them.

  6. The CMM encourages the achievement of a higher maturity level in some cases by displacing the true mission, which is improving the process and overall software quality. This may effectively "blind" an organization to the most effective use of its resources.

The author's most compelling argument against the CMM is the many successful software companies that, according to the CMM, should not exist. Many software companies that provide "shrink wrap" software, such as Microsoft, Symantec, and Lotus, would definitely be classified by the CMM as level 1 companies. In these companies, innovation reigns supreme, and it is from the perspective of the innovator that the CMM seems lost.

The author claims that innovation per se does not appear in the CMM at all and is only hinted at by level 5. Preoccupied with predictability, the CMM is ignorant of the dynamics of innovation. In fact, where innovators advise companies to be flexible, to push authority down into the organization, and to pursue constant constructive innovation, the CMM mistakes all of these attributes for the chaos it associates with level 1 companies. Because the CMM is distrustful of personal contributions, ignorant of the environment needed to nurture innovative thinking, and content to bury organizations under an ineffective superstructure, achieving level 2 on the CMM scale may actually destroy the very thing that made the company successful in the first place.

The author discusses the issue of "heroism", defined as individual effort beyond the call of duty to make a project successful. The SEI regards heroism as a negative: an unsustainable sacrifice imposed on people with special gifts, and the sole reason that Level 1 companies can survive. The author offers a different definition of heroism – taking the initiative to solve ambiguous problems. He claims that this is a definable and teachable set of behaviors that enhance creativity, which leads to personal mastery of the subject matter. In his opinion, it is not a negative but a requirement of most successful organizations.

As an alternative to the CMM, the author introduces the idea of a framework based on heuristics for conducting successful projects. The key to this model is that it is an aid for judgement, not a prescription for institutional formalisms. In this model, maturity means recognizing problems through the analysis of experience and the use of metrics, and solving them through the selective definition and deployment of processes.

This process model consists of a seven-dimensional framework for analyzing problems and identifying the correct processes. These dimensions include: business factors, market factors, project deliverables, four primary processes (commitment, planning, implementation, and convergence), teams, project infrastructure, and milestones. This framework connects to a set of processes that are repeatable for performing certain common tasks.
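Bach's seven dimensions, as summarized above, can be pictured as a simple record that a project fills in before selecting processes. The sketch below is my own illustration; the field names are paraphrases of the review's summary, not identifiers from Bach's paper.

```python
# Hypothetical sketch of the seven-dimensional framework as a data structure.
# Field names paraphrase the review's summary; they are not from Bach's paper.
from dataclasses import dataclass, field

@dataclass
class ProjectFrame:
    business_factors: list = field(default_factory=list)
    market_factors: list = field(default_factory=list)
    deliverables: list = field(default_factory=list)
    # The four primary processes named in the summary:
    primary_processes: dict = field(default_factory=lambda: {
        "commitment": None,
        "planning": None,
        "implementation": None,
        "convergence": None,
    })
    teams: list = field(default_factory=list)
    infrastructure: list = field(default_factory=list)
    milestones: list = field(default_factory=list)

# A project analyzes its situation along these dimensions, then selects
# repeatable processes for the common tasks it actually faces.
frame = ProjectFrame(deliverables=["v1.0 installer", "user manual"])
```

The point of the structure is Bach's: the framework is an aid for judgement about which processes to deploy, not a ladder of levels to climb.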

In an addendum to the original thesis, the author comments that not much has changed in his opinion on the CMM in the five years since originally writing his paper. Some software companies are successfully using the CMM in their organizations, but many, including most of the newer Internet-based software companies, are not using the CMM.

The author does comment on one shift in his thinking – that is, the fact that he has become more comfortable with the idea of using the CMM as a basic philosophy and not as an issues list. He now believes that using the CMM to identify a list of issues worth addressing in the course of overall software improvement may be useful, but that it should not be adopted as a philosophy for good software engineering.

For the most part, I agree with the author's assessment of the CMM. Some of his arguments seem weaker than others, but I believe they are valid.

Because the CMM has no theoretical basis and no empirical proof, it loses value from an academic point of view. Although this is (in my opinion) one of the author's weaker arguments, it is important for substantiating the claims made by the SEI. Without theoretical proof, and without empirical support based on a comparison of alternative models under a controlled study, the SEI's case for promoting the CMM as the optimal model for software development is weakened.

The author's implication that the CMM institutionalizes process for its own sake without regard to current practices is an accurate assessment in my view. I have seen organizations that implement policies without regard to current organizational practices (some of which are quite successful). The result is a confused development group that gets a series of mixed messages from "management" which do not necessarily improve the development process.

Another key fault in the CMM described by the author is the overriding pressure to move to the next maturity level, sometimes by ignoring the true mission, which is the quality of the software product. The factors important in moving up to the next level may or may not necessarily benefit the organization and its products because all subjectivity is removed. What benefits one organization may not have the same effect in another organization.

The author's claims about heroism are interesting, but I differ slightly with the conclusions that he draws. I agree that heroism, as defined by taking the initiative to solve ambiguous problems, is critical to the success of an organization. However, in my experience, heroism is a trait that is difficult to teach. I believe it is inherent within the individual for the most part and, without a good process model, can be abused by the organization in order to accomplish its goals.

Probably one of the more important points that the author touches on is the CMM's implied claim of the importance of process over people. It has been my experience that processes are not direct substitutes for the quality of the development team personnel. In other words, the right organizational processes can improve the output of a group of talented software developers, but they do not create one. By ignoring this critical item, the CMM loses credibility with anyone experienced with a wide range of development teams.

A striking example that is prevalent throughout the author's thesis is the number of software companies that probably exist at CMM Level 1, but that are incredibly successful. Microsoft is a prime example – although they do not model their organization in a manner that the SEI considers to be "mature", their products mostly meet or exceed the customer's needs and this creates a very successful company.

In considering the alternatives to the CMM, I believe that the author is correct in his assertion that a model based on past experience and the use of metrics is probably more effective in practice as compared to the CMM. The implementation of such a model is based on selective definition of problems and the selective deployment of specific processes.

... ... ...

Official and semi-official documents

Chapter 2 - Process A Generic View Links to official documents

Capability Maturity Model® Integration (CMMI℠), Version 1.1
Continuous Representation [PDF]

Carnegie Mellon Software Engineering Institute

This report is on Capability Maturity Models (CMMs) and Capability Maturity Model Integration (CMMI). Model components, model terminology, capability levels and generic model components, framework interactions, using CMMI models, and process areas are detailed.

Capability Maturity Model® Integration (CMMI℠), Version 1.1
Staged Representation [PDF]

Carnegie Mellon Software Engineering Institute

This report is on Capability Maturity Models (CMMs) and Capability Maturity Model Integration (CMMI). Model components, model terminology, common features, generic goals and generic practices, framework interactions, using CMMI models, and process areas are detailed.

CMMI Version 1.1 Tutorial [also available in PDF]
Mike Phillips

This slide presentation addresses: why to focus on Process, why to use a model, CMMI structure, comparisons with SW-CMM v1.1, SE-CMM, and EIA/IS 731, a process areas overview, appraisal methodology, and training.

CMMI-SE/SW V1.1 to SW-CMM V1.1 Mapping [PDF]
USAF Software Technology Support Center (STSC)

Contains mappings of the Capability Maturity Model for Software (SW-CMM) Version 1.1 to and from the Capability Maturity Model Integration - Systems Engineering/Software (CMMI-SE/SW/IPPD) Version 1.1, prepared by the Software Technology Support Center to answer the questions "What does this mean to me?" and "How does this compare to what I am already doing with regard to an existing model?". Also includes sections on SW-CMM key process areas, CMMI-SE/SW specific practices, and how to read the maps.

KPA Summary of the Evolution of the SEI's Software CMM® [PDF]
Mark C. Paulk

Contains tables that summarize the general changes at the key process area (KPA) level between different iterations of the "CMM" as it evolved from a software process maturity framework in 1987 to 1993's Software CMM v1.1.

Using the Software CMM® in Small Organizations [PDF]
Mark C. Paulk

This paper discusses how to use the CMM in any business environment but focuses on the small organization with the use of examples. The conclusion of this paper is that using the CMM may be different in degree between small or large projects or organizations, but they are not different in kind.

Using the Software CMM® With Good Judgement [PDF]
Mark C. Paulk

This paper discusses how to use the CMM in any organization but focuses on small organizations, rapid prototyping projects, maintenance shops, R&D outfits, and other environments, with the use of examples. It concludes that the issues of interpreting the CMM are the same for any organization; they may differ in degree, but they do not differ in kind.

Examples of CMM hype

[PDF] A Mature View of the CMM

[May 05, 2005] Quality lures software outsourcing By Nicholas Zamiska, The Wall Street Journal

India became a software outsourcing hub by reassuring multinational clients it could compete on quality as well as on cost. Now that quality movement is rapidly spreading around the globe, as other countries pursue the same strategy.

The emphasis on quality is almost a no-brainer when it comes to outsourcing such demanding work as software development, where a small error can undermine an entire project. No matter the cost savings, turning that work over to strangers would be impractical without some means to control against quality risks.

"You're entering the unknown," says Neil McKearney, a software manager with the Swiss arm of France Telecom SA's cellphone unit, Orange, which recently signed a software-development contract with Tata Consultancy Services, the Bombay outsourcing titan. No one in his group has ever traveled to India to meet the software developers working on the project, and Mr. McKearney says no one needs to, because the quality of Tata's work has been certified.

The gold standard in the quality-certification business is the Capability Maturity Model, or CMM, which sets out the specific steps needed for an effective development process. The CMM was conceived by the Software Engineering Institute at Carnegie Mellon University in Pittsburgh, a group funded by the U.S. government, which wanted a standardized way to assess the work of contractors. CMM certification is awarded by consulting firms that charge companies to evaluate and train their personnel in CMM methods.

CMM has become so popular in India that a technician on a recent visit there saw the CMM logo stamped on a burlap bag of basmati rice, presumably endorsing the grain.

"If you don't have the quality certification, you're not even considered," says Dion Wiggins, a Hong Kong-based analyst at the Stamford, Conn., consultancy Gartner Inc. "It's a must-have."

But now the same standards that allowed Indian companies to lure business from Europe and the U.S. have begun to migrate, helping upstarts from Chile to Egypt to Vietnam chip away at India's outsourcing empire. Despite a shortage of English speakers and skilled programmers, China is pre-eminent among them.

In 2002, only 18 Chinese companies were CMM-certified, compared with 153 Indian ones, according to the Software Engineering Institute. Now that number has climbed to 243, compared with the 387 Indian companies that are accredited.

One of the companies certified at CMM's highest level is Bamboo Networks, which is based in Hong Kong while most of its employees are in mainland China. Bamboo's client list now includes the likes of Credit Suisse Group's Credit Suisse First Boston and Bank of America Corp.

When Bamboo was founded in 1999, costs were out of control and some projects were unprofitable, says Gene T. Kim, the company's 35-year-old chief executive. So Mr. Kim, who had left a hedge fund in New York to found the software company, asked three of his top managers to spend a month researching what the Indian companies were doing that Bamboo wasn't. They came back to him and said that a quality certification was a must.

"It's the passport to the global market," says Mr. Kim.

As part of its bid to become certified, Bamboo created more than 250 types of forms and checklists that programmers need to fill out as they type code. Some resisted the transition to the regimented process, Mr. Kim says -- and were fired.

Now the company is profitable and looking to expand. When Bamboo began, it found it could bill its customers at only $14 an hour. After accreditation, its rate shot up to $20 an hour. In wooing new clients, Mr. Kim says, CMM is "the first thing we mention and the last thing we mention."

Critics of CMM complain that companies boast of being CMM-rated when perhaps only one or two divisions have earned the distinction. The Software Engineering Institute has received complaints of fraud from corporate clients.

"Is it a perfect answer? No, it's not," says Michael Phillips, a program manager at the institute. "The opportunity for abuse is there." Indeed, the institute says clients themselves must investigate the quality claims that an outsourcing company makes for its work.

A cottage industry of quality watchdogs approved by the institute has cropped up across the region to rank and certify companies. The appraisal process can take anywhere from a week to a few months, depending on the size of the organization being evaluated. But it isn't uncommon for companies to spend a year or more overhauling their entire software development process.

PP 6 Strategic Planning for Software Process Improvement
The CMMI, like its predecessors, contains the essential elements of effective processes for specific disciplines. That is, it reflects what the community currently considers to be "best practices" for software engineering, systems engineering, integrated product development, and systems acquisition. In that capacity, the CMMI provides guidance and a frame of reference for organizations that are developing or improving their processes. It also provides a benchmark against which organizations can assess their processes.

After the SW-CMM was first released in 1991, the SEI developed maturity models for several other disciplines, including systems engineering, acquisition, workforce management, and integrated product development. Although each model was focused on a particular discipline, there was considerable overlap in content – after all, "a project is a project is a project". Further, two different structural representations were used: the systems engineering model used a "continuous" structure; the other models used a "staged" structure.

As the SEI was preparing the next generation of its maturity models, the SEI's sponsor directed it to establish a single model that integrates the practices found in the various discipline-specific models. The CMMI development team was initially charged with combining three source models: (1) the Capability Maturity Model for Software (SW-CMM) v2.0 draft C, (2) Electronic Industries Association Interim Standard (EIA/IS) 731, and (3) the Integrated Product Development Capability Maturity Model (IPD-CMM) v0.98, for use by organizations pursuing enterprise-wide process improvement. More recently, the effort was expanded to include the supplier sourcing discipline. There was also an objective of ensuring that the new model would be compatible with ISO 15504, an international standard for software process assessment.

For organizations that have been using SW-CMM v1.1, the CMMI represents a significant advancement. It incorporates most of the current thinking on software management practices and corrects many of the shortcomings of the SW-CMM. However, decisions regarding if, when, and how to make the transition to the CMMI should not be taken lightly.

Major Changes (relative to the SW-CMM v1.1)

The major changes found in the CMMI fall into three categories: disciplines covered, maturity levels and process areas, and model structure.

Multiple Disciplines

For those who are familiar with any of the source models, the most obvious change is that the CMMI covers multiple bodies of knowledge or "disciplines". Currently the CMMI addresses four disciplines:

Software Engineering (SW)

Software engineering covers the development of software systems. Software engineers focus on applying systematic, disciplined, and quantifiable approaches to the development, operation, and maintenance of software.

Systems Engineering (SE)

Systems engineering deals with the development of total systems, which may or may not include software. Systems engineers focus on transforming customer needs, expectations, and constraints into product solutions and supporting these product solutions throughout the life of the product.

Integrated Product and Process Development (IPPD)

Integrated product and process development is a systematic approach that achieves a timely collaboration of relevant stakeholders throughout the life of the product to better satisfy customer needs, expectations, and requirements. If a project or organization chooses an IPPD approach, it performs IPPD-specific practices concurrently with other specific practices to produce products.

Supplier Sourcing (SS)

The supplier sourcing discipline is applicable to projects that use suppliers to perform functions that are critical to the success of the project. Supplier sourcing deals with identifying and evaluating potential sources for products, selecting the sources for the products to be acquired, monitoring and analyzing supplier processes, evaluating supplier work products, and revising the supplier agreement or relationships as appropriate.

An organization may adopt the CMMI for software engineering, systems engineering, or both. The IPPD and Supplier Sourcing disciplines are used in conjunction with SW and SE. For example, a software-only organization might select the CMMI for SW, an equipment manufacturer might select the CMMI for SE and SS, while a systems integration organization might choose the CMMI for SW, SE, and IPPD.

Most practices in the CMMI are applicable to each of the disciplines. For example, the practice "Define Project Life Cycle" in the Project Planning process area is applicable to both software engineering projects and systems engineering projects. Implementations of the practice in the two disciplines are likely to be quite different, however. The model provides "discipline amplifications", which contain information relevant to a particular discipline, to aid in understanding a practice in the context of a specific discipline. If, for instance, you want to find a discipline amplification for software engineering, you would look in the model for items labeled "For Software Engineering".

Figure 1

Maturity Levels and Process Areas

The CMMI's maturity levels have the same definitions as in the earlier models, although some changes to the names of the levels were made. Levels 1, 3, and 5 retained their names, i.e., Initial, Defined, and Optimizing, but Levels 2 and 4 are now named Managed and Quantitatively Managed, respectively, perhaps to more clearly emphasize the evolution of the management processes from a qualitative focus to a quantitative focus.

The CMMI contains twenty-five process areas for the four disciplines currently covered (see Figure 1). (By comparison, the SW-CMM contained eighteen process areas.) Although many of the process areas found in the CMMI are essentially the same as their counterparts in the SW-CMM, some reflect significant changes in scope and focus and others cover processes not previously addressed.

Level 2 survived the transition to the CMMI relatively unscathed. Software Subcontracting has been renamed Supplier Agreement Management and covers a broader range of acquisition and contracting situations. Measurement and Analysis is a new process area that primarily consolidates the practices previously found under the SW-CMM's Measurement and Analysis Common Feature into a single process area.

Level 3 has seen the most reconstruction. Software Product Engineering, which, in the SW-CMM, covered nearly the entire range of engineering practices, has exploded into five process areas. Requirements Development addresses analysis of all levels of requirements. Technical Solution covers design and construction. Product Integration addresses the assembly and integration of components into a final, deliverable product. Verification covers practices such as testing and peer reviews that demonstrate that a product reflects its specified requirements (i.e., "was the thing built right?"), and Validation covers practices such as customer acceptance testing that demonstrate that a product fulfills its intended use (i.e., "was the right thing built?").

Integrated Project Management covers what was addressed by Integrated Software Management and Intergroup Coordination in the SW-CMM. Risk Management is a new process area, as is Decision Analysis and Resolution, which focuses on a supporting process for identifying and evaluating alternative solutions for a specific issue.

IPPD brings two additional process areas to Level 3. Integrated Teaming addresses establishing and sustaining integrated product teams. Organizational Environment for Integration focuses on the infrastructure and people management practices needed for effective integrated teaming.

The Supplier Sourcing discipline adds Integrated Supplier Management, which builds upon Supplier Agreement Management (Level 2) by specifying practices that emphasize proactively identifying sources of products that may be used to satisfy a project's requirements and maintaining cooperative project-supplier relationships.

Level 4 of the CMMI states more clearly what is expected in a quantitatively controlled process. Specifically, statistical and other quantitative techniques are expected to be used on selected processes (i.e., those that are critical from a business objectives perspective) to achieve statistically predictable quality and process performance. Software Quality Management and Quantitative Process Management in the SW-CMM have been replaced with two new process areas. Organizational Process Performance involves establishing and maintaining measurement baselines and models that characterize the expected performance of the organization's standard processes. Quantitative Project Management focuses on using the baselines and models to establish plans and performance objectives and on using statistical and quantitative techniques to monitor and control project performance.
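To make "statistically predictable process performance" concrete, the sketch below shows one classic technique in this family: a 3-sigma control chart over per-release defect densities. The data and thresholds are hypothetical illustrations, not anything the CMMI itself prescribes.

```python
# Illustrative sketch (not part of the CMMI itself): a simple 3-sigma
# control chart over hypothetical per-release defect densities
# (defects per KLOC), of the kind a Level 4 organization might maintain.
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, center, upper) 3-sigma control limits for a baseline."""
    center = mean(samples)
    sigma = stdev(samples)
    return max(0.0, center - 3 * sigma), center, center + 3 * sigma

def out_of_control(samples, new_value):
    """Flag a new observation falling outside the baseline's control limits."""
    lower, _, upper = control_limits(samples)
    return new_value < lower or new_value > upper

baseline = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0]   # hypothetical defect densities
print(out_of_control(baseline, 4.3))   # within limits -> False
print(out_of_control(baseline, 9.5))   # far outside   -> True
```

A release whose defect density escapes the limits triggers causal analysis; one that stays inside is treated as common-cause variation of a stable process.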

The focus and intent of Level 5 has not changed dramatically with the release of CMMI. Process Change Management and Technology Change Management from the SW-CMM have been combined into one process area, Organizational Innovation and Deployment, which builds upon Organizational Process Focus (Level 3) by emphasizing the use of high-maturity techniques in process improvement. Defect Prevention has been renamed Causal Analysis and Resolution.

With the increase in the number of process areas and practices, the CMMI is significantly larger than the SW-CMM: the Staged Representation of the CMMI-SE/SW has a total of 80 goals and 411 practices, while the SW-CMM has 52 goals and 316 practices. Early adopters of the CMMI have found that this model inflation has a significant impact on both improvement efforts and assessments.

Structural Changes

As mentioned in the introduction, each of the source models was defined as either a staged model, which focuses on maturity, or as a continuous model, which focuses on capability. Use of the terms maturity and capability can be confusing initially: maturity levels relate to an entire organization; capability levels relate to individual process areas. The CMMI provides both representations!

The actual contents (i.e., goals, practices, subpractices, etc.) are essentially the same in each representation (in the continuous representation there are a few additional practices that are needed to provide a sufficient degree of granularity in process area capability). The representations primarily differ in how they are organized and presented:

Staged Representation

In the staged representation, each process area is associated with one of five maturity levels, as shown in Figure 2. The maturity levels and their process areas represent a recommended path for process improvement. The maturity levels serve as benchmarks that can be used to characterize an organization's overall process maturity. An organization achieves a maturity level when it has successfully implemented all applicable process areas that exist at and below that level.
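The achievement rule of the staged representation ("all applicable process areas at and below that level") can be sketched as a small computation. The per-level process-area lists below are a partial, illustrative subset of the model, not the full assignment in Figure 2.

```python
# Illustrative sketch of the staged-representation rule: an organization's
# maturity level is the highest level L such that every applicable process
# area at levels 2..L is implemented. The per-level lists here are a small
# subset of the real model, for illustration only.
LEVEL_PROCESS_AREAS = {
    2: ["Requirements Management", "Project Planning", "Measurement and Analysis"],
    3: ["Requirements Development", "Technical Solution", "Risk Management"],
    4: ["Organizational Process Performance", "Quantitative Project Management"],
    5: ["Organizational Innovation and Deployment", "Causal Analysis and Resolution"],
}

def maturity_level(implemented):
    """Highest maturity level whose process areas (and all below) are implemented."""
    level = 1  # every organization is at least Level 1 (Initial)
    for lvl in sorted(LEVEL_PROCESS_AREAS):
        if all(pa in implemented for pa in LEVEL_PROCESS_AREAS[lvl]):
            level = lvl
        else:
            break  # a gap at this level blocks all higher levels
    return level

done = {"Requirements Management", "Project Planning", "Measurement and Analysis",
        "Requirements Development", "Technical Solution", "Risk Management"}
print(maturity_level(done))  # -> 3
```

Note the `break`: a single unimplemented Level 2 process area pins the organization at Level 1 no matter how many higher-level practices are in place, which is exactly the criticism often leveled at the staged rating scheme.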


Figure 2

Continuous Representation

In the continuous representation maturity levels do not exist. Instead, as shown in Figure 3, capability levels are designated for process areas, from "Incomplete" (capability level 0) to "Optimizing" (capability level 5) and, thus, provide a recommended order for approaching process improvement within each process area.

A continuous representation promotes flexibility in the order in which the process areas are addressed. A technique called "equivalent staging" may be used to relate the process areas' capability levels to a staged representation's maturity levels.
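Equivalent staging can likewise be sketched as a function from a capability profile to an equivalent maturity level. This is a simplified reading of the rule (Level 2 process areas need capability 2 or better for an equivalent ML2; Level 2 and Level 3 process areas need capability 3 or better for ML3), not the model's full target-profile tables, and the process-area assignments shown are a hypothetical subset.

```python
# Illustrative, simplified sketch of "equivalent staging": deriving an
# equivalent maturity level from a per-process-area capability profile.
STAGED_LEVEL = {  # hypothetical subset: process area -> staged level
    "Project Planning": 2, "Requirements Management": 2,
    "Risk Management": 3, "Technical Solution": 3,
}

def equivalent_maturity_level(capability):
    """capability: dict mapping process area -> capability level (0-5)."""
    lvl2 = [pa for pa, ml in STAGED_LEVEL.items() if ml == 2]
    lvl3 = [pa for pa, ml in STAGED_LEVEL.items() if ml == 3]
    if not all(capability.get(pa, 0) >= 2 for pa in lvl2):
        return 1  # some Level 2 area below capability 2
    if all(capability.get(pa, 0) >= 3 for pa in lvl2 + lvl3):
        return 3  # all Level 2 and 3 areas at capability 3 or better
    return 2

profile = {"Project Planning": 3, "Requirements Management": 2,
           "Risk Management": 1, "Technical Solution": 3}
print(equivalent_maturity_level(profile))  # -> 2
```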


Figure 3

Both representations recognize that the process areas may be grouped into four general categories: project management, engineering, support, and process management. This grouping is helpful in discussing the interactions among process areas.

An organization adopting the CMMI must decide which representation would be most useful. It is anticipated that most organizations that have experience with the SW-CMM will choose the Staged Representation, since maturity level ratings are a widely used means of benchmarking and comparing organizations.

The ways in which goals and practices are used as model components have improved in the CMMI in two respects. The first affects the mapping between goals and practices. In the SW-CMM, a practice may be mapped to more than one goal; in the CMMI practices are defined such that they map to one, and only one, goal. The second improvement is the introduction of generic goals, which address process institutionalization. In the SW-CMM the institutionalization practices are not explicitly mapped to goals. In the CMMI, institutionalization practices are called generic practices and are mapped to generic goals. By clarifying the relationship between goals and practices, the CMMI is easier to use as an improvement guide and data management in assessments is simplified.

The Future of the SW-CMM

The SEI has stated that there will be no further changes to the SW-CMM model or the CBA-IPI assessment method. The SEI will continue to offer training in the SW-CMM for two years after the release of CMMI v1.1 and will train CBA-IPI Lead Assessors through December 2003. Data from SEI-authorized assessments against the SW-CMM will continue to be accepted and included in the maturity profiles reported by the SEI.

Migrating to the CMMI

Should your organization adopt the CMMI? The simple answer is "yes, sometime". Since the CMMI is intended to be a replacement for the SW-CMM and the other source models, the real question is "When is the right time for us to migrate to the CMMI?" As with most significant management decisions, the best answer may not be obvious.

If your organization has not yet initiated a CMM-based process improvement program or has only made limited progress towards Maturity Level 2, we recommend that you consider adopting the CMMI as your process framework now. The improvements in the CMMI make it a clear choice over the SW-CMM.

For organizations that have made significant investments in CMM-based improvement, the decision to adopt the CMMI is not trivial. It's similar to deciding whether to upgrade to the newest version of Windows: it seems likely that the migration from the SW-CMM to the CMMI will be painful in the short term but worthwhile in the long run. We offer the following suggestions for those facing the decision:

Once the decision to adopt the CMMI is made, the organization must choose a representation, i.e., staged or continuous. If your organization plans its process improvements based on business objectives, risks, expected benefit, or other such factors, then the Continuous Representation may be more useful. If, however, your organization tends to follow the path indicated by maturity levels or is focused on achieving maturity level ratings, then the Staged Representation may be more appropriate. We suggest you consider also a third approach: use the Continuous Representation for improvement and use the Staged Representation for assessment.

Conclusion

The CMMI is a long-overdue and necessary upgrade to the earlier, single-discipline CMMs. Although the CMMI doesn't have the same software engineering flavor as the SW-CMM, the changes in structure, scope, and content are significant, and we are confident that it will prove to be an important framework for organizations that develop software and systems.

There is a significant learning curve ahead for those who adopt the CMMI, although not unlike the experience most of us had with the SW-CMM. The CMMI model documents and other CMMI information are available from the SEI's web site (www.sei.cmu.edu/cmmi/), and SEI Transition Partners are providing CMMI training. We encourage you to take advantage of these resources to guide your decisions about using the CMMI in your organization.


Frank Koch is a Principal of Process Strategies, Inc.

Humor

Below is the author's attempt to use humor in describing this interesting process. It is partially based on the content of the paper "Bursting the CMM Hype" (Software Quality, CIO Magazine, Mar 1, 2004) by Christopher Koch (please note that this is humor and the author took the liberty of rewriting the paper in his own skeptical way):

< the start of the story rewritten as humor >

The CMM fraud was the US Air Force's perverted response to its frustration with its software-buying process in the 1980s. Like any large bureaucracy, it was milked by unscrupulous contractors, and it's understandable that it had trouble figuring out which companies to pick ;-). Cronies at Carnegie Mellon University in Pittsburgh won a bid to create an organization, the SEI, to improve the vendor vetting process. In 1986 they hired Watts Humphrey, IBM's former software development chief, to participate in this effort. This genius decided immediately that the Air Force was chasing the wrong problem. "We were focused on identifying competent people, but we saw that all the projects [the Air Force] had were in trouble; it didn't matter who they had doing the work," he recalls. "So we said let's focus on improving the work rather than just the proposals."

The first version of the CMM in 1987 was a questionnaire designed to identify good software practices within the companies doing the bidding. It was a bogus test that was easy to fake. "It was easy to cram for the test," says Jesse Martak, former head of a development group for the defense contracting arm of Westinghouse, which is now owned by Northrop Grumman. "We knew how to work the system."

So the SEI "refined" it in 1991 into a monstrous pseudo-scientific perversion (or slick marketing trick, if you wish) that supposedly provides a detailed model of software development best practices. Compliance is verified by a group of cronies sanctioned by the SEI: lead appraisers. Each lead appraiser heads up a team of people from inside the company being assessed (usually three to seven, depending on the size of the company). Together, they look for proof that the company is implementing the policies and procedures of the CMM across a "representative" subset (10-30%) of the company's software projects. There are also other perversions, like interviews with project managers and developers who have been pre-selected and pre-instructed on what to say.

Internal people will, of course, fake everything they can. A lead appraiser who asked to remain anonymous noted: "They have conflicting objectives. They need to be objective, but the organization wants to be assessed at a certain level."

The depth and wisdom of the CMM itself is extremely questionable. Having a higher maturity level does not reduce the risk compared to hiring a company with no CMM level at all. But for a contractor the certification has huge marketing value: if you are a Level 5 organization, you can win some nice contracts even if you always produce software that is complete garbage.

A recent survey of 89 different software applications by Reasoning, an automated software inspection company, found on average no difference in the number of code defects between software from companies that identified themselves as being on one of the CMM levels and software from those that did not. In fact, the study found that Level 5 companies on average had higher defect rates than anyone else.

... ... ...

Even if we set aside the fact that the certification is complete nonsense, it is actually pretty difficult to discover whether an organization's claimed certification is fake, or whether any certification was ever performed at all. Appraisers are required to submit formal documentation of all their assessments to the SEI and to customers. Lead appraisers must write up something called a Final Findings Report that includes "areas for improvement" if the appraiser finds any (they usually do, even with Level 5 companies). But there is no requirement for the content or format of the reports to be consistent across appraisers or companies. The report can be easily faked. According to one appraiser who asked not to be named, companies will often ask appraisers to "roll up" the detailed findings into shallow PowerPoint presentations that conceal the actual picture of the company and its software development processes. "The purpose of the report is to tell companies where they need to improve -- that's the whole point of CMM," she says. "But they make us write these fluffernutters that can gloss over important details." She conveniently forgot to mention that she is a willing accomplice in this scheme.

The Final Findings Report is what company officials present internally to the big brass and to customers knowledgeable enough to ask for it. But there is no obligation to do so. Companies can declare their CMM level without producing any evidence. They can even hire their own lead appraisers inside the company and assess their CMM capabilities themselves; they don't have to hire a lead appraiser from the outside, who might be under less pressure to give a good assessment. And they can characterize their CMM level any way they want in their marketing materials and press releases.

< the end of the story rewritten as humor >

Random Findings

[PDF] A Critique of Software Defect Prediction Models





Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: March 03, 2020