
The Mythical Man-Month: Essays on Software Engineering

by Frederick Brooks Jr.


See the introduction to the series for more information

"No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits. The fiercer the struggle, the more entangling the tar, and no beast is so strong or so skillful but that he ultimately sinks. "

The Mythical Man-Month: Essays on Software Engineering, by Frederick Brooks Jr. Anniversary Edition. Paperback, 322 pages. Published by Addison-Wesley in July 1995.

Contents

Preface
The Tar Pit
The Mythical Man-Month
The Surgical Team
Aristocracy, Democracy, and System Design
The Second-System Effect
Passing the Word
Why Did the Tower of Babel Fail?
Calling the Shot
Ten Pounds in a Five-Pound Sack
The Documentary Hypothesis
Plan to Throw One Away
Sharp Tools
The Whole and the Parts
Hatching a Catastrophe
The Other Face
Epilogue
Notes and references
Index

The book was first published in 1975 and was based on Fred Brooks's experience with the design of operating systems for the famous IBM System/360 series. It is interesting that OS/360 was generally a failure -- badly designed and in many respects an inferior OS.

But the hardware of System/360 was really great -- a masterpiece of engineering -- and Fred Brooks was responsible for at least one brilliant decision: the use of 8-bit bytes.

Actually, both the hardware and the descendants of the operating systems designed for those machines are still sold by IBM (the IBM mainframe business).

While OS/360 was inferior from the very beginning, it was a huge success because the hardware was a huge success (a rising tide lifts all boats). And despite being a failed, inferior architecture, it survived.

But due to deficiencies of the OS, some really good pieces of the System/360 software suite did not fare equally well. For example, PL/1, an innovative programming language that pioneered the use of exceptions (and influenced C), is dead, despite having a good initial compiler (PL/1 F) and later two brilliantly written compilers (the debugging and optimizing compilers), each of which is still a masterpiece of software engineering. At the same time, the ugly JCL is still in use.

Despite his not-so-great accomplishments as the head of the project, Brooks has written a really brilliant book. Most of the material in the book is as relevant today as it was when originally written.

Understandably, though, part of the material is outdated. In the 20th Anniversary Edition, rather than update the original text, which is considered "classical," Brooks has wisely decided to add update chapters. These discuss the issues presented while shedding fresh light from the 90s on them. Personally, I recommend reading the relevant parts of Chapter 18 after reading each previous chapter. Chapter 18 is titled, "Propositions of The Mythical Man-Month: True or False?", and it contains updated information about each of the chapters from 1 to 17.

The book itself still has not lost its value, because it contains unique observations about large-scale software development, observations that can't be found anywhere else (for example, Steve McConnell is just a consultant who never headed large projects). Brooks touches on things that are typical, but not highly visible, in any complex engineering project. A compilation of key points from the book follows below.

In addition to the original text, the Anniversary Edition contains his influential 1986 essay "No Silver Bullet", which was originally an invited paper for the IFIP '86 conference in Dublin and was later published in Computer magazine. In this paper, Brooks argued that no single technology would be found, within ten years of its publication (in 1986), that would improve the process of software development by an order of magnitude. Nine years later, in retrospect, Brooks sadly noted that he had been right. Scripting languages changed this a little bit, but the claim remains true as of November 2009.

Ideas presented (from Wikipedia)

The mythical man-month

See also: Combinatorial explosion

Brooks discusses several causes of scheduling failures. The most enduring is his discussion of Brooks's law: adding manpower to a late software project makes it later. A man-month is a hypothetical unit of work representing the work done by one person in one month; Brooks's law says that the idea that useful work can be measured in man-months -- that men and months are interchangeable -- is a myth, and that myth is the centerpiece of the book.

Complex programming projects cannot be perfectly partitioned into discrete tasks that can be worked on without communication between the workers and without establishing a set of complex interrelationships between tasks and the workers performing them.

Therefore, assigning more programmers to a project running behind schedule will make it even later. This is because the time required for the new programmers to learn about the project, plus the increased communication overhead, will consume an ever-increasing share of the available calendar time. When n people have to communicate among themselves, the number of pairwise communication channels grows as n(n-1)/2; as n increases, coordination overhead grows quadratically, and past some point each additional person's marginal contribution becomes negative, delaying the project further.
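The quadratic blow-up is easy to see with a toy calculation. The sketch below (Python; the per-person output and per-channel overhead are made-up figures chosen only to make the shape of the curve visible) counts the n(n-1)/2 pairwise channels and subtracts a fixed cost per channel:

    # Toy illustration of Brooks's law: n(n-1)/2 pairwise channels.
    # The per-person output and per-channel cost are assumed numbers,
    # not empirical data; only the shape of the curve matters.

    def net_output(n, per_person=1.0, per_channel_cost=0.02):
        """Useful work of a team of n, minus coordination overhead."""
        channels = n * (n - 1) // 2        # pairwise communication paths
        return n * per_person - channels * per_channel_cost

    for n in (2, 5, 10, 20, 40, 80):
        print(f"{n:3d} people, {n * (n - 1) // 2:5d} channels, "
              f"net output {net_output(n):6.1f}")

With these made-up constants, net output peaks around fifty people and then falls; past the peak, every additional person makes the project later, which is Brooks's law in miniature.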

No silver bullet

Main article: No Silver Bullet

Brooks added "No Silver Bullet - Essence and Accidents of Software Engineering"-and further reflections on it, "'No Silver Bullet' Refired"-to the anniversary edition of The Mythical Man-Month.

Brooks insists that there is no one silver bullet -- "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity."

The argument relies on the distinction between accidental complexity and essential complexity, similar to the way Amdahl's law relies on the distinction between "strictly serial" and "parallelizable".
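The analogy can be made concrete with Amdahl-style arithmetic. In the sketch below (Python; the fraction of effort that is accidental is purely an assumed number), even a tool that eliminates accidental complexity entirely cannot deliver a tenfold gain unless accidental work dominates the total:

    # Amdahl-style bound applied to Brooks's argument.
    # `accidental` is the assumed fraction of total effort that is
    # accidental complexity; `factor` is the speed-up tools give it.

    def overall_speedup(accidental, factor):
        return 1.0 / ((1.0 - accidental) + accidental / factor)

    # Assume, purely for illustration, that half of all effort is accidental:
    print(overall_speedup(0.5, 10))            # ~1.82x, far short of 10x
    print(overall_speedup(0.5, float("inf")))  # 2.0x even if it vanishes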

The second-system effect

Main article: Second-system effect

The second-system effect proposes that, when an architect designs a second system, it is the most dangerous system they will ever design, because they will tend to incorporate all of the additions they originally did not add to the first system due to inherent time constraints. Thus, when embarking on a second system, an engineer should be mindful that they are susceptible to over-engineering it.

The tendency towards irreducible number of errors

See also: Heisenbug

99 little bugs in the code.
99 little bugs.
Take one down, patch it around.
127 little bugs in the code...[3]

The author makes the observation that in a suitably complex system there is a certain irreducible number of errors. Any attempt to fix observed errors tends to result in the introduction of other errors.
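A crude back-of-the-envelope model shows why such a floor exists in practice. If each fix removes one defect but spawns a new one with probability p (Brooks quotes regression rates of 20 to 50 percent), clearing one defect takes 1/(1-p) fixes in expectation, and the required effort diverges as p creeps toward 1 in a heavily patched system. A minimal sketch, with assumed p values:

    # Expected fixes needed to clear B defects when each fix has
    # probability p of introducing a new defect (geometric series:
    # 1 + p + p^2 + ... = 1 / (1 - p) fixes per defect, for p < 1).

    def expected_fixes(B, p):
        return B / (1.0 - p)

    for p in (0.1, 0.3, 0.5, 0.9):
        print(f"regression probability {p:.1f}: "
              f"{expected_fixes(100, p):7.1f} fixes to clear 100 defects")

At p = 0.9 it takes a thousand fixes to clear a hundred defects; in practice the effort budget runs out first, and a residue of errors remains.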

Progress tracking

Brooks wrote "Question: How does a large software project get to be one year late? Answer: One day at a time!" Incremental slippages on many fronts eventually accumulate to produce a large overall delay. Continued attention to meeting small individual milestones is required at each level of management.

Conceptual integrity

To make a user-friendly system, the system must have conceptual integrity, which can only be achieved by separating architecture from implementation. A single chief architect (or a small number of architects), acting on the user's behalf, decides what goes in the system and what stays out. The architect or team of architects should develop an idea of what the system should do and make sure that this vision is understood by the rest of the team. A novel idea by someone may not be included if it does not fit seamlessly with the overall system design. In fact, to ensure a user-friendly system, a system may deliberately provide fewer features than it is capable of. The point is that, if a system is too complicated to use, then many of its features will go unused because no one has the time to learn how to use them.

The manual

The chief architect produces a manual of system specifications. It should describe the external specifications of the system in detail, i.e., everything that the user sees. The manual should be altered as feedback comes in from the implementation teams and the users.

The pilot system

When designing a new kind of system, a team will design a throw-away system (whether it intends to or not). This system acts as a "pilot plant" that reveals techniques that will subsequently cause a complete redesign of the system. This second, smarter system should be the one delivered to the customer, since delivery of the pilot system would cause nothing but agony to the customer, and possibly ruin the system's reputation and maybe even the company.

Formal documents

Every project manager should create a small core set of formal documents defining the project objectives, how they are to be achieved, who is going to achieve them, when they are going to be achieved, and how much they are going to cost. These documents may also reveal inconsistencies that are otherwise hard to see.

Project estimation

When estimating project times, it should be remembered that programming products (which can be sold to paying customers) and programming systems are both three times as hard to write as simple independent in-house programs.[4] It should be kept in mind how much of the work week will actually be spent on technical issues, as opposed to administrative or other non-technical tasks, such as meetings, and especially "stand-up" or "all-hands" meetings.
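As a worked example of these multipliers (a sketch; the baseline figure is assumed), note that the two factors of three compound, so a programming systems product costs roughly nine times the naive estimate:

    # Brooks's estimation multipliers. The 3 person-month baseline
    # is an assumed figure for a simple in-house program.

    baseline = 3.0                      # person-months, assumed
    product = baseline * 3              # generalized, tested, documented
    system = baseline * 3               # integrated, with defined interfaces
    systems_product = baseline * 3 * 3  # both at once: ~9x the baseline

    print(product, system, systems_product)   # 9.0 9.0 27.0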

Communication

To avoid disaster, all the teams working on a project should remain in contact with each other in as many ways as possible: e-mail, phone, meetings, memos, etc. Instead of assuming something, implementers should ask the architect(s) to clarify their intent on a feature they are implementing, before proceeding with an assumption that might very well be completely incorrect. The architect(s) are responsible for formulating a group picture of the project and communicating it to others.

The surgical team

Much as a surgical team during surgery is led by one surgeon performing the most critical work, while directing the team to assist with less critical parts, it seems reasonable to have a "good" programmer develop critical system components while the rest of a team provides what is needed at the right time. Additionally, Brooks muses that "good" programmers are generally five to ten times as productive as mediocre ones.

Code freeze and system versioning

Software is invisible. Therefore, many things only become apparent once a certain amount of work has been done on a new system, allowing a user to experience it. This experience will yield insights, which will change a user's needs or the perception of the user's needs. The system should, therefore, be changed to fulfill the changed requirements of the user. This can only occur up to a certain point, otherwise the system may never be completed. At a certain date, no more changes should be allowed to the system and the code should be frozen. All requests for changes should be delayed until the next version of the system.

Specialized tools

Instead of every programmer having his own special set of tools, each team should have a designated tool-maker who may create tools that are highly customized for the job that team is doing, e.g., a code generator tool that creates code based on a specification. In addition, system-wide tools should be built by a common tools team, overseen by the project manager.

Lowering software development costs

There are two techniques for lowering software development costs that Brooks writes about: hiring implementers only after the architecture of the system has been completed, and not developing software at all but simply buying it "off the shelf" when possible.

Frederick P. Brooks

Frederick P. Brooks, Jr. is the "father of the IBM System/360". While manager of the 360 project, it was Dr. Brooks who specified that a byte would consist of 8 bits and a word of 4 bytes. Whether or not you agree with his decision, it's hard to argue that it has not had a huge impact on the computer field.

Fred Brooks must have been a very high-level politician in the IBM of his time. Being in command of several thousand programmers is a highly political assignment. And that creates a subtle analogy between the book and Machiavelli's "The Prince".

In his time, Machiavelli was also a senior civil servant, in the Republic of Florence. Later the Medici returned to Florence, the Republic was replaced with absolute rule, and Machiavelli was dismissed from his job. Out of favor and out of a job, he found himself under house arrest. The only thing left to do was to write a book with his reflections on the process of gaining and maintaining power. Do you think the new rulers were stupid enough to hire a clever civil servant of the previous regime? So people like Fred Brooks and Machiavelli are probably very clever, but not clever enough. Fouche and Talleyrand, for that matter, never wrote books -- they were men of action.

No book on software project management has been so influential and so timeless as The Mythical Man-Month. Even now, more than 20 years after its initial publication, it is still very important.

Who is this Fred Brooks?

He was born in 1931 in Durham, North Carolina and grew up in nearby Greenville, NC. He became interested in computers during his teenage years and went on to double-major in math and physics at Duke University. At Harvard, he studied for his Ph.D. under Howard Aiken, the inventor of the early Harvard computers. He joined IBM in 1956, working in Poughkeepsie and Yorktown, NY, and in 1957 Brooks and Dura Sweeney patented an interrupt system for the IBM Stretch computer that introduced most features of today's interrupt systems.

In the early 1960s he was the leader of the development team for the IBM 360 series. The 360 quickly became the most popular mainframe computer on the market. In 1964, Brooks left IBM to found the Computer Science department at the University of North Carolina at Chapel Hill and to become a professor of Computer Science. He chaired the department for twenty years.

Throughout his career, Brooks has authored books in the field of Computer Science including The Mythical Man-Month: Essays on Software Engineering and Computer Architecture: Concepts and Evolution.

The Mythical Man-Month is undoubtedly his most famous work, in which he compiled essays about his work on the IBM 360 series. The anniversary edition also contains the famous "No Silver Bullet" essay, in which he likens troubled software projects to werewolves, for which people desperately seek a silver bullet -- and argues that no such bullet exists. Brooks was also known for his famous aphorisms, the most popular being Brooks' Law: "Adding manpower to a late software project makes it later." Many of his quotes are applicable in a wider context, such as "Complexity is the fatal foe" (p. 344).

Brooks is still a professor of Computer Science at UNC today.

Old News ;-)

[Apr 08, 2014] Why won't you DIE? IBM's S/360 and its legacy at 50

The Register

IBM's System 360 mainframe, celebrating its 50th anniversary on Monday, was more than just another computer. The S/360 changed IBM just as it changed computing and the technology industry.

The digital computers that were to become known as mainframes were already being sold by companies during the 1950s and 1960s - so the S/360 wasn't a first.

Where the S/360 was different was that it introduced a brand-new way of thinking about how computers could and should be built and used.

The S/360 made computing affordable and practical - relatively speaking. We're not talking the personal computer revolution of the 1980s, but it was a step.

The secret was a modern system: a new architecture and design that allowed the manufacturer - IBM - to churn out S/360s at relatively low cost. This had the more important effect of turning mainframes into a scalable and profitable business for IBM, thereby creating a mass market.

The S/360 democratised computing, taking it out of the hands of government and universities and putting its power in the hands of many ordinary businesses.

The birth of IBM's mainframe was made all the more remarkable given that making the machine required not just a new way of thinking but a new way of manufacturing. The S/360 produced a corporate and a mental restructuring of IBM, turning it into the computing giant we have today.

The S/360 also introduced new technologies, such as IBM's Solid Logic Technology (SLT) in 1964 that meant a faster and a much smaller machine than what was coming from the competition of the time.

Big Blue introduced new concepts and de facto standards with us now: virtualisation - the toast of cloud computing on the PC and distributed x86 server that succeeded the mainframe - and the 8-bit byte over the 6-bit byte.

The S/360 helped IBM see off a rising tide of competitors such that by the 1970s, rivals were dismissively known as "the BUNCH" or the dwarves. Success was a mixed blessing for IBM, which got in trouble with US regulators for being "too" successful and spent a decade fighting a government anti-trust law suit over the mainframe business.

The legacy of the S/360 is with us today, outside of IBM and the technology sector.

naylorjs

S/360 I knew you well

The S/390 name is a hint to its lineage, S/360 -> S/370 -> S/390; I'm not sure what happened to the S/380. Having made a huge jump with S/360, they tried to do the same thing in the 1970s with the Future Systems project; this turned out to be a huge flop: lots of money spent on creating new ideas that would leapfrog the competition, but it ultimately failed. Some of the ideas emerged on the System/38 and carried over onto the original AS/400s, like having a query-able database for the file system rather than what we are used to now.

The link to NASA with the S/360 is explicit in JES2 (Job Entry Subsystem 2), the element of the OS that controls batch jobs and the like. Messages from JES2 start with the prefix HASP, which stands for Houston Automatic Spooling Program.

As a side note, CICS was developed at Hursley Park in Hampshire. It wasn't started there, though. CICS system messages start with DFH, which allegedly stands for Denver Foot Hills - a hint to its physical origins; IBM swapped the development sites for CICS and PL/1 long ago.

I've not touched an IBM mainframe for nearly twenty years, and it worries me that I have this information still in my head. I need to lie down!

Ross Nixon

Re: S/360 I knew you well

I have great memories of being a Computer Operator on a 360/40. They were amazingly capable and interesting machines (and peripherals).

QuiteEvilGraham

Re: S/360 I knew you well

ESA is the bit that you are missing - the whole extended address thing: data spaces, hyperspaces and cross-memory extensions.

Fantastic machines though - I learned everything I know about computing from Principles of Operation and the source code for VM/SP - they used to ship you all that, and send you the listings for everything else on microfiche. I almost feel sorry for the younger generations that they will never see a proper machine room with the ECL water-cooled monsters and attendant farms of DASD and tape drives. After the 9750's came along they sort of look like very groovy American fridge-freezers.

Mind you, I can get better mippage on my Thinkpad with Hercules than the 3090 I worked with back in the 80's, but I couldn't run a UK-wide distribution system, with thousands of concurrent users, on it.

Nice article, BTW, and an upvote for the post mentioning The Mythical Man Month; utterly and reliably true.

Happy birthday IBM Mainframe, and thanks for keeping me in gainful employment and beer for 30 years!

Anonymous Coward

Re: S/360 I knew you well

I started programming on an IBM 360/67 and have programmed several IBM mainframe computers. One of the reasons for the ability to handle large amounts of data is that these machines communicate with terminals in EBCDIC characters, which is similar to ASCII. It took very few of these characters to program the 3270 display terminals, while modern x86 computers use a graphical display and need a lot of data transmitted to paint a screen. I worked for a company that had an IBM 370/168 with VM running both OS and VMS. We had over 1500 terminals connected to this mainframe over 4 states. IBM had envisioned VM/CMS for that role; CICS was only supposed to be a temporary solution for handling display terminals, but it became the mainstay in many shops. Our shop had over 50 3330 300 MB disk drives online with at least 15 tape units. These machines are in use today, in part, because the cost of converting to x86 is prohibitive. On the old 370, with CICS, the screens were separate from the program. JCL (job control language) was used to initiate jobs, but unlike modern batch files, it would attach resources such as a hard drive or tape to the program. This is totally foreign to any modern OS. Linux or Unix can come close, but MS products are totally different.

Stephen Channell

Re: S/360 I knew you well

S/380 was the "future systems program" that was cut down to the S/38 mini.

HASP was the original "grid scheduler" in Houston, running on a dedicated mainframe and scheduling work to the other 23 mainframes under the bridge. I nearly wet myself with laughter reading DataSynapse documentation and their "invention" of a job-control-language. 40 years ago HASP was doing Map/Reduce to process data faster than a tape-drive could handle.

If we don't learn the lessons of history, we are destined to IEFBR14!

Pete 2

Come and look at this!

As a senior IT bod said to me one time, when I was doing some work for a mobile phone outfit: "it's an IBM engineer getting his hands dirty".

And so it was: a hardware guy, with his sleeves rolled up and blood grime on his hands, replacing a failed board in an IBM mainframe.

The reason it was so noteworthy, even in the early 90's, was that it was such a rare occurrence. It was probably one of the major selling points of IBM computers that they didn't blow a gasket if you looked at them wrong (the other one, with just as much traction, being the ability to do a fork-lift upgrade in a weekend and know it will work).

The reliability and compatibility across ranges is why people choose this kit. It may be arcane, old-fashioned, expensive and untrendy - but it keeps on running.

The other major legacy of OS/360 was, of course, The Mythical Man Month, whose readership is still the most reliable way of telling the professional IT managers from the wannabes who only have buzzwords as a knowledge base.

Amorous Cowherder

Re: Come and look at this!

They were bloody good guys from IBM!

I started off working on mainframes around 1989, as graveyard shift "tape monkey" loading tapes for batch jobs. My first solo job was as a Unix admin on a set of RS/6000 boxes, I once blew out the firmware and a test box wouldn't boot. I called out an IBM engineer after I completely "futzed" the box, he came out and spent about 2 hours with me teaching me how to select and load the correct firmware. He then spent another 30 mins checking my production system with me and even left me his phone number so I call him directly if I needed help when I did the production box. I did the prod box with no issues because of the confidence I got and the time he spent with me. Cheers!

David Beck

Re: 16 bit byte?

The typo must be fixed, the article says 6-bit now. The following is for those who have no idea what we are talking about.

Generally, machines prior to the S/360 were 6-bit if character oriented or 36-bit if word oriented. The S/360 was the first IBM architecture (thank you, Drs. Brooks, Blaauw and Amdahl) to provide both data types with appropriate instructions, to include a "full" character set (256 characters instead of 64), and to provide a concise decimal format (2 digits in one character position instead of 1); 8 bits was chosen as the "character" length. It did mean a lot of Fortran code had to be reworked to deal with 32-bit single precision or 32-bit integers instead of the previous 36-bit.

If you think the old ways are gone, have a look at the data formats for the Unisys 2200.
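For readers who have never met the "concise decimal format" mentioned above: S/360 packed decimal stores two BCD digits per 8-bit byte, with the low nibble of the last byte holding the sign (C for plus, D for minus). A small illustrative sketch in Python (the encoding rules follow the architecture; the code itself is just a demonstration):

    # Sketch of S/360 packed-decimal encoding: two digits per byte,
    # sign in the low nibble of the final byte (0xC = +, 0xD = -).

    def pack_decimal(n: int) -> bytes:
        sign = 0xC if n >= 0 else 0xD
        digits = str(abs(n))
        if len(digits) % 2 == 0:       # pad so digits + sign fill whole bytes
            digits = "0" + digits
        nibbles = [int(d) for d in digits] + [sign]
        return bytes((nibbles[i] << 4) | nibbles[i + 1]
                     for i in range(0, len(nibbles), 2))

    print(pack_decimal(1234).hex())    # '01234c' -- 3 bytes for +1234
    print(pack_decimal(-42).hex())     # '042d'   -- 2 bytes for -42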


John Hughes

Virtualisation

Came with the S/370, not the S/360, which didn't even have virtual memory.

Steve Todd

Re: Virtualisation

The 360/168 had it, but it was a rare beast.

Mike 140

Re: Virtualisation

Nope. CP/67 was the forerunner of IBM's VM. Ran on S/360

David Beck

Re: Virtualisation

S/360 Model 67 running CP67 (CMS which became VM) or the Michigan Terminal System. The Model 67 was a Model 65 with a DAT box to support paging/segmentation but CP67 only ever supported paging (I think, it's been a few years).

Steve Todd

Re: Virtualisation

The 360/168 had a proper MMU and thus supported virtual memory. I interviewed at Bradford university, where they had a 360/168 with which they were doing all sorts of things that IBM hadn't contemplated (like using conventional glass teletypes hooked to minicomputers so they could emulate the page-based - and more expensive - IBM terminals).

I didn't get to use an IBM mainframe in anger until the 3090/600 was available (where DEC told the company that they'd need a 96 VAX cluster and IBM said that one 3090/600J would do the same task). At the time we were using VM/TSO and SQL/DS, and were hitting 16MB memory size limits.

Peter Gathercole

Re: Virtualisation @Steve Todd

I'm not sure that the 360/168 was a real model. The Wikipedia article does not think so either.

As far as I recall, the only /168 model was the 370/168, one of which was at Newcastle University in the UK, serving other Universities in the north-east of the UK, including Durham (where I was) and Edinburgh.

They also still had a 360/65, and one of the exercises we had to do was write some JCL in OS/360. The 370 ran MTS rather than an IBM OS.

Grumpy Guts

Re: Virtualisation

You're right. The 360/67 was the first with VM - I had the privilege of trying it out a few times. It was a bit slow though. The first version of CP/67 only supported 2 terminals, I recall... The VM capability was impressive. You could treat files as though they were in real memory - no explicit I/O necessary.

Chris Miller

Maintenance

This was a big factor in the profitability of mainframes. There was no such thing as an 'industry-standard' interface - either physical or logical. If you needed to replace a memory module or disk drive, you had no option* but to buy a new one from IBM and pay one of their engineers to install it (and your system would probably be 'down' for as long as this operation took). So nearly everyone took out a maintenance contract, which could easily run to an annual 10-20% of the list price. Purchase prices could be heavily discounted (depending on how desperate your salesperson was) - maintenance charges almost never were.

* There actually were a few IBM 'plug-compatible' manufacturers - Amdahl and Fujitsu. But even then you couldn't mix and match components - you could only buy a complete system from Amdahl, and then pay their maintenance charges. And since IBM had total control over the interface specs and could change them at will in new models, PCMs were generally playing catch-up.

David Beck

Re: Maintenance

So true re the service costs, but "Field Engineering" was a profit centre, and a big one at that. Not true regarding having to buy "complete" systems for compatibility. In the 70's I had a room full of CDC disks on a Model 40, bought because they were cheaper and had a faster linear-motor positioner (the thing that moved the heads), while the real 2311's used hydraulic positioners. Bad day when there was a puddle of oil under the 2311.

John Smith

@Chris Miller

"This was a big factor in the profitability of mainframes. There was no such thing as an 'industry-standard' interface - either physical or logical. If you needed to replace a memory module or disk drive, you had no option* but to buy a new one from IBM and pay one of their engineers to install it (and your system would probably be 'down' for as long as this operation took). So nearly everyone took out a maintenance contract, which could easily run to an annual 10-20% of the list price. Purchase prices could be heavily discounted (depending on how desperate your salesperson was) - maintenance charges almost never were."

True.

Back in the day one of the Scheduler software suppliers made a shed load of money (the SW was $250k a pop) by making new jobs start a lot faster and letting shops put back their memory upgrades by a year or two.

Mainframe memory was expensive.

Now owned by CA (along with many things mainframe) and so probably gone to s**t.

tom dial

Re: Maintenance

Done with some frequency. In the DoD agency where I worked we had mostly Memorex disks as I remember it, along with various non-IBM as well as IBM tape drives, and later got an STK tape library. Occasionally there were reports of problems where the different manufacturers' CEs would try to shift blame before getting down to the fix.

I particularly remember rooting around in a Syncsort core dump that ran to a couple of cubic feet from a problem eventually tracked down to firmware in a Memorex controller. This highlighted the enormous I/O capacity of these systems, something that seems to have been overlooked in the article. The dump showed mainly long sequences of chained channel programs that allowed the mainframe to transfer huge amounts of data by executing a single instruction to the channel processors, and perform other possibly useful work while awaiting completion of the asynchronous I/O.

Mike Pellatt

Re: Maintenance

@ChrisMiller - The IBM I/O channel was so well-specified that it was pretty much a standard. Look at what the Systems Concepts guys did - a Dec10 I/O and memory bus to IBM channel converter. Had one of those in the Imperial HENP group so we could use IBM 6250bpi drives as DEC were late to market with them. And the DEC 1600 bpi drives were horribly unreliable.

The IBM drives were awesome. It was always amusing explaining to IBM techs why they couldn't run online diags. On the rare occasions when they needed fixing.

David Beck

Re: Maintenance

It all comes flooding back.

A long CCW chain, some of which are the equivalent of NOP in channel talk (where did I put that green card?) with a TIC (Transfer in Channel, think branch) at the bottom of the chain back to the top. The idea was to take an interrupt (PCI) on some CCW in the chain and get back to convert the NOPs to real CCWs to continue the chain without ending it. Certainly the way the page pool was handled in CP67.

And I too remember the dumps coming on trollies. There was software to analyse a dump tape but that name is now long gone (as was the origin of most of the problems in the dumps). Those were the days I could not just add and subtract in hex but multiply as well.
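For the curious, the CCW being described is an 8-byte structure: a command code, a 24-bit data address, a flags byte and a count. Below is a rough illustrative sketch (Python; command codes and flag bits follow the S/360 Principles of Operation, but the chain itself is only a mock-up of the NOP-to-TIC loop described above, and the chain address is an assumed value):

    import struct

    # One S/360 CCW: command (1 byte), data address (3), flags (1),
    # unused (1), count (2). CC chains to the next CCW; PCI raises a
    # program-controlled interruption so the CPU can rewrite the chain.

    CC, PCI = 0x40, 0x08     # flag bits
    NOP, TIC = 0x03, 0x08    # control no-op; transfer-in-channel ("branch")

    def ccw(cmd, addr=0, flags=0, count=1):
        return struct.pack(">B3sBBH", cmd, addr.to_bytes(3, "big"),
                           flags, 0, count)

    CHAIN_ADDR = 0x001000                # assumed address of the chain
    chain = (ccw(NOP, flags=CC | PCI) +  # placeholder; interrupts the CPU,
             ccw(NOP, flags=CC) +        # which patches in real CCWs...
             ccw(TIC, addr=CHAIN_ADDR))  # ...and loops back to the top
    print(chain.hex())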


Peter Simpson

The Mythical Man-Month

Fred Brooks' seminal work on the management of large software projects was written after he managed the design of OS/360. If you can get around the mentions of secretaries, typed meeting notes and keypunches, it's required reading for anyone who manages a software project. Come to think of it... *any* engineering project. I've recommended it to several people and been thanked for it.

// Real Computers have switches and lights...

Madeye

The Mythical Man-Month

The key concepts of this book are as relevant today as they were back in the 60s and 70s - it is still oft quoted ("there are no silver bullets" being one I've heard recently). Unfortunately fewer and fewer people have heard of this book these days and even fewer have read it, even in project management circles.

WatAWorld

Was IBM ever cheaper?

I've been in IT since the 1970s.

My understanding from the guys who were old timers when I started was the big thing with the 360 was the standardized Op Codes that would remain the same from model to model, with enhancements, but never would an Op Code be withdrawn.

The beauty of IBM s/360 and s/370 was you had model independence. The promise was made, and the promise was kept, that after re-writing your programs in BAL (360's Basic Assembler Language) you'd never have to re-code your assembler programs ever again.

Also the relocating loader and method of link editing meant you didn't have to re-assemble programs to run them on a different computer. Either they would simply run as is, or they would run after being re-linked. (When I started, linking might take 5 minutes, where re-assembling might take 4 hours, for one program. I seem to recall talk of assemblies taking all day in the 1960s.)

I wasn't there in the 1950s and 60s, but I don't recall anyone ever boasting about how 360s or 370s were cheaper than competitors.

IBM products were always the most expensive, easily the most expensive, at least in Canada.

But maybe in the UK it was like that. After all the UK had its own native computer manufacturers that IBM had to squeeze out despite patriotism still being a thing in business at the time.


PyLETS

Cut my programming teeth on S/390 TSO architecture

We were developing CAD/CAM programs in this environment starting in the early eighties, because it's what was available then, based on use of this system for stock control in a large electronics manufacturing environment. We fairly soon moved this Fortran code onto smaller machines, DEC/VAX minicomputers and early Apollo workstations. We even had an early IBM-PC in the development lab, but this was more a curiosity than something we could do much real work on initially. The Unix based Apollo and early Sun workstations were much closer to later PCs once these acquired similar amounts of memory, X-Windows like GUIs and more respectable graphics and storage capabilities, and multi-user operating systems.

Gordon 10

Ahh S/360 I knew thee well

Cut my programming teeth on OS/390 assembler (TPF) at Galileo - one of Amadeus' competitors.

I interviewed for Amadeus's initial project for moving off of S/390 in 1999 and it had been planned for at least a year or 2 before that - now that was a long term project!

David Beck

Re: Ahh S/360 I knew thee well

There are people who worked on Galileo still alive? And ACP/TPF still lives, as zTPF? I remember a headhunter chasing me in the early 80's for a job in Oz: Qantas looking for ACP/TPF coders, $80k US, very tempting.

You can do everything in 2k segments of BAL.

Anonymous IV

No mention of microcode?

Unless I missed it, there was no reference to microcode, which was specific to each individual model of the S/360 and S/370 ranges, at least, and provided the 'common interface' for IBM Assembler op-codes. It is the rough equivalent of PC firmware. It was documented in thick A3 black folders held in two-layer trolleys (most of which held circuit diagrams and other engineering amusements), and was interesting to read (if not understand). There you could see that each IBM Assembler op-code translated into tens or hundreds of microcode machine instructions. Even 0700, NO-OP, got expanded into surprisingly many machine instructions.

John Smith 19

Re: No mention of microcode?

"I first met microcode by writing a routine to do addition for my company's s/370. Oddly, they wouldn't let me try it out on the production system :-)"

I did not know the microcode store was writeable.

Microcode was a core (no pun intended) feature of the S/360/370/390/4300/z architecture.

It allowed IBM to trade actual hardware (e.g. a full-spec hardware multiplier) for a partial one (part-word or single-word) or a completely software-based one (a microcode loop), depending on the machine's spec (and the customer's pocket), without needing a recompile, as at the assembler level it would be the same instruction.

I'd guess hacking the microcode would call for exceptional bravery on a production machine.

Arnaut the less

Re: No mention of microcode? - floppy disk

Someone will doubtless correct me, but as I understood it the floppy was invented as a way of loading the microcode into the mainframe CPU.

tom dial

The rule of thumb in use (from Brooks's Mythical Man Month, as I remember) is around 5 debugged lines of code per programmer per day, pretty much irrespective of the language. And although the end code might have been a million lines, some of it probably needed to be written several times: another memorable Brooks item about large programming projects is "plan to throw one away, because you will."
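The arithmetic behind that rule of thumb is worth doing once. A back-of-the-envelope sketch (the system size and working days per year are assumed figures):

    # A million-line system at ~5 debugged lines per programmer per day.

    lines, lines_per_day = 1_000_000, 5
    work_days_per_year = 230                 # assumed

    programmer_days = lines / lines_per_day
    print(programmer_days)                        # 200,000 programmer-days
    print(programmer_days / work_days_per_year)   # ~870 programmer-years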

Tom Welsh

Programming systems product

The main reason for what appears, at first sight, low productivity is spelled out in "The Mythical Man-Month". Brooks freely concedes that anyone who has just learned to program would expect to be many times more productive than his huge crew of seasoned professionals. Then he explains, with the aid of a diagram divided into four quadrants. Top left, we have the simple program. When a program gets big and complex enough, it becomes a programming system, which takes a team to write rather than a single individual. And that introduces many extra time-consuming aspects and much overhead. Going the other way, writing a simple program is far easier than creating a product with software at its core. Something that will be sold as a commercial product must be tested seven ways from Sunday, made as maintainable and extensible as possible, be supplemented with manuals, training courses, and technical support services, etc. Finally, put the two together and you get the programming systems product, which by Brooks's estimate is nine times more expensive and time-consuming to create than an equivalent simple program.

Tom Welsh

"Why won't you DIE?"

I suppose that witty, but utterly inappropriate, heading was added by an editor; Gavin knows better. If anyone is in doubt, the answer would be the same as for other elderly technology such as houses, roads, clothing, cars, aeroplanes, radio, TV, etc. Namely, it works - and after 50 years of widespread practical use, it has been refined so that it now works *bloody well*. In extreme contrast to many more recent examples of computing innovation, I may add.

Whoever added that ill-advised attempt at humour should be forced to write out 1,000 times:

"The definition of a legacy system: ONE THAT WORKS".

Grumpy Guts

Re: Pay Per Line Of Code

I worked for IBM UK in the 60s and wrote a lot of code for many different customers. There was never a charge. It was all part of the built in customer support. I even rewrote part of the OS for one system (not s/360 - IBM 1710 I think) for Rolls Royce aero engines to allow all the user code for monitoring engine test cells to fit in memory.

dlc.usa

Sole Source For Hardware?

Even before the advent of Plug Compatible Machines brought competition for the Central Processing Units, the S/360 peripheral hardware market was open to third parties. IBM published the technical specifications for the bus and tag channel interfaces, allowing, indeed encouraging, vendors to produce plug-and-play devices for the architecture, even in competition with IBM's own. My first S/360 in 1972 had Marshall not IBM disks and a Calcomp drum plotter for which IBM offered no counterpart. This was true of the IBM Personal Computer as well. This type of openness dramatically expands the marketability of a new platform architecture.

RobHib

Eventually we stripped scrapped 360s for components.

"IBM built its own circuits for S/360, Solid Logic Technology (SLT) - a set of transistors and diodes mounted on a circuit twenty-eight-thousandths of a square inch and protected by a film of glass just sixty-millionths of an inch thick. The SLT was 10 times more dense the technology of its day."

When these machines were eventually scrapped we used the components from them for electronic projects. Their unusual construction was a pain; much of the 'componentry' couldn't be used because of that construction. (That was further compounded by IBM actually partially smashing modules before they were released as scrap.)

"p3 [Photo caption] The S/360 Model 91 at NASA's Goddard Space Flight Center, with 2,097,152 bytes of main memory, was announced in 1968"

Around that time our 360 only had 44kB of memory; it was later expanded to 77kB in about 1969. Why those odd values were chosen is still somewhat a mystery to me.

David Beck

Re: Eventually we stripped scrapped 360s for components.

@RobHib-The odd memory was probably the size of the memory available for the user, not the hardware size (which came in powers of 2 multiples). The size the OS took was a function of what devices were attached and a few other sysgen parameters. Whatever was left after the OS was user space. There was usually a 2k boundary since memory protect keys worked on 2k chunks, but not always, some customers ran naked to squeeze out those extra bytes.

Glen Turner 666

Primacy of software

Good article.

Could have had a little more about the primacy of software: IBM had a huge range of compilers, and having an assembly language common across a wide range of machines was a huge winner (as obvious as that seems today, in an age of a handful of processor instruction sets). Furthermore, IBM had a strong focus on binary compatibility, and the lack of that in some competitors' ranges made shipping software for those machines much more expensive than for IBM.

IBM also sustained that commitment to development. Which meant that until the minicomputer age they were really the only possibility if you wanted newer features (such as CICS for screen-based transaction processing or VSAM or DB2 for databases, or VMs for a cheaper test versus production environment). Other manufacturers would develop against their forthcoming models, not their shipped models, and so IBM would be the company "shipping now" with the feature you desired.

IBM were also very focussed on business. They knew how to market (eg, the myth of 'idle' versus 'ready' light on tape drives, whitepapers to explain technology to managers). They knew how to charge (eg, essentially a lease, which matched company's revenue). They knew how to do politics (eg, lobbying the Australian PM after they lost a government sale). They knew how to do support (with their customer engineers basically being a little bit of IBM embedded at the customer). Their strategic planning is still world class.

I would be cautious about lauding the $0.5B taken to develop the OS/360 software as progress. As a counterpoint consider Burroughs, who delivered better capability with fewer lines of code, since they wrote in Algol rather than assembler. Both companies got one thing right: huge libraries of code which made life much easier for applications programmers. DEC's VMS learnt that lesson well. It wasn't until MS-DOS that we were suddenly dropped back into an inferior programming environment (but you'll cope with a lot for sheer responsiveness, and it didn't take too long until you could buy in what you needed).

What killed the mainframe was its sheer optimisation for batch and transaction processing and the massive cost if you used it any other way. Consider that TCP/IP used about 3% of the system's resources, or $30k pa of mainframe time. That would pay for a new Unix machine every year to host your website on.

[Aug 11, 2012] WHEN A BAZAAR IS NO LONGER A BAZAAR

Retrospective Notes

First-time readers of this rant should be aware that it was written well before either the Halloween Documents or AOL/Sun acquisition of Netscape. For all I know, at this writing, some other relevant event is erupting. Sun's quasi-open-source licensing of Java might even be considered germane.

In any case: I disclaim any powers of prophecy that might be inferred from some skewed reading of what follows. That's a little like reading the Book of Revelations as predicting Monica Lewinsky (the Whore of Babble-On, forsooth!) and this Y2K fiasco (yea, verily, thy Systems will Tremble and their Bowels will Loosen before the Wrath of the Two-Thousand-Year-Old Lamb.)

I also want people to know: I like what's happening with Open Source, and I'm glad that it has such an articulate and intelligent spokesman in Eric Raymond. My expressions of doubt--both serious and tongue-in-cheek--shouldn't be taken as disrespect for the process or the person.

I wrote this for fun. So please read it that way.

I also disclaim any influence on events. No, I don't know if Mr. Villaincantelope-or-whatever-his-name-is found some kind of backhanded inspiration in this piece. It's quite enough that Eric Raymond mentioned Fred Brooks' attribution of Bazaar-like practices to Microsoft; I entirely trust Microsoft employees to take even the most mud-besmeared ball and run with it, good little team-players that they are. I just doubt that they could even find a ball under all the mud you'll see here.

With all that clarified....let's MUDWRESTLE!

The Browser Wars Continued: Children's Crusade vs. Shi'ite Jihad?

Eric Raymond is going to be a Netceleb. Marc Andreessen virtually guarantees this. The developer consensus within Netscape will propel the author of "The Cathedral and The Bazaar" almost to household-name level of public visibility. Netscape will publish its browser source, Agoric processes will save the day, Microsoft's tidal advance will be stemmed.

Or will it?

I have mused on Eric Raymond's choice of "Cathedral" and "Bazaar" as juxtaposed metaphors for the two different styles. Historically, the one is Christian, the other Islamic.

Much could be made of this.

Allow me.

Religious Architecture and Social Archetypes

In Christian countries, the Cathedral was the source of ecclesiastical power, and to some extent it remains so. As a political power, however, its role has gone from nearly central to nearly non-existent. Nobody is building Cathedrals anymore, so that pork-barrel is long empty. And nothing has taken its place.

In Islamic countries, the bazaar -- which is seldom, if ever, co-located with any related job-producing construction projects -- remains the center of ecclesiastical power. In self-styled Islamic Republics, the bazaar is therefore the center of power, period.

Perhaps the theocratic overtones in the choice of the term Cathedral was Eric Raymond's allusion to what many see as the doctrinaire and sanctimonious rumblings from RMS the Pope, versus the more freewheeling and (in strict GNU terms) easily-corrupted style of Linux development. And perhaps in choosing the term "bazaar" Raymond meant only to suggest something more social and economic and mundane; something that was, architecturally, at least, more low-to-the-ground. He refers to Eric Drexler's notions of Agoric systems, saying that he thought of entitling the essay "The Cathedral and the Agora." So maybe none of this has anything to do with the taint of theocracy.

But maybe everyone should (re)read God and Golem, a book written by Norbert Wiener, the coiner of the term "cybernetics." In all things computer-related, ecclesiarchs are not far off.

If you want evidence that the bazaar style is easily corrupted and/or prone to its own problems of self-destructive, dogma-inspired holy wars, you need look no further than Microsoft itself.

Yes, Microsoft.

The Gospel According to Fred

Eric Raymond has apparently glanced as far afield as Redmond in seeking precedents, though with only a shudder and a giggle so far. "For Further Reading" leads off with this:

I quoted several bits from Frederick P. Brooks's classic The Mythical Man-Month because, in many respects, his insights have yet to be improved upon.... The new edition is wrapped up by an invaluable 20-years-later retrospective in which Brooks forthrightly admits to the few judgements in the original text which have not stood the test of time. I first read the retrospective after this paper was substantially complete, and was surprised to discover that Brooks attributes bazaar-like practices to Microsoft!

A careful reading of Brooks's 20-years-later retrospective turns up another, crueler, irony. One of the few points where Brooks concedes a probable error of judgment is in his original insistence that all the information about OS/360 development be made available to everyone on the project.

Information should be free. At IBM. In the mid-60s.

Well. How clueful.

But now he says he was wrong?

That Stained-Glass Window is a Module - MY Module

At some point during this famously-late project, Brooks' policy of providing all information to every programmer meant filling wide shelves in every office with the official Project Notebook, and providing updates to it constantly. This, Brooks now feels, was a disaster.

He should, he says now, have gone along with what was, at the time, a novel software engineering concept: modularization through "Information Hiding". Presumably this would have been implemented by restricting the flow of information within the development organization itself.

It surprises me that Brooks wouldn't now consider, after the fact, that promoting an ethic (or at least, an aesthetic) of modularization in an open organization would be superior to enforcing (perhaps by prohibition) laws of modularization in a closed one.

One suspects that Brooks's Cathedral dreams never died. References to medieval architecture and Christian spirituality abound in his book. To have made OS/360 a true Cathedral - why, he'd be an IBM Cardinal today! A Saint, even! (Or CEO emeritus, anyway.) It seems he now feels that, besides tolerating poor architecture, he erred in leaving out a crucial Cathedral-building management technique: tolerance of internal trade secrets.

Medieval craft guilds were notoriously secretive about their processes, and Cathedrals provided not only pork barrels during lean times, but perhaps an early version of the R&D Tax Credit. But what damn good is that subsidy if your research failures become widely known while your successes aren't safe from pilfering by other guilds?

Poor Fred. Devout IBM Christian to the end.

He missed a trick. To be Pope in a Cathedral-as-pork-barrel economy, you have to be wily, political, amoral. You have to be the broker-of-secrets, which means you have to have secrets worth having.

You don't have to be actually evil, but it sure helps!

Death to the Infidels! (Uh, You're on Our Side, Right?)

Now we have FTP sites via the Internet and patchfiles distributed on newsgroups, rather than the OS/360 Project Notebook maintained by armies of documentation clerks. But apart from that, I'm hard put to see a major difference in social organization.

Yes, this freedom-of-information on the OS/360 project was IBM-internal, but so what? There were literally thousands of people working on OS/360 at one point, quite a sizeable community. Maybe larger than the number of people who have ever contributed a line of code to GNU/Linux. Why would such a community look outside itself? After all, even outside, except for a few niches, IBM was the world of computing. Information wants to be free, I suppose, but programmers want to be paid. OS/360 programmers had everything they could have wanted. Nice salaries. A prestigious host organization. A large community of similarly-spirited co-workers. Access to all the information they needed to do their jobs (and much more.) Not to mention that there were - by virtue of Brooks's early, and now openly regretted, decision to triage "conceptual integrity" in the interests of getting everybody busy - lots of opportunities for creative (if not terribly original) work.

What a revelation. Fred Brooks, Bazaar Model developer. Way back in the mid-60s. But now, more amazingly, a lapsed Bazaar Model developer (if only by conflating information-hiding in software with information-hiding in organizations.) Who'd've thunk it?

I find it very persuasive that Microsoft is, internally, a bazaar-model development organization, a la OS/360. Knowing what we now know about how the Bazaar Model works, it helps explain Microsoft's enduring success. Not to mention all the feeping creaturism.

Comparative Religion: Refining the Taxonomy

To reiterate: In Islamic Republics, the center of powerful ideas is the community of religious scholars and ecclesiarchs, which locates itself in the center of economic power in those countries, and sets the terms of trade within those countries to a great extent. In the extreme cases (Iran), the insularity and self-righteousness runs to truly fanatical extremes.

Too bad the Mullahs aren't in Janet Reno's jurisdiction, huh?

It might, from this point onward, become necessary to make some finer distinctions in our use of the term "Bazaar Model". Prior to what may be a prematurely-characterized Netscape Enlightenment, we can clearly distinguish these two:

1. Fundamentalist Shi'ite Bazaar Model

Microsoft is the paragon now, but it seems to have independently re-evolved Project OS/360 characteristics.

In my brief, ill-fated, stint at Taligent, the same mentality - and the same OS/360 diseases - seemed prevalent. Parallels abound -- new processor architecture, incoherent OS architecture (too many architects), too many projects, little outsourcing, and...IBM was involved! (Hey, maybe Taligent was all just an IBM plot to weaken Apple! Who else would know better than IBM that it could never work? But...nah. Never attribute to malice what can be perfectly well explained by stupidity.)

I suspect the same problems will recur in any corporate-captive programming community that reaches a certain size with a common focus. SGI? Sybase? Both have repeated the history of OS/360 in recent years. Oracle's NC thrust appears to have been a similarly hubristic move.

Ayatollahs? Hoo boy, do they ever have Ayatollahs. "We want 100% of the market." How will it all work out?

Easy. Just remember: "God is great."

2. Baptist Church-Social Bazaar Model

Linux, notably, but also the BSD sects. There is religiousity in the air, but - within decent Protestant limits - a tolerance for diversity.

Commercial vendors of a size modest enough to be admitted to the community (Red Hat, Debian, etc.), are well-regarded, by and large. Main Street is OK - anything bigger is suspect.

Most people still need day jobs, though. How will it all work out?

Easy. Just remember: "God will provide."

But there's a stir in the churchyard. Somebody's arriving.

With Netscape throwing its hat into the ring (or is the free software community throwing its hat into theirs?) ...well, as Eric Raymond says, "this is the Big Time."

Or is it?

Pizza! Pepsi! Stock Options! Hallelujah!

What does it mean, exactly, for the vendor of a popular piece of commercial software to show up at the Baptist Church Social, and declare itself ready for immersion in the waters of the Holy Spirit - provided it's good for business? Especially a vendor that has been "Embracing and Extending" HTML as feverishly (and stupidly) as those Shi'ite Fundamentalists in Redmond?

Yep, you guessed it. You get

3. The Marc Andreessen Reformed Church of the Almighty Dollar Church-Social Bazaar Model

All bazaar booths are allocated by Netscape.

All bazaar merchandise must be tagged "brought to you by Netscape." Special branding irons are used in the case of cotton candy.

Modest-sized vendors who don't sign license agreements with weasel-wording will be strapped into the "dunk bozo" seat, and the top intellectual-property law firms of the Valley will take turns pitching the baseballs to see who can tank them.

Most people still need day jobs, though. How will it all work out?

Easy. Just remember:

"God is cool."

Now let us pray.

[Aug 22, 2010] Some lesser-known truths about programming

This is a rehash of some Brooks ideas. Some comments are really good...
August 17, 2010 | Dot Mac

My experience as a programmer has taught me a few things about writing software. Here are some things that people might find surprising about writing code:

Edward D. Weinberger:

Much of the content of this blog originally appeared in the classic THE MYTHICAL MAN-MONTH, by Frederick Brooks. He was the guy who headed the IBM project to build the first modern operating system, OS/360, on IBM mainframes in the 1960s, so he clearly knew his stuff. He is convinced of the vast difference between the best and the average.

Believe it or not, the reason why managers make more than programmers is well captured by the comic strip DILBERT. Though Dilbert is technically adept, he clearly does not understand the business world. He therefore needs the "adult supervision" of the pointy-haired boss. Admittedly, the boss is an idiot, with absolutely NO technical savvy; however, he does understand the business world, including the importance of marketing.

And one other thing. Programmers are paid for more than productivity, which is why productive programmers are not necessarily paid more than others. The technology is changing so fast that the guy who may be a mediocre user of a hot technology gets paid more than the best COBOL programmer in the world, simply because nobody cares much about COBOL any more.

Maintenance Man:

A great developer may be worth a lot more than an average one. Might also be 100 times as productive. Then why again is the great one getting paid the same salary as the average one?

jambox:

@MaintenanceMan Too many companies allow their software products to be managed by non-technical managers. They simply do not know who is good and who isn't! They also don't know good software from bad and evaluate performance based on speed of delivery most of all. They're also acutely vulnerable to bafflegibber.

Lou:

Thank you; good article. I recently worked (past tense) at a place that thought you could in fact speed software delivery by adding extra programmers and extra hours per programmer. Programming is not manual labor. Physical bodies can continue laboring long after the mind shuts off – whereas in programming, when the mind shuts off, you may code so ineffectively that you make negative progress. Bless the software managers that understand the pacing and rhythm of development.

@Maintenance Man, that's a good question. I think it's partly due to a lack of understanding on the part of managers and companies. As soon as it's easy to quantify the increased value of a good programmer vs a bad one, the salary gap will increase.

Paul W. Homer:

Nice. Although I'm still a little unsure of the first two points. In the short run, I think a great programmer is not all that much more productive than an average one, but I definitely believe that if you factor in time and the amount of code that actually stays alive the difference is huge. Also, I could agree with the 10-12 lines per day, if you are talking averages. I know a lot of programmers who (without cutting and pasting) can produce thousands of lines in a very intense week, but who then slack shortly afterwards.

Jeff Dege:

I'm seeing that "good programmers are X times more productive than bad programmers" meme again, and again without a description of the shape of the curve. This leaves people with misconceptions.

Productivity is not a normal distribution, it's a Rayleigh distribution. If good programmers are ten times better than bad programmers, the distribution will be such that good programmers are twice as good as average programmers, and average programmers are five times as good as bad programmers.

The curve is right-skewed, so the mean sits above the median. (Meaning that most programmers are below the average, as odd as that sounds: a handful of stars pull the average up.)
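(Editor's note: a quick simulation bears out the skew. The sketch below is a hypothetical illustration using numpy, with an arbitrary scale parameter; for a Rayleigh distribution the mean exceeds the median, so a majority of samples fall below the mean.)

    # Hypothetical illustration: skew of a Rayleigh distribution.
    # The scale parameter (1.0) is arbitrary, chosen only for demonstration.
    import numpy as np

    rng = np.random.default_rng(42)
    samples = rng.rayleigh(scale=1.0, size=1_000_000)

    mean, median = samples.mean(), np.median(samples)
    print(f"mean   = {mean:.3f}")    # about 1.253 * scale
    print(f"median = {median:.3f}")  # about 1.177 * scale
    print(f"fraction below the mean = {(samples < mean).mean():.1%}")  # about 54%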

J:

I love reading something like this and then hearing people talk about how it just proves how great they are at programming. No one ever seems to read it and say, "Hmm … I wonder if I'm the guy that is a tenth as productive as good programmers?" On another note, although I agree with some generalizations, I think it is dangerous to put programmers in three categories: bad, good, and great. There is a lot of wiggle room where people don't fit nicely in one of these categories. Some people are also good at some things and not at others. There are also people that have all the skills, but just don't have experience yet. Everyone needs to make mistakes and learn from them, and that just takes some time.

Rosstafarian:

Except that average programmers are not just a multiple better than bad programmers. Somewhere just south of average, poor programmers contribute less than the increased communication overhead needed to include them on the team. A little below that, you have really poor programmers whose typical coding change makes the system worse and either requires time by alert developers to fix the problems they create or dramatically increases the risk of failure of the effort if it's not detected in time.

In my experience, the best programmers routinely achieve goals that average programmers don't understand even when the result is later explained to them. In terms of pure productivity, the difference is something like 10-15x between the best and the simply good. I firmly believe that there is a transition to negative value that occurs well into "average" territory but which is only apparent to a few enlightened members of management.

Gabe da Silveira:

You're making the same mistake though. It is some combination of talent and experience that makes one great at anything. There's absolutely no reason to believe that great programmers were somehow better than average from the outset… they could have had breakthroughs throughout their journey. Likewise the most promising candidate could get lazy in a cushy job or lose interest in advancing their skills.

Ulf Wiger:

@Alfred: In my experience, software companies (especially larger ones) need many different kinds of programmers. The trick is to find your niche and figure out how you can best contribute.

The software industry as a whole needs inventors, finishers, motivators, maintainers, … and also project managers and line managers who are (or at least have been) skilled enough at programming. Some people love working for years maintaining and improving a particular product; others wouldn't be caught dead doing that, but want to innovate, prove a concept, and then move on.

The trick for every company is to find the right mix, and dependable programmers who are Good Enough are extremely important. I have plenty of good war stories about brilliant programmers, but they are best told over beer…

Matt J.

When I see that much enthusiastic agreement, it raises my suspicions. The majority is rarely right.

And sure enough, when I look more closely at this, I see lots of problems covered up with sententious authority. I am glad to see that doctor doom caught one of them, but there are more.

I will focus on only one, the point about it not being democratic. This is true, BUT: if you let only one person do all the design, you feed frustration in the rest of the team, and then what do you do if that one designer is hit by the proverbial bus? It is better to admit that the design process cannot be entirely democratic, but let the one designer bounce his ideas off other members of the team. This solves both problems: it gives them a hand in the design, and it distributes knowledge of the design among several people, so that if he is hit by a bus, he is replaceable without as much loss.

For the same reason, it is important that that one designer know how to share with the team. Pick someone who is bright, but autocratic, and you will ruin the team and the project.

Weinberger had an interesting correction to the article too, but he overestimates the PHB's command of the business world. The real problem of modern day business is that the PHB who is technically ignorant is nearly as ignorant of the business world, too! That is WHY Scott Adams has the recurring line about 'manager' really coming from an ancient word for "mastodon dung". That is WHY he reminds us of the managers so dumb, they didn't even know how to use voicemail. He didn't make that example up, either. It comes from real life.

Nor is it really that new. The reasons such gross incompetence is tolerated in the overpaid management class of society were covered very well by Thorstein Veblen in his "The Theory of the Leisure Class". If you really want to understand why managers are so destructive, and why they will never admit that Fred Brooks was right, then read this book.

[Nov 5, 2009] Scott Rosenberg's Wordyard » Blog Archive » Code Reads #1: The Mythical Man-Month

The book introduces a whole slew of concepts at the heart of our understanding of how we build software:

The Mythical Man Month Book Review at Mark Needham

Pretty much since I started working at ThoughtWorks 2 1/2 years ago, I've been told that this is a book I have to read, and I've finally got around to doing so.

Maybe it's not that surprising but my overriding thought about the book is that just about every mistake that we make in software development today is covered in this book!

What did I learn?

In Summary

I really enjoyed reading this book and seeing that a lot of the ideas in more modern methodologies were already known in the 1970s and aren't in essence new ideas.

I'd certainly recommend this book.

[Dec 15, 2006] Computerworld - The mythical open source miracle by Neil McAllister

Actually Spolsky does not understand the role of scripting languages. But he is right on target with his critique of OO. Object-oriented programming is no silver bullet.

Dec 14, 2006 (InfoWorld) Joel Spolsky is one of our most celebrated pundits on the practice of software development, and he's full of terrific insight. In a recent blog post, he decries the fallacy of "Lego programming" -- the all-too-common assumption that sophisticated new tools will make writing applications as easy as snapping together children's toys. It simply isn't so, he says -- despite the fact that people have been claiming it for decades -- because the most important work in software development happens before a single line of code is written.

By way of support, Spolsky reminds us of a quote from the most celebrated pundit of an earlier generation of developers. In his 1987 essay "No Silver Bullet," Frederick P. Brooks wrote, "The essence of a software entity is a construct of interlocking concepts ... I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation ... If this is true, building software will always be hard. There is inherently no silver bullet."

As Spolsky points out, in the 20 years since Brooks wrote "No Silver Bullet," countless products have reached the market heralded as the silver bullet for effortless software development. Similarly, in the 30 years since Brooks published " The Mythical Man-Month" -- in which, among other things, he debunks the fallacy that if one programmer can do a job in ten months, ten programmers can do the same job in one month -- product managers have continued to buy into various methodologies and tricks that claim to make running software projects as easy as stacking Lego bricks.

Don't you believe it. If, as Brooks wrote, the hard part of software development is the initial design, then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest GUI rapid-development toolkit will.

And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to "the community" -- if such a thing exists -- in the naive belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as history demonstrates.

In 1998, Netscape released the source code to its Mozilla browser to the public to much fanfare, but only lukewarm response from developers. As it turned out, the Mozilla source was much too complex and of too poor quality for developers outside Netscape to understand it. As Jamie Zawinski recounts, the resulting decision to rewrite the browser's rendering engine from scratch set the project back anywhere from six to ten months.

This is a classic example of the fallacy of the mythical man-month. The problem with the Mozilla code was poor design, not lack of an able workforce. Throwing more bodies at the project didn't necessarily help; it may have even hindered it. And while implementing a community development process may have allowed Netscape to sidestep its own internal management problems, it was certainly no silver bullet for success.

The key to developing good software the first time around is doing the hard work at the beginning: good design, and rigorous testing of that design. Fail that, and you've got no choice but to take the hard road. As Brooks observed all those years ago, successful software will never be easy. No amount of open source process will change that, and to think otherwise is just more Lego-programming nonsense.

McGee's Musings

I'm working on my weekly InfoWorld column (this one will run in print and online on March 8) and I'm referencing an essay from Frederick Brooks (of "Mythical Man-Month" fame) entitled "No Silver Bullet: Essence and Accidents of Software Engineering."

You just have to read this. I've read it many times before and referenced it in a column on web services two years ago, but the essay continues to amaze me. Although it was written eighteen years ago, the content still rings true. Just a sample:

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract in that such a conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

The Mythical Man-Month Essays on Software Engineering REVIEWED BY TAL COHEN

From A Second Look at the Cathedral and Bazaar by Nikolai Bezroukov,
http://firstmonday.org/issues/issue4_12/bezroukov/index.html

One of the most indefensible ideas of CatB (i.e. Raymond) is that Brooks' Law is non-applicable in the Internet-based distributed development environment as exemplified by Linux.

The real problem with the CatB statement is that, due to the popularity of CatB, it could discourage the OSS community from reading and studying The Mythical Man-Month, one of the few computer science books that has remained current decades after its initial publication.

Actually the term "Brooks' Law" is usually formulated as "Adding manpower to a late software project makes it later".

I believe that the illusion of the non-applicability of "mythical man-month postulate" and Brooks' law is limited only to projects for which a fully functional prototype already exists and most or all architectural problems are solved...

Table of Contents

Preface to the 20th Anniversary Edition
Preface to the First Edition
Chapter 1 The Tar Pit
Chapter 2 The Mythical Man Month
Chapter 3 The Surgical Team
Chapter 4 Aristocracy, Democracy, and System Design
Chapter 5 The Second System Effect
Chapter 6 Passing the Word
Chapter 7 Why Did the Tower of Babel Fail?
Chapter 8 Calling the Shot
Chapter 9 Ten Pounds in a Five-Pound Sack
Chapter 10 The Documentary Hypothesis
Chapter 11 Plan to Throw One Away
Chapter 12 Sharp Tools
Chapter 13 The Whole and the Parts
Chapter 14 Hatching a Catastrophe
Chapter 15 The Other Face
Chapter 16 No Silver Bullet -- Essence and Accident
Chapter 17 "No Silver Bullet" Refired
Chapter 18 Propositions of The Mythical Man-Month: True or False?
Chapter 19 The Mythical Man-Month after 20 Years
Epilogue

Recommended Links


Softpanorama Recommended

Top articles

Sites

The Mythical Man-Month - Wikipedia, the free encyclopedia -- pretty extensive review of Brooks' masterpiece...

Mythical Man-Month quotes

The mythical man-month (anniversary ed.) ACM ref list

ERCB The Mythical Man-Month, Anniversary Edition

Brooks' Law and open source The more the merrier

ONLamp.com The Mythical Man-Month Revisited -- a very superficial, but funny paper...

Amazon.com- Books- The Mythical Man-Month- Essays on Software ...

Publisher page

Reviews

Conceptual Integrity

Fred Brooks


The Mythical Man-Month

An interesting review by Lane Core Jr.

Few books on software project management have been as influential and timeless as The Mythical Man-Month. With a blend of software engineering facts and thought-provoking opinions, Fred Brooks offers insight for anyone managing complex projects. [outside back cover, Twentieth Anniversary Edition, with four new chapters; Addison-Wesley, 1995]

Chapter 2: The Mythical Man-Month

Good cooking takes time. If you are made to wait, it is to serve you better, and to please you. -- Menu of Restaurant Antoine, New Orleans [page 13]

More software projects have gone awry for lack of calendar time than for all other causes combined. Why is this cause of disaster so common?

First, our techniques of estimating are poorly developed. More seriously, they reflect an unvoiced assumption which is quite untrue, i.e., that all will go well.

Second, our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable.

Third, because we are uncertain of our estimates, software managers often lack the courteous stubbornness of Antoine's chef.

Fourth, schedule progress is poorly monitored. Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering.

Fifth, when schedule slippage is recognized, the natural (and traditional) response is to add manpower. Like dousing a fire with gasoline, this makes matters worse, much worse. More fire requires more gasoline, and thus begins a regenerative cycle which ends in disaster. [page 14]

Optimism

In many creative activities the medium of execution is intractable. Lumber splits; paints smear; electrical circuits ring. These physical limitations of the medium constrain the ideas that may be expressed, and they also create unexpected difficulties in the implementation. [page 15]

Computer programming, however, creates with an exceedingly tractable medium. The programmer builds from pure thought-stuff: concepts and very flexible representations thereof. Because the medium is tractable, we expect few difficulties in implementation; hence our pervasive optimism. Because our ideas are faulty, we have bugs; hence our optimism is unjustified. [page 15]

The Man-Month

The second fallacious thought mode is expressed in the very unit of effort used in estimating and scheduling: the man-month. Cost does indeed vary as the product of the number of men and the number of months. Progress does not. Hence the man-month as a unit for measuring the size of a job is a dangerous and deceptive myth. It implies that men and months are interchangeable.

When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule. The bearing of a child takes nine months, no matter how many women are assigned. Many software tasks have this characteristic because of the sequential nature of debugging. [page 17; emphasis in original]

Systems Test

In examining conventionally scheduled projects, I have found that few allowed one-half of the projected schedule for testing, but that most did indeed spend half of the actual schedule for that purpose. Many of these were on schedule until and except in system testing. [page 20]

Gutless Estimating

Observe that for the programmer, as for the chef, the urgency of the patron may govern the scheduled completion of the task, but it cannot govern the actual completion. An omelette, promised in two minutes, may appear to be progressing nicely. But when it has not set in two minutes, the customer has two choices--wait or eat it raw. Software customers have had the same choices.

The cook has another choice; he can turn up the heat. The result is often an omelette nothing can save--burned in one part, raw in another. [page 21]

Regenerative Schedule Disaster

Oversimplifying outrageously, we state Brooks's Law:

Adding manpower to a late software project makes it later.

This then is the demythologizing of the man-month. The number of months of a project depends upon its sequential constraints. The maximum number of men depends upon the number of independent subtasks. From these two quantities one can derive schedules using fewer men and more months. (The only risk is product obsolescence.) One cannot, however, get workable schedules using more men and fewer months. More software projects have gone awry for lack of calendar time than for all other causes combined. [pages 25-26; emphasis in original]
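Brooks gives no single formula for the whole schedule here, but his two observations can be combined into a toy model (not from the book; every number below is invented): total time is the unpartitionable sequential work, plus the partitionable work divided among n workers, plus an intercommunication burden that grows with the n(n-1)/2 pairs. The point is only the shape of the curve: it has a minimum, beyond which adding people makes the project later.

    # Toy model (not Brooks' own formula): project duration vs. team size.
    # Assumed inputs, invented for illustration:
    #   12 person-months of strictly sequential work,
    #   60 person-months of perfectly partitionable work,
    #   0.05 months of coordination overhead per communicating pair.
    def months(n, sequential=12.0, parallel=60.0, pair_cost=0.05):
        return sequential + parallel / n + pair_cost * n * (n - 1) / 2

    for n in (2, 5, 10, 20, 40):
        print(f"{n:3d} workers -> {months(n):6.1f} months")
    # roughly: 2 -> 42, 5 -> 24.5, 10 -> 20.3, 20 -> 24.5, 40 -> 52.5 months;
    # past about ten workers, adding manpower makes the project later.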

Chapter 11: Plan to Throw One Away

Two Steps Forward and One Step Back

The fundamental problem with program maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another. So the whole process is two steps forward and one step back.

Why aren't defects fixed more cleanly? First, even a subtle defect shows itself as a local failure of some kind. In fact it often has system-wide ramifications, usually nonobvious. Any attempt to fix it with minimum effort will repair the local and obvious, but unless the structure is pure or the documentation very fine, the far-reaching effects of the repair will be overlooked. Second, the repairer is usually not the man who wrote the code, and often he is a junior programmer or trainee. [page 122]
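A back-of-the-envelope calculation (not from the book) shows how this compounds: if each repair independently has probability p of introducing a fresh defect, each original defect costs, on average, a geometric series of repairs, 1 + p + p^2 + ... = 1/(1-p).

    # Back-of-the-envelope: expected repairs per original defect when each
    # fix has probability p of introducing a new defect (geometric series).
    for p in (0.2, 0.35, 0.5):
        print(f"p = {p:.2f} -> expected repairs per defect = {1 / (1 - p):.2f}")
    # p = 0.20 -> 1.25, p = 0.35 -> 1.54, p = 0.50 -> 2.00

At Brooks' upper bound of 50 percent, every defect costs two repairs on average: quite literally two steps forward and one step back.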

Chapter 14: Hatching a Catastrophe

How does a project get to be a year late?... One day at a time. [page 153]

But the day-by-day slippage is harder to recognize, harder to prevent, harder to make up. Yesterday a key man was sick, and a meeting couldn't be held. Today the machines are all down, because lightning struck the building's power transformer. Tomorrow the disk routines won't start testing, because the first disk is a week late from the factory. Snow, jury duty, family problems, emergency meetings with customer, executive audits--the list goes on and on. Each one only postpones some activity by a half-day or a day. And the schedule slips, one day at a time. [page 154]

Milestones or Millstones?

For picking the milestones there is only one relevant rule. Milestones must be concrete, specific, measurable events, defined with knife-edge sharpness. Coding, for a counterexample, is "90 percent finished" for half of the total coding time. Debugging is "99 percent complete" most of the time. "Planning complete" is an event one can proclaim almost at will. [page 154]

Review by Orville R. Weyrich, Jr.

I met this book shortly after its first publication, while I was a graduate student in chemistry who had just recently mastered the art of programming a Texas Instruments TI-59 calculator. I realized the limitations of the hand-held calculator when I managed to program some chemical analyses which required days of dedicated calculator-time to complete. At that point, I asked for an IBM-360 mainframe account and taught myself programming. Several thousand lines of Fortran-G later, I was ready for bigger and better things, and found The Mythical Man Month in the university bookstore.

Perhaps it is too melodramatic to say that The Mythical Man Month changed my life, so suffice it to say that after reading the book, I went on to get a job programming a PDP-11 for Knoxville Utility Board. After I got my degree in chemistry, I changed my career to software engineering and never looked back. Now almost twenty years later, after three years as Assistant Professor of Computer Science specializing in software engineering and a decade in private industry, this book still has a special place in my heart.

With the melodrama now dispatched, let me say that I wholeheartedly recommend this book for any manager of a software engineering project, and especially for anyone involved in a Year-2000 project.

A few more comments follow, arranged by chapter.

Chapter 1: The Tar Pit

Programming in-the-large seems to be like the tar pits that entrapped the prehistoric creatures such as the giant sloth, whose bones today grace the science library at the University of Georgia. Brooks writes:

Large-system programming has over the past decade been such a tar-pit, and many great and powerful beasts [including the IBM 360 project team] have thrashed violently in it. Most have emerged with running systems -- few have met goals, schedules, and budgets .... No one thing seems to cause the difficulty -- any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion.

Chapter 2: The Mythical Man-Month

Develops the justification for "Brooks's Law"

Adding manpower to a late software project makes it later

Chapter 3: The Surgical Team

Following the lead of Harlan Mills, Brooks constructs an analogy between the roles necessary for a successful software engineering project, and the roles necessary for a surgical team. For many years, I have related best to the roles of "toolsmith" and "tester."

Chapter 4: Aristocracy, Democracy, and System Design

In 1972 I was privileged to tour France as part of a joint expedition of the French and Religion departments of my college. Although I was there as a student of French, visits to many a Cathedral were on the itinerary due to the influence of the religion department. Brooks manages to make my experience relevant to software engineering:

Most European cathedrals show differences in plan or architectural style between parts built in different generations by different builders. The later builders were tempted to "improve" upon the designs of the earlier ones, to reflect the changes in fashion and differences in individual taste ... and the result proclaims the pridefulness of the builders as much as the glory of God.

Brooks goes on to relate this observation to the need for conceptual integrity in the design of a software system, and makes observations on how this can be achieved.

Chapter 5: The Second System Effect

Beware the software engineer who has just completed his first successful system -- his next project runs the risk of dying of "featuritis".

Chapter 6: Passing the Word

Say what you mean and mean what you say -- on the necessity of making the documentation match the implementation.

Chapter 7: Why Did the Tower of Babel Fail?

Using additional analogies, Brooks points out that "The job least well done by project managers is to utilize the technical genius who is not strong on management talent." He then proceeds to illustrate his point with an example of how to do it right, taken from the fictional account of Heinlein's The Man Who Sold the Moon.

Chapter 16: No Silver Bullet

This chapter, first published in IEEE Computer (April 1987) after the first edition of The Mythical Man Month, is included in the revised edition. It argues that CASE tools are useful adjuncts to software engineering, but are not a panacea.

Epilogue

I will give Frederick P. Brooks, Jr. the last word:

The tar pit of software engineering will continue to be sticky for a long time to come. One can expect the human race to continue attempting systems just within or just beyond our reach; and software systems are perhaps the most intricate and complex of man's handiworks. The management of this complex craft will demand our best use of new languages and systems, our best adaptation of proven engineering management methods, liberal doses of common sense, and a God-given humility to recognize our fallibility and limitations.

Review The Mythical Man-Month Essays on Software Engineering

Review Discussion Mythical Man Month

Probably the biggest insight I took from the book (the "mythical man-month" of the title) is that what kills software project timelines more than anything else is communication overhead. Reducing the communication requirements between groups working on software is the single most important thing you can do during software development.
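Brooks quantifies that overhead in Chapter 2: if each part of the task must be separately coordinated with each other part, coordination effort grows as n(n-1)/2 pairwise channels. A short sketch (team sizes chosen arbitrarily) makes the quadratic growth concrete:

    # Pairwise communication channels among n workers: n(n-1)/2.
    def channels(n):
        return n * (n - 1) // 2

    for n in (3, 5, 10, 20, 50):
        print(f"{n:3d} people -> {channels(n):5d} channels")
    # 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225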
----------------
Eric Raymond's "loophole" to Brooks' law - primary development does not scale, debugging does.
-----------------
Open Source vs. Brooks (i.e., more programmers make projects later).
----------------------
Now I am reading the newly published Extreme Programming Explained by Kent Beck. He describes a fear of touching another programmer's code as a factor in slowing down many projects. I first read MMM over twenty years ago. Since then, I have been looking for any sign that any manager I worked for had even heard of the book. So far, no joy. Which is really scary, considering who my present employer is.
-------------------------------------
"Brooks also managed the development of the IBM 360 Operating System." Note that, while manager of the 360 project, it was Dr. Brooks who specified that a byte would consist of 8 bits. Whether or not you agree with his decision, it's hard to argue that this has not had a huge impact on the computer field. -- Tanner Lovelace, UNC Chapel Hill

My 1 1/2 Cents

Wow! Nobody seems to have noticed a minor detail... IBM OS/360 looks to have been a major screw-up.
And that's the lesson: if you screw up with style, people will applaud you anyway... Especially if you are a computer scientist. (:-)

After all, computer science is after the first principles of the game, not just the "Implementation business"...


Mr. Fred Brooks must have been a politician in his time. Being in command of several thousand programmers must be more complex than the job of a general.

And that leads us to Mr. Machiavelli's work "The Prince". In his time, Mr. Machiavelli was a senior politician in the Republic of Florence. Later the Medicis returned to Florence, the Republic was junked, and so was Mr. Machiavelli. Out of favor, out of a job, he found himself in his palazzo under house arrest. The only thing to do was to write a book with his reflections on the process of gaining and maintaining power.

And what? He sent his writings to the new powers. Do you think the Medicis were stupid enough to hire an all too clever civil servant of the previous regime? Bingo.

So people like Fred Brooks and Machiavelli are probably very clever but not clever enough. Fouche, for that matter, never wrote books - he was a man of action.

MMM vs. OSS: advogato.org discussion

Off the top of my head..., posted 13 Nov 2000 by rillian

For example, according to Brooks, opening development up wide should result in basically an infinite time-to-completion for a project, since all effort gets sucked up in inter-developer training and communication.

I think this one is easily explained by the different organizational structures. When a commercial development team hires someone, there's an expectation on both sides, now that they're "inside the wall," that that someone must be supported and involved and given work to do. Else, why did you hire them? The lower chemical potential of open development means those doing the work are more-or-less free to ignore the training of new contributors, who must learn on their own. With no schedule, or with a recognition of the value of an educational budget, this is no problem.

Microsoft chimes in to remark that the reason OSS has worked is that anarchy is OK at producing copies of systems which are already fully architected. The "taillights are easy to follow" principle.

The advantage of "chasing taillights" is you don't have to do much work to achieve a shared vision among the contributors. It's that vision that is hardest to pass on. I understand taking advantage of this is one of the reasons behind GNOME's emulation of Microsoft products. But the effect applies just as well to designs just over the horizon. How often have you read a new project description and said "Yes, I see what they want to do"?

Others have remonstrated that OSS projects aren't really bazaars, but autocratic cathedrals run by the elect, and so are perfectly understandable.

It is true that one effective model is to have circles of contributors, where new contributors are "trained" by a circle of more experienced developers, who in turn defer to core members. But the term "autocratic cathedral" poorly captures the complexity of the situation. This model is only autocratic in that--for reasons of resource efficiency, lack of distinct vision and respect for their technical competence--everyone follows the decisions made by the core members. And one is free to move between these roles as time and interest permit. In other words, not autocratic at all.

Perhaps (to further mix metaphors) this is a surgical team, drawing their new members from recent graduates of a custom-tailored university degree program, crossed with the social freedom one has in starting/joining a minor shared-interest club in a large city. I don't think there's much interest to be found in calling that "traditional top-down design" and understood.

On a more philosophical note, stephan's point about historical context is a valuable one, and applies just as well to the The Cathedral and the Bazaar. We shouldn't expect to find directly applicable rules in seminal works of this sort. Initial theories often have limited scope or are found to be wrong on some (even many) details. But the central idea still has value, and is either incorporated in later theories, or develops into a sophisticated field in its own right, sharing only the name and central premise of the original. (Darwin is a good example here.) We should keep this in mind and not be surprised when bold claims don't apply to our situation.

As a great friend used to say, "I think we can complicate this a bit." :-)

Mythical Man-Month

The essays in this book are concise, clear, and eminently readable. Brooks has a way of forcing you to question your implicit assumptions. In the title chapter on schedule slippage, he drives the reader humorously, but relentlessly, to this conclusion:

Adding manpower to a late software project makes it later.

In another essay, Brooks points out:

Chemical engineers learned long ago that a process that works in the laboratory cannot be implemented in a factory in one step. An intermediate step called the pilot plant is necessary....In most [software] projects, the first system is barely usable. It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved.... Delivering the throwaway to customers buys time, but it does so only at the cost of agony for the user, distraction for the builders while they do the redesign, and a bad reputation for the product that the best redesign will find hard to live down. Hence, plan to throw one away; you will, anyhow.

The 20th anniversary edition includes all of the chapters of the original book, plus a lot of new material. The chapter "No Silver Bullet--Essence and Accident in Software Engineering" was first published in 1986. Brooks asserted that no single software engineering development would produce an order-of-magnitude improvement in programming productivity within ten years. While this paper caused a lot of rebuttal in the software engineering community, Brooks was right--there has been "No Silver Bullet."

The other chapters discuss why this is so and Brooks points out one of his mistakes in the first edition--"David Parnas Was Right, and I Was Wrong About Information Hiding." Brooks states, "I am now convinced that information hiding, today often embodied in object programming, is the only way of raising the level of software design."

If you create, maintain, manage, or are involved in part of the software engineering process, you must read The Mythical Man Month. From the hard-nosed advice to the enjoyable anecdotes, you will enjoy this book.

Notes on Fred Brooks' 'The Mythical Man-Month'

by Frederick P. Brooks, Jr. (Addison-Wesley, 1995)
Some books are like an annuity, for both reader and author: they keep paying dividends, year after year. That certainly is the case with The Mythical Man-Month, though I didn't really appreciate it fully until I got a call from Professor Brooks in 1994.

The reason he was calling, he said, was that his publisher had asked him to update his book, which had first been published in 1975. I expressed a wee bit of jealous envy at the news, for my publisher has certainly never called me about updating a book approaching its 20th anniversary. Indeed, I even expressed the opinion that such ancient books would be considered irrelevant by the current generation of software engineers, and thus wouldn't be selling any copies. "Oh, no," replied Professor Brooks. "The Mythical Man-Month has been selling a steady 10,000 copies a year, all along."

More jealousy, more envy, and a sudden realization that what we have here really is like an annuity. I'm happy to report that I've re-read the 1975 edition at least four times since its publication, and after Brooks' call, I took it down from the shelf and read it again. But this time, it was for a particular purpose: the real reason for his call, Professor Brooks had told me, was to find out if anything significant had happened in the computer field since the book had been published in 1975.

I must have sounded rather baffled by such a question, and Brooks went on to tell me that he had basically "dropped out" of the software engineering community, and had devoted most of his professional energies to teaching and research in the field of virtual reality. So, in preparation for a re-publication of his book, he wanted to know: what has changed, and what hasn't? Which of the premises in the original book turned out to be right, which ones were wrong, and which ones were irrelevant?

Of course, I wasn't the only person he contacted for this kind of information; several of my colleagues, and numerous gurus, authors, consultants, and "movers and shakers" in the industry were asked to respond to this question ... which we all did, quite happily. And, as you might expect, our inputs were processed, analyzed, filtered, and synthesized by Professor Brooks into a marvelous new edition that is truly a national treasure.

The Mythical Man-Month Revisited

Published on ONLamp.com: http://www.onlamp.com/pub/a/onlamp/2004/06/17/mmm_revisited.html

by Ed Willis 06/17/2004

Surely everyone in development has heard of The Mythical Man-Month if for no other reason than its presentation of Brooks' Law: "Adding manpower to a late project makes it later." Having finally read it in its entirety, though, I can say that it's like a time capsule - simultaneously you can see just how much the field has changed since the original writing and just how much has stayed stubbornly the same.

I have had a comparatively recent introduction to the field of software development, being only eight or so years out of school. Excepting my efforts in shell scripting to get in touch with my ancient Unix ancestors, I really know nothing about the history of my profession and how it was for those who worked through the early days that defined our industry. Brooks' writing style is fairly dry and formal - seemingly the prose comes to the reader from the time of Arthur Conan Doyle rather than from the time of the Rolling Stones - and this serves to intensify the perception of antiquity.

To say that this book has deepened my appreciation of just how much I take for granted in my field and how hard-won all those little gains were would be to understate the matter.

It was hard to keep my thoughts on what I was reading, as my reactions to the material took off in my mind. That said, I might characterize my varied reactions as: impudent reactions to the modus operandi of the elder days of yore; ideas that foreshadowed significant advents in the field; ideas time has not been overly kind to; and essential ideas.

And so, without further ado ...

Impudent Reactions to the Modus Operandi of the Elder Days of Yore

Ultimately, most of the amusing excerpts I've singled out below trace back to assumptions regarding the existence of a technique, process, or technology that we take for granted now but which had yet to be invented or was in its infancy at the time of the writing. The field of software development, after all, started out not so long ago with only some hardware and some mathematics before it blossomed into a field full of craft and its own science. Brooks wrote from a time far closer to those origins. In a sense, that these things below seem funny now is an indication of how far the field has come.

I'm not likely doing myself any favors etching these in electronic stone. I can already hear the condemnation of the older professors I've worked with ringing in my ears as I write this. But the truth is, this book had me in hysterics so frequently that to avoid this aspect would seem utterly remiss to me. For example:

"Teams building new supervisors or other system-heart software will of course need machines of their own. Such systems will need operators and a system programmer or two who keeps the standard support on the machine current and serviceable."

I just bet this is the root of all my problems - I have not one but two machines all to myself at work. Do I have any systems programmers or operators? Not a one. It's a miracle I can accomplish anything at all, under the circumstances.

Regarding source code documentation:

"The most serious objection is the increase in the size of the source code that must be stored. As the discipline moves more and more toward on-line storage of source code, this has become a growing consideration. I find myself being briefer in comments to an APL program, which will live on disk, then on a PL/I one that I will store as cards."

For who among us is this not true? Honestly, you just can't shut me up on cards.

On estimation:

"My guideline in the morass of estimating complexity is that compilers are three times as bad as normal batch application programs, and operating systems are three times as bad as compilers."

Pretty much everything I've ever worked on would be "batch application programs" then. At least they're only one ninth as bad as operating systems.

"Consider the IBM APL interactive software system. It rents for $400 per month and, when used, takes at least 160K bytes of memory. On a Model 165, memory rents for about $12 per kilobyte per month. If the program is available full-time, one pays $400 software rent and $1920 memory rent for using the program."

I sure hope someone's been paying my software and memory rent. I haven't been, at least the last little while.

"... we went to allocating machine time in substantial blocks. The whole 15-man sort team, for example, would be given a system for a four-to-six-hour block."

A 15-man sort team - a whole baseball team, with pinch-hitters and relievers - for sorting!

"Operating systems, loudly decried in the 1960s for their memory and cycle costs, have proved to be an excellent form in which to use some of the MIPS and cheap memory bytes on the past hardware surge."

Is the jury really in on operating systems? Let's not be too hasty now. The irony of the situation is that, in embedded systems at least, this question is still very much up for debate.

"Many users now operate their own computers day in and day out on varied applications without ever writing a program. Indeed, many of these users cannot write new programs for their machines, but they are nevertheless adept at solving new problems with them."

Strange but true. They're also pretty good at causing new problems with them too, in my experience.

At various points through the book Brooks discusses the effective use of the "transient area" and how vital an issue it is to the success of any project. I was, of course, deeply alarmed to discover that I had no idea what he was talking about. At the end of the book, Brooks, writing much closer to the present day, critiques his earlier work and revisits this point:

"The size of the transient area, hence the amount of program per disk fetch, is a crucial decision, since performance is a super-linear function of that size. [This whole decision has been obsoleted, first by virtual memory, then by cheap real memory. Users now typically buy enough real memory to hold all the code of major applications.]"

I feel much better now.

Ideas That Foreshadowed Significant Advents in the Field

The earlier section aside, there is much to recommend in this book. Brooks is a smart guy looking critically at his field and finding much opportunity for improvement. Many later development practices embody his observations.

"The sooner one puts the pieces together, the sooner the system bugs will emerge. Somewhat less sophisticated is the notion that by using the pieces to test each other, one avoids a lot of test scaffolding. Both of these are obviously true, but experience shows that they are not the whole truth - the use of clean, debugged components saves much more time in system testing than that spent on scaffolding and thorough component test."

The key distinction here for me is the notion of the components being tested and debugged in isolation, separated from the rest of the system. This is essentially the same technique as is practiced in Extreme Programming (XP), that is, XUnit-style unit testing.
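To make the parallel concrete, here is a minimal xUnit-style sketch using Python's built-in unittest module; the word_count helper is a hypothetical component, invented for illustration and tested in isolation from any larger system.

    # Minimal xUnit-style unit test: the component is exercised in isolation,
    # with its own scaffolding and no dependency on the rest of the system.
    import unittest

    def word_count(text: str) -> int:
        """Count whitespace-separated words."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_empty_string_has_no_words(self):
            self.assertEqual(word_count(""), 0)

        def test_counts_whitespace_separated_words(self):
            self.assertEqual(word_count("add manpower, get later"), 4)

    if __name__ == "__main__":
        unittest.main()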

There, as well, adherents argue that creating these separate test scaffoldings for each component costs less than the expense of fixing component-level defects at integration time. In fact, XP's argument is even stronger; with a highly evolutionary and iterative model, these test sets are part of the weight that is moved around when it comes time to make new changes. Still it's telling that the value of this approach to the creation of quality was evident to Brooks 15 years or so in advance of XP. In another spot, Brooks mentions:

"... interesting results show that three times as much progress in interactive debugging is made on the first interaction of each session as on subsequent interactions."

The real irony here is that his main point is that debugging sessions should be short, whereas the XUnit take on this would be that the value of debugging drops dramatically as the developer does it, so we should aim to do as little of it as possible. This is certainly one of the motivations of XUnit testing. It's unsurprising that this idea would have eluded Brooks, as the interactivity of the short think-code-test-debug cycle upon which XUnit rests was a long way off in the future when Brooks wrote this.

"For picking milestones there is only one relevant rule. Milestones must be concrete, specific, measurable events, defined with knife-edge sharpness. Coding, for example, is '90 percent finished' for half of the total coding time. Debugging is '99 percent complete' most of the time. 'Planning complete' is an event one can proclaim almost at will."

The first time I encountered similar statements was in Steve McConnell's Rapid Development. As Brooks himself points out, software is ethereal "thought-stuff." Many things come out of this intangibility. William Burroughs talks about the inability to fake certain things (a good meal, for example). The only milestone that is like this in software development is the release itself. All others may be fake, and the further they are from the release, the easier they are to fake. This is all the more reason to be careful in defining and evaluating non-implementation milestones.

Going further, many Agile enthusiasts want to do away with non-implementation phases of development (or at least reduce the effort put into non-implementation work products) and put the emphasis much more squarely on the implementation itself.

Regarding non-implementation work products, especially documentation, Brooks wrote:

"A basic principle of data processing teaches the folly of trying to maintain independent files in synchronization ... Yet our practice in programming documentation violates our own teaching. We typically attempt to maintain a machine-readable form of a program and an independent set of human-readable documentation, consisting of prose and flowcharts ... The solution, I think, is to merge the files, to incorporate the documentation in the source program."

Obviously this foreshadows the movement toward placing ever greater emphasis on internal documentation as is witnessed in the advent of Javadoc, Doxygen, and the remaining host of source code documentation tools. I see these efforts leading to the minimization of external design artifacts as these tools make it tempting to forgo formal design documents altogether in favor of embedding the design documentation in the code itself.
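The principle is easy to see in miniature. The sketch below uses Python docstrings rather than Javadoc or Doxygen, purely to keep the examples in one language; schedule_slip is a hypothetical function invented for illustration. The documentation lives in the same file as the code and is extracted by tools (help(), pydoc) rather than maintained separately.

    # Documentation merged into the source program, in the sense Brooks urges:
    # one file to maintain, with tools extracting the human-readable form.
    def schedule_slip(planned_months: float, actual_months: float) -> float:
        """Return schedule slippage as a fraction of the planned duration.

        Args:
            planned_months: originally estimated duration.
            actual_months: duration actually spent.

        Returns:
            (actual - planned) / planned, e.g. 0.5 for a 50% overrun.
        """
        return (actual_months - planned_months) / planned_months

    help(schedule_slip)  # the documentation travels with the code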

XUnit-style testing reduces the need for requirements documents in a similar, though less dramatic fashion; unit requirements, at least, exist in an executable and readable form in the unit tests themselves. One motivation for both of these techniques is that, of all the work products we can create, we can't avoid creating and maintaining the implementation itself. Let's load the implementation with as much value as humanly possible. That Brooks saw the value of some of these possibilities so early is encouraging.

Ideas Time Has Not Been Overly Kind To

Brooks wrote about waterfall lifecycle models:

"Much of present-day software acquisition procedures rests upon the assumption that one can specify a satisfactory system in advance, get bids for its construction, have it built, and install it. I think this assumption is fundamentally wrong, and that many software acquisition problems spring from that fallacy. Hence they can not be fixed without fundamental revision, one that provides for iterative development and specification of prototypes and products."

Here Brooks is clearly decrying the waterfall lifecycle model and is on the verge of embracing true iterative development, stopping seemingly just shy of recommending iterative development of the actual shipping implementation. Elsewhere he notes:

"Lehman and Belady offer evidence that quanta [of updates to software] should be very large and widely spaced or else very small and frequent. The latter strategy is more subject to instability according to their model. My experience confirms it: I would never risk that strategy in practice."

Recent history, at least, favors the opposite position. I think most organizations, likely in many cases without any real decision on the matter, practice something akin to the continuous integration favored by XP and end up with small and frequent quanta. I doubt there's much support for the other position now, but consider this later passage:

"In most projects, the first system built is barely usable. It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved. The discard and redesign may be done in one lump, or it may be done piece-by-piece. But all large-system experience shows that it will be done."

The piece-wise redesign sounds like refactoring, but I don't think it is. I think he's saying you will invariably throw away the whole implementation either all in one go or a little bit at a time, so it's wise to "plan to throw one away." I still hear people say this sometimes. This is probably not acceptable now - certainly I'd be embarrassed to have to do this. But this is the world in which Brooks lived when he wrote this book. Even in a lifecycle that tried to reject change after gathering the requirements (or maybe because of this) we still end up throwing one away.

It's easy to see why Brooks couldn't fully justify the essential invitation of change through the development cycle that characterizes evolutionary prototyping, XP, and other iterative methodologies, although he could certainly see the possible value in it.

Bear in mind that Boehm hadn't yet finished his work in estimating the costs of change when Brooks wrote this. This work would ultimately lead to the widely held belief that failing to catch defects very close to the point of their introduction imposed costs exponential in the amount of time between introduction and discovery. This latter idea is still entrenched in our industry despite the fact that, if it were true, practices that allow for change throughout the development lifecycle would be exponentially more expensive than would be the waterfall model, which they clearly are not. (N.B. They are inarguably more expensive, they are just not exponentially so). Equally one still hears variants of this:

"The fundamental problem with software maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another."

I do not believe the risks to be this high now in any reasonably well-run organization. They may come close to 20 percent, but should be nowhere near 50 percent. In short, we can claim to have become better at maintenance over the past 30 years. Brooks, though, had to play the ball where it lay at the time he was writing, and so would not have seen some possibilities we enjoy today as being legitimate or responsible then.

On overall system design and requirements:

"Often the fresh concept does come from an implementer or from a user. However, all my own experience convinces me, and I have tried to show, that the conceptual integrity of a system determines its ease of use. Good features and ideas that do not integrate with a system's basic concepts are best left out. If there appear many such important but incompatible ideas, one scraps the whole system and starts again on an integrated system with different basic concepts."

There is a certain smugness at work in the idea that the architect will make better decisions here than the user will. Certainly this view is out of favor now. We normally try to find out what the user wants (somehow) and then find a way to design our software to provide this to them in the most sensible manner we can envision. I can't imagine saying "no" to the user regarding a feature just because it doesn't fit into my current conceptual view of the system, and the notion of throwing out the current system so we can devise a better one that embodies all the features we want is a luxury no one can afford.

Plainly put, our job as software developers is to distill the system's conceptual integrity given the user's requirements. It's not our job to pick over the user's requests, looking for some set of functions that makes sense as a whole to us. It is also our job to take our lemons and make lemonade. We don't have the option to throw out our organization's software inventory when it doesn't match up well enough with new requirements. We must find a way to refactor that inventory toward a design that accepts (however grudgingly) the complete requirements, both old and new.

"A discipline that will open an architect's eyes is to assign each little function a value: capability x is worth not more than m bytes of memory and n microseconds per invocation. These values will guide initial decisions and serve during implementation as a guide and warning to all."

Even in embedded development where I make my living, I rarely see anything like this level of budgeting detail. I'm sure it was an absolute necessity in the hardware-poor past, though it makes me awfully glad to live in the current age of hardware excess.
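For what it is worth, the discipline is cheap to mechanize. Below is a hypothetical sketch of such a budget check; the capability names and the byte/microsecond allowances are invented for illustration.

    # Hypothetical per-capability budget check in the spirit Brooks describes:
    # each function gets a byte and microsecond allowance, set up front.
    BUDGETS = {  # capability -> (max_bytes, max_microseconds); invented numbers
        "parse_record": (4096, 50.0),
        "sort_index":   (8192, 200.0),
    }

    def check_budget(capability, measured_bytes, measured_us):
        max_bytes, max_us = BUDGETS[capability]
        ok = measured_bytes <= max_bytes and measured_us <= max_us
        status = "within" if ok else "OVER"
        print(f"{capability}: {status} budget "
              f"({measured_bytes}/{max_bytes} bytes, {measured_us}/{max_us} us)")
        return ok

    check_budget("parse_record", 3500, 42.0)  # within budget
    check_budget("sort_index", 9000, 150.0)   # over the byte allowance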

On source code control and configuration management:

"First, each group or programmer had an area where he kept copies of his programs, his test cases, and the scaffolding he needed for component testing. In this playpen area there were no restrictions on what a man could do with his own programs.... When a man had his component ready for integration into a larger piece, he passed a copy over to to the manager of that larger system, who put this copy into a system integration sublibrary..."

I shudder to think how miserable and invariably risky this manual approach must have been, but what alternative was there? Even the advent of RCS was still many years in the future.

Brooks spends a lot of time in the book on the subject of identifying and training architects and designers: "How to grow great designers? ... systematically identify top designers ... assign a career mentor to be responsible for the development of the prospect ... devise and maintain a career development plan for each prospect ..."

The sad thing here is that in most development organizations I know of, design is not valued as a desirable thing in its own right. Outside of the development group itself, no one knows how to design; and even where someone does, the issues of how to identify, train, use, and retain top design talent never actually come up.

Essential Ideas

Brooks presents several essential ideas to consider.

The Surgical Team

The surgical team is Brooks' proposed development team model. At its head is the chief programmer, or Surgeon; everyone else supports him. The model defines the following roles: the Surgeon, the Copilot, the Administrator, the Editor, two Secretaries, the Programming Clerk, the Toolsmith, the Tester, and the Language Lawyer.

Even with the provisos listed in the book (one Language Lawyer can support two or three Surgeons, the Administrator may be able to look after two teams) this all seems excessive. I'm not clear at all on what the Programming Clerk does even after reading through the description a couple of times. I doubt the two Secretaries are truly necessary.

From what I do understand, one good Secretary or Administrator could likely look after all of the duties assigned to the two Secretaries, the Administrator, and the Programming Clerk. Also, the Tester or Toolsmith could handle many of the Programming Clerk's duties. I expect that the Toolsmith role would be controversial now - it's not clear to me that permitting each development team to vary with respect to their tool sets and development environments is either desirable or necessary, but people with these skills are obviously needed in the organization. I'd expect teams to share this role in practice.

Paring this back to the Surgeon, Copilot, Tester, Secretary/Administrator, and Editor would likely suffice for the team itself. An organization that supports multiple teams could provide the Language Lawyer and Toolsmith, or the team could subsume the roles itself.

Would such teams use their manpower effectively? Absolutely, if for no other reason than the well-defined roles. Each person knows what he must do and, perhaps more importantly, what lies outside his purview. This alone is a welcome change from the relative chaos of role definition in most organizations, where frequently more than one person is doing the same thing while, unnervingly, whole aspects of the work exist that no one is actually doing.

In many organizations, too, technical decision-making has no clear process: people are haphazardly CC'd on email, threads wander with no clear direction, and subjects are eventually exhausted with no obvious outcome. In Brooks' proposal, by contrast, technical decision-making rests first and foremost with the Surgeon, who may delegate at her discretion. Being unambiguous about who is responsible for making decisions is, I expect, the first step in making them.

This model also takes advantage of the well-documented disparity in measured productivity between programmers. The Surgeon is your star, the Copilot the understudy. Everyone else must ensure that these two people can maximize their contributions. That's smart.

Brooks puts forward his proposal as much for its scalability as for its optimization of productivity. Larger projects would have a set of these teams. Decisions requiring the input or agreement of more than one team - and ultimately the entirety of the architecture, which presumably the whole set of teams would have to ratify - would require the attention and involvement of the Surgeons, rather than the whole teams. In this, the surgical team proposal is reminiscent of the Scrum of Scrums and similar proposals for scaling XP projects.

The main disadvantage of the proposal is that it may reinforce the tribal boundaries within a larger organization where the characteristics and standards of each team diverge based on the tastes of the Surgeons that lead them. All in all, though, I'd expect the good points here to outweigh the bad. Allowing your A guys to do the A work and being painfully clear on who should make decisions would help many organizations enormously.

No Silver Bullets

If there's one thing that will stay with me from reading The Mythical Man-Month, it's Brooks' discussion of accident and essence in this essay. The central conjecture that drives this essay is this:

"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability or simplicity."

Most have likely heard (or mis-heard) this, as I did, in a degenerate form: something like "development productivity cannot increase by an order of magnitude." That is most definitely not what Brooks is saying. He freely admits the possibility that combinations of improvements may yield an order-of-magnitude improvement; he draws the line at single factors. So there is no one single silver bullet. This is an important distinction because, once understood, it becomes clear that the statement was probably true then and is in all likelihood true today. Knowing this is a tremendous boon in sorting the nonsense from the truth in development.
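
A quick compounding check makes the distinction concrete: ten independent improvements of about 26 percent each multiply out to an order of magnitude,

$$1.26^{10} \approx 10,$$

even though no single factor comes anywhere near tenfold. Brooks leaves that door open; he closes only the one marked "single bullet."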

As recently as a few weeks ago, I saw more than one order of magnitude improvement in development productivity attributed to the adoption of a particular set of process improvements. Without even trying to sort out whether manipulating a single factor produced this effect or more than one, I felt confident that either the presenter or Brooks was wrong. My money's on the presenter.

These people are typically not trying to pull the wool over your eyes; they believe what they're saying. Consider the many people still beating the "defect phase containment will save you orders of magnitude in effort" drum. Today, this is probably true only at the extremes: in really large projects, projects with really stringent quality requirements, or projects staffed with unusually bad teams. Counterevidence is ubiquitous, but people still tell this story. As professionals, we have a responsibility to sort the wheat from the chaff. Brooks's conjecture is a great tool to bring to bear in this effort.

Brooks supports his conjecture with an inspired discussion that divides the world of software development into accident and essence:

"The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of function. The essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed. ... I believe the hard part of building software to be the specification, design and testing of this conceptual construct, not the labor or representing it and testing the fidelity on the representation."

This is the essence. The accident is everything else involved in software development. The details of the programming language, the configuration management, the modeling language, the packaging, documentation tools, libraries, build tools and so on are all accidental work in software development. Clearly there's lots of accidental work. Here's why what Brooks is saying makes so much sense - no one single area of software development is so badly burdened by accidental work that improving it can yield an order-of-magnitude improvement in overall productivity, reliability or simplicity.
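
The arithmetic underneath this is Amdahl's law. If a fraction $a$ of total effort is accidental work in some one area, eliminating that work entirely yields a speedup of

$$S = \frac{1}{1-a},$$

so a tenfold gain requires $a \ge 0.9$: accident would have to be nine-tenths of all effort before any single-area improvement could deliver an order of magnitude.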

My other realization from this reading is that, while I have never characterized myself (as so many do) as a C guy, a C++ guy, a Java guy, or what have you, I have most definitely prided myself on being a good generalist with a decent background in programming languages, build environments, libraries, installers, and operating systems.

The idea of the essence and accident of software development makes plain where continuing study has the most effect. Anything that improves skills in the accidents of development can only benefit particular software niches, whereas study that improves skills in the essence of development necessarily benefits all domains.

Ed Willis works in telecommunications.

Producer's Note: Check out this discussion on The Mythical Man-Month that took place on java.net in May. The discussion has since been archived but there are many interesting posts and opinions.

Excerpts

InformIT Mythical Man-Month, The Essays on Software Engineering, Anniversary Edition, 2nd Edition

Preface

To my surprise and delight, The Mythical Man-Month continues to be popular after twenty years. Over 250,000 copies are in print. People often ask which of the opinions and recommendations set forth in 1975 I still hold, and which have changed, and how. Whereas I have from time to time addressed that question in lectures, I have long wanted to essay it in writing.

Peter Gordon, now a Publishing Partner at Addison-Wesley, has been working with me patiently and helpfully since 1980. He proposed that we prepare an Anniversary Edition. We decided not to revise the original, but to reprint it untouched (except for trivial corrections) and to augment it with more current thoughts.

Chapter 16 reprints "No Silver Bullet: Essence and Accidents of Software Engineering," a 1986 IFIPS paper that grew out of my experience chairing a Defense Science Board study on military software. My co-authors of that study, and our executive secretary, Robert L. Patrick, were invaluable in bringing me back into touch with real-world large software projects. The paper was reprinted in 1987 in the IEEE Computer magazine, which gave it wide circulation.

"No Silver Bullet" proved provocative. It predicted that a decade would not see any programming technique which would by itself bring an order-of-magnitude improvement in software productivity. The decade has a year to run; my prediction seems safe. "NSB" has stimulated more and more spirited discussion in the literature than has The Mythical Man-Month. Chapter 17, therefore, comments on some of the published critique and updates the opinions set forth in 1986.

In preparing my retrospective and update of The Mythical Man-Month, I was struck by how few of the propositions asserted in it have been critiqued, proven, or disproven by ongoing software engineering research and experience. It proved useful to me now to catalog those propositions in raw form, stripped of supporting arguments and data. In hopes that these bald statements will invite arguments and facts to prove, disprove, update, or refine those propositions, I have included this outline as Chapter 18.

Chapter 19 is the updating essay itself. The reader should be warned that the new opinions are not nearly so well informed by experience in the trenches as the original book was. I have been at work in a university, not industry, and on small-scale projects, not large ones. Since 1986, I have only taught software engineering, not done research in it at all. My research has rather been on virtual reality and its applications.

In preparing this retrospective, I have sought the current views of friends who are indeed at work in software engineering. For a wonderful willingness to share views, to comment thoughtfully on drafts, and to re-educate me, I am indebted to Barry Boehm, Ken Brooks, Dick Case, James Coggins, Tom DeMarco, Jim McCarthy, David Parnas, Earl Wheeler, and Edward Yourdon. Fay Ward has superbly handled the technical production of the new chapters.

I thank Gordon Bell, Bruce Buchanan, Rick Hayes-Roth, my colleagues on the Defense Science Board Task Force on Military Software, and, most especially, David Parnas for their insights and stimulating ideas for, and Rebekah Bierly for technical production of, the paper printed here as Chapter 16. Analyzing the software problem into the categories of essence and accident was inspired by Nancy Greenwood Brooks, who used such analysis in a paper on Suzuki violin pedagogy.

Addison-Wesley's house custom did not permit me to acknowledge in the 1975 Preface the key roles played by their staff. Two persons' contributions should be especially cited: Norman Stanton, then Executive Editor, and Herbert Boes, then Art Director. Boes developed the elegant style, which one reviewer especially cited: "wide margins, and imaginative use of typeface and layout." More important, he also made the crucial recommendation that every chapter have an opening picture. (I had only the Tar Pit and Rheims Cathedral at the time.) Finding the pictures occasioned an extra year's work for me, but I am eternally grateful for the counsel.

Soli Deo gloria -- To God alone be the glory.

Chapel Hill, N.C. -- F. P. B., Jr.

Why Is Programming Fun? -- The Mythical Man-Month

Why is programming fun? What delights may its practitioner expect as his reward?

First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctiveness of each leaf and each snowflake.

Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder "for Daddy's office."

Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate.

Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.

Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this tractability has its own problems.)

Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separately from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

Programming then is fun because it gratifies creative longings built deep within us and delights sensibilities we have in common with all men.

Chapter 4

This great church is an incomparable work of art. There is neither aridity nor confusion in the tenets it sets forth.

It is the zenith of a style, the work of artists who had understood and assimilated all their predecessors' successes, in complete possession of the techniques of their times, but using them without indiscreet display nor gratuitous feats of skill.

It was Jean d'Orbais who undoubtedly conceived the general plan of the building, a plan which was respected, at least in its essential elements, by his successors. This is one of the reasons for the extreme coherence and unity of the edifice.

Conceptual Integrity
Most European cathedrals show differences in plan or architectural style between parts built in different generations by different builders. The later builders were tempted to improve upon the designs of the earlier ones, to reflect both changes in fashion and differences in individual taste. So the peaceful Norman transept abuts and contradicts the soaring Gothic nave, and the result proclaims the pridefulness of the builders as much as the glory of God.

Against these, the architectural unity of Reims stands in glorious contrast. The joy that stirs the beholder comes as much from the integrity of the design as from any particular excellences. As the guidebook tells, this integrity was achieved by the self-abnegation of eight generations of builders, each of whom sacrificed some of his ideas so that the whole might be of pure design. The result proclaims not only the glory of God, but also His power to salvage fallen men from their pride.

Even though they have not taken centuries to build, most programming systems reflect conceptual disunity far worse than that of cathedrals. Usually this arises not from a serial succession of master designers, but from the separation of design into many tasks done by many men.

I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas. In this chapter and the next two, we will examine the consequences of this theme for programming system design.

Achieving Conceptual Integrity
The purpose of a programming system is to make a computer easy to use. To do this, it furnishes languages and various facilities that are in fact programs invoked and controlled by language features. But these facilities are bought at a price: the external description of a programming system is ten to twenty times as large as the external description of the computer system itself. The user finds it far easier to specify any particular function, but there are far more to choose from, and far more options and formats to remember.

Ease of use is enhanced only if the time gained in functional specification exceeds the time lost in learning, remembering, and searching manuals. With modern programming systems this gain does exceed the cost, but in recent years the ratio of gain to cost seems to have fallen as more and more complex functions have been added. I am haunted by the memory of the ease of use of the IBM 650, even without an assembler or any other software at all.
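
One way to formalize Brooks' test (the symbols are mine, not his): a facility earns its place only if

$$n \cdot t_{\text{saved}} > t_{\text{learn}} + t_{\text{remember}} + t_{\text{search}},$$

where n is the number of times the facility is actually used. The falling gain-to-cost ratio he describes is the right-hand side growing as functions multiply.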

Because ease of use is the purpose, this ratio of function to conceptual complexity is the ultimate test of system design. Neither function alone nor simplicity alone defines a good design.

This point is widely misunderstood. Operating System/360 is hailed by its builders as the finest ever built, because it indisputably has the most function. Function, and not simplicity, has always been the measure of excellence for its designers. On the other hand, the Time-Sharing System for the PDP-10 is hailed by its builders as the finest, because of its simplicity and the spareness of its concepts. By any measure, however, its function is not even in the same class as that of OS/360. As soon as ease of use is held up as the criterion, each of these is seen to be unbalanced, reaching for only half of the true goal.

For a given level of function, however, that system is best in which one can specify things with the most simplicity and straightforwardness. Simplicity is not enough. Mooers's TRAC language and Algol 68 achieve simplicity as measured by the number of distinct elementary concepts. They are not, however, straightforward. The expression of the things one wants to do often requires involuted and unexpected combinations of the basic facilities. It is not enough to learn the elements and rules of combination; one must also learn the idiomatic usage, a whole lore of how the elements are combined in practice. Simplicity and straightforwardness proceed from conceptual integrity. Every part must reflect the same philosophies and the same balancing of desiderata. Every part must even use the same techniques in syntax and analogous notions in semantics. Ease of use, then, dictates unity of design, conceptual integrity.

Aristocracy and Democracy
Conceptual integrity in turn dictates that the design must proceed from one mind, or from a very small number of agreeing resonant minds.

Schedule pressures, however, dictate that system building needs many hands. Two techniques are available for resolving this dilemma. The first is a careful division of labor between architecture and implementation. The second is the new way of structuring programming implementation teams discussed in the previous chapter.

The separation of architectural effort from implementation is a very powerful way of getting conceptual integrity on very large projects. I myself have seen it used with great success on IBM's Stretch computer and on the System/360 computer product line. I have seen it fail through lack of application on Operating System/360.

By the architecture of a system, I mean the complete and detailed specification of the user interface. For a computer this is the programming manual. For a compiler it is the language manual. For a control program it is the manuals for the language or languages used to invoke its functions. For the entire system it is the union of the manuals the user must consult to do his entire job.

The architect of a system, like the architect of a building, is the user's agent. It is his job to bring professional and technical knowledge to bear in the unalloyed interest of the user, as opposed to the interests of the salesman, the fabricator, etc.

Architecture must be carefully distinguished from implementation. As Blaauw has said, "Where architecture tells what happens, implementation tells how it is made to happen." He gives as a simple example a clock, whose architecture consists of the face, the hands, and the winding knob. When a child has learned this architecture, he can tell time as easily from a wristwatch as from a church tower. The implementation, however, and its realization, describe what goes on inside the case--powering by any of many mechanisms and accuracy control by any of many.
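
Blaauw's clock maps directly onto a familiar programming construct. Here is a minimal C sketch (all names invented) in which the architecture is a small interface and the implementations vary freely behind it:

/* A minimal sketch of Blaauw's architecture/implementation split in C.
   All names here are invented for illustration. */
#include <stdio.h>

/* Architecture: WHAT the user sees. A clock yields hours and minutes. */
struct clock_arch {
    void (*read_time)(int *hours, int *minutes);
};

/* Two implementations: HOW the time is produced inside the case. */
static void quartz_read(int *h, int *m)
{
    long ticks = 630000;             /* pretend oscillator count */
    *h = (int)(ticks / 60000) % 24;  /* derive the reading from the count */
    *m = (int)(ticks / 1000) % 60;
}

static void pendulum_read(int *h, int *m)
{
    *h = 10;                         /* pretend gear-train positions */
    *m = 30;
}

static const struct clock_arch quartz   = { quartz_read };
static const struct clock_arch pendulum = { pendulum_read };

/* A user who has learned the architecture can read either clock. */
static void tell_time(const struct clock_arch *c)
{
    int h, m;
    c->read_time(&h, &m);
    printf("%02d:%02d\n", h, m);
}

int main(void)
{
    tell_time(&quartz);
    tell_time(&pendulum);
    return 0;
}

The struct is the architecture; each function behind it is an implementation, free to change without the user relearning anything.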

In System/360, for example, a single computer architecture is implemented quite differently in each of some nine models. Conversely, a single implementation, the Model 30 data flow, memory, and microcode, serves at different times for four different architectures: a System/360 computer, a multiplex channel with up to 224 logically independent subchannels, a selector channel, and a 1401 computer.

The same distinction is equally applicable to programming systems. There is a U.S. standard Fortran IV. This is the architecture for many compilers. Within this architecture many implementations are possible: text-in-core or compiler-in-core, fast-compile or optimizing, syntax-directed or ad-hoc. Likewise any assembler language or job-control language admits of many implementations of the assembler or scheduler.

Now we can deal with the deeply emotional question of aristocracy versus democracy. Are not the architects a new aristocracy, an intellectual elite, set up to tell the poor dumb implementers what to do? Has not all the creative work been sequestered for this elite, leaving the implementers as cogs in the machine? Won't one get a better product by getting the good ideas from all the team, following a democratic philosophy, rather than by restricting the development of specifications to a few?

As to the last question, it is the easiest. I will certainly not contend that only the architects will have good architectural ideas. Often the fresh concept does come from an implementer or from a user. However, all my own experience convinces me, and I have tried to show, that the conceptual integrity of a system determines its ease of use. Good features and ideas that do not integrate with a system's basic concepts are best left out. If there appear many such important but incompatible ideas, one scraps the whole system and starts again on an integrated system with different basic concepts.

As to the aristocracy charge, the answer must be yes and no. Yes, in the sense that there must be few architects, their product must endure longer than that of an implementer, and the architect sits at the focus of forces which he must ultimately resolve in the user's interest. If a system is to have conceptual integrity, someone must control the concepts. That is an aristocracy that needs no apology.

No, because the setting of external specifications is not more creative work than the designing of implementations. It is just different creative work. The design of an implementation, given an architecture, requires and allows as much design creativity, as many new ideas, and as much technical brilliance as the design of the external specifications. Indeed, the cost-performance ratio of the product will depend most heavily on the implementer, just as ease of use depends most heavily on the architect.

There are many examples from other arts and crafts that lead one to believe that discipline is good for art. Indeed, an artist's aphorism asserts, "Form is liberating." The worst buildings are those whose budget was too great for the purposes to be served. Bach's creative output hardly seems to have been squelched by the necessity of producing a limited-form cantata each week. I am sure that the Stretch computer would have had a better architecture had it been more tightly constrained; the constraints imposed by the System/360 Model 30's budget were in my opinion entirely beneficial for the Model 75's architecture.

Similarly, I observe that the external provision of an architecture enhances, not cramps, the creative style of an implementing group. They focus at once on the part of the problem no one has addressed, and inventions begin to flow. In an unconstrained implementing group, most thought and debate goes into architectural decisions, and implementation proper gets short shrift.

This effect, which I have seen many times, is confirmed by R. W. Conway, whose group at Cornell built the PL/C compiler for the PL/I language. He says, "We finally decided to implement the language unchanged and unimproved, for the debates about language would have taken all our effort."

What Does the Implementer Do While Waiting?
It is a very humbling experience to make a multimillion-dollar mistake, but it is also very memorable. I vividly recall the night we decided how to organize the actual writing of external specifications for OS/360. The manager of architecture, the manager of control program implementation, and I were threshing out the plan, schedule, and division of responsibilities.

The architecture manager had 10 good men. He asserted that they could write the specifications and do it right. It would take ten months, three more than the schedule allowed.

The control program manager had 150 men. He asserted that they could prepare the specifications, with the architecture team coordinating; it would be well-done and practical, and he could do it on schedule. Furthermore, if the architecture team did it, his 150 men would sit twiddling their thumbs for ten months.

To this the architecture manager responded that if I gave the control program team the responsibility, the result would not in fact be on time, but would also be three months late, and of much lower quality. I did, and it was. He was right on both counts. Moreover, the lack of conceptual integrity made the system far more costly to build and change, and I would estimate that it added a year to debugging time.

Many factors, of course, entered into that mistaken decision; but the overwhelming one was schedule time and the appeal of putting all those 150 implementers to work. It is this siren song whose deadly hazards I would now make visible.

When it is proposed that a small architecture team in fact write all the external specifications for a computer or a programming system, the implementers raise three objections: that the specifications will be too rich in function and will not reflect practical cost considerations; that the architects will get all the creative fun, shutting out the inventiveness of the implementers; and that the many implementers will have to sit idly by while the specifications come through the narrow funnel that is the architecture team.

The first of these is a real danger, and it will be treated in the next chapter. The other two are illusions, pure and simple. As we have seen above, implementation is also a creative activity of the first order. The opportunity to be creative and inventive in implementation is not significantly diminished by working within a given external specification, and the order of creativity may even be enhanced by that discipline. The total product will surely be.

The last objection is one of timing and phasing. A quick answer is to refrain from hiring implementers until the specifications are complete. This is what is done when a building is constructed.

In the computer systems business, however, the pace is quicker, and one wants to compress the schedule as much as possible. How much can specification and building be overlapped?

As Blaauw points out, the total creative effort involves three distinct phases: architecture, implementation, and realization. It turns out that these can in fact be begun in parallel and proceed simultaneously.

In computer design, for example, the implementer can start as soon as he has relatively vague assumptions about the manual, somewhat clearer ideas about the technology, and well-defined cost and performance objectives. He can begin designing data flows, control sequences, gross packaging concepts, and so on. He devises or adapts the tools he will need, especially the record-keeping system, including the design automation system.

Meanwhile, at the realization level, circuits, cards, cables, frames, power supplies, and memories must each be designed, refined, and documented. This work proceeds in parallel with architecture and implementation.

The same thing is true in programming system design. Long before the external specifications are complete, the implementer has plenty to do. Given some rough approximations as to the function of the system that will be ultimately embodied in the external specifications, he can proceed. He must have well-defined space and time objectives. He must know the system configuration on which his product must run. Then he can begin designing module boundaries, table structures, pass or phase breakdowns, algorithms, and all kinds of tools. Some time, too, must be spent in communicating with the architect.

Meanwhile, on the realization level there is much to be done also. Programming has a technology, too. If the machine is a new one, much work must be done on subroutine conventions, supervisory techniques, searching and sorting algorithms.

Conceptual integrity does require that a system reflect a single philosophy and that the specification as seen by the user flow from a few minds. Because of the real division of labor into architecture, implementation, and realization, however, this does not imply that a system so designed will take longer to build. Experience shows the opposite, that the integral system goes together faster and takes less time to test. In effect, a widespread horizontal division of labor has been sharply reduced by a vertical division of labor, and the result is radically simplified communications and improved conceptual integrity.

No Silver Bullet: Essence and Accidents of Software Engineering

From The Mythical Man-Month, Anniversary Edition, pages 7-8.


Fashioning complex conceptual constructs is the essence; accidental tasks arise in representing the constructs in language. Past progress has so reduced the accidental tasks that future progress now depends upon addressing the essence.

Of all the monsters that fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors. For these, one seeks bullets of silver that can magically lay them to rest.

The familiar software project, at least as seen by the non-technical manager, has something of this character; it is usually innocent and straightforward, but is capable of becoming a monster of missed schedules, blown budgets, and flawed products. So we hear desperate cries for a silver bullet--something to make software costs drop as rapidly as computer hardware costs do.

But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity. In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.

Skepticism is not pessimism, however. Although we see no startling breakthroughs--and indeed, I believe such to be inconsistent with the nature of software--many encouraging innovations are under way. A disciplined, consistent effort to develop, propagate, and exploit these innovations should indeed yield an order-of-magnitude improvement. There is no royal road, but there is a road.

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.

Does It Have to Be Hard? -- Essential Difficulties

Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any--no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware.

We cannot expect ever to see twofold gains every two years.

First, one must observe that the anomaly is not that software progress is so slow, but that computer hardware progress is so fast. No other technology since civilization began has seen six orders of magnitude in performance-price gain in 30 years. In no other technology can one choose to take the gain in either improved performance or in reduced costs. These gains flow from the transformation of computer manufacture from an assembly industry into a process industry.
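
A quick check of the scale: six orders of magnitude over 30 years works out to

$$10^{6/30} \approx 1.58$$

per year, a doubling roughly every 18 months, which is Moore's law as usually stated.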

Second, to see what rate of progress one can expect in software technology, let us examine the difficulties of that technology. Following Aristotle, I divide them into essence, the difficulties inherent in the nature of software, and accidents, those difficulties that today attend its production but are not inherent.

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract in that such a conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

Let us consider the inherent properties of this irreducible essence of modern software systems: complexity, conformity, changeability, and invisibility.

Complexity. Software entities are more complex for their size than perhaps any other human construct because no two parts are alike (at least above the statement level). If they are, we make the two similar parts into a subroutine--open or closed. In this respect, software systems differ profoundly from computers, buildings, or automobiles, where repeated elements abound.

Digital computers are themselves more complex than most things people build: They have very large numbers of states. This makes conceiving, describing, and testing them hard. Software systems have orders-of-magnitude more states than computers do.

Likewise, a scaling-up of a software entity is not merely a repetition of the same elements in larger sizes; it is necessarily an increase in the number of different elements. In most cases, the elements interact with each other in some nonlinear fashion, and the complexity of the whole increases much more than linearly.
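
The nonlinearity has a simple lower bound: among n distinct elements, the potential pairwise interactions alone number

$$\binom{n}{2} = \frac{n(n-1)}{2},$$

growing quadratically, while the state space, being roughly the product of the per-element states, grows faster still.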

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence. For three centuries, mathematics and the physical sciences made great strides by constructing simplified models of complex phenomena, deriving properties from the models, and verifying those properties by experiment. This paradigm worked because the complexities ignored in the models were not the essential properties of the phenomena. It does not work when the complexities are the essence.

Many of the classic problems of developing software products derive from this essential complexity and its nonlinear increases with size. From the complexity comes the difficulty of communication among team members, which leads to product flaws, cost overruns, and schedule delays. From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability. From complexity of function comes the difficulty of invoking function, which makes programs hard to use. From complexity of structure comes the difficulty of extending programs to new functions without creating side effects. From complexity of structure come the unvisualized states that constitute security trapdoors.

Not only technical problems, but management problems as well come from the complexity. It makes overview hard, thus impeding conceptual integrity. It makes it hard to find and control all the loose ends. It creates the tremendous learning and understanding burden that makes personnel turnover a disaster.

Conformity. Software people are not alone in facing complexity. Physics deals with terribly complex objects even at the "fundamental" particle level. The physicist labors on, however, in a firm faith that there are unifying principles to be found, whether in quarks or in unified field theory. Einstein argued that there must be simplified explanations of nature, because God is not capricious or arbitrary.

No such faith comforts the software engineer. Much of the complexity that he must master is arbitrary complexity, forced without rhyme or reason by the many human institutions and systems to which his interfaces must conform. These differ from interface to interface, and from time to time, not because of necessity but only because they were designed by different people, rather than by God.

In many cases, the software must conform because it is the most recent arrival on the scene. In others, it must conform because it is perceived as the most conformable. But in all cases, much complexity comes from conformation to other interfaces; this complexity cannot be simplified out by any redesign of the software alone.



Discussion: The Mythical Man-Month

My 1 1/2 Cents

Wow! Nobody seems to have noticed a minor detail... IBM OS/360 looks to have been a major screw-up. And that's the lesson: if you screw up with style, people will applaud you anyway... Especially if you are a computer scientist. (:-)

After all, computer science is after the first principles of the game, not just the "Implementation business"...