Programming style


To a certain extent the debate about programming style is a debate about "form over substance" versus "substance over form". Only mediocre programmers are slaves of a rigid style. A talented programmer, while adhering to chosen and rational style guidelines, say, 99% of the time, is able to recognize the 1% of situations which require deviating from that style in one direction or another. In no way is a style absolute. You do not want to be handcuffed by prohibitions of every feature that many people abuse and that can lead to subtle errors.

For example, in a large class of small to medium programs it does not make much sense to compartmentalize the namespace into several distinct sub-spaces; it is better to use a set of global variables shared across all subroutines and communicate via them instead of via parameters and isolated namespaces inside subroutines. That makes the code more transparent, more easily understandable and more easily modifiable, as it shrinks the number of variables (and their aliases). Slicing such a program reveals all uses of a particular variable quite nicely. This violates a lot of style guidelines, but so be it. In a more general sense, slavish following of any programming paradigm, be it structured programming, OO, or (God forbid) extreme programming, is a sign that a person did not really understand the guidelines, as real understanding includes understanding their limits of applicability. Of course, it can also be a sign of a burning desire to exploit the situation to one's own (often commercial) advantage, which often happens with book authors ;-). Under neoliberalism truth and integrity end up taking a back seat to winning and making a lot of money.

When I see regular Unix command-line utilities (say, for backup, or sorting) written in OO style, the question in my head is always the same: why did the author think that this programming paradigm was applicable to this situation? It is clearly not: using this style unnecessarily complicates the program and makes it more buggy and less understandable than alternatives, including plain procedural programming.

In a way it is similar to the discussion about structured programming, in which a simple religious slogan was at one time very popular: "Gotos are considered harmful". But as Donald Knuth showed in his famous paper Structured programming with goto statements, this slogan was clearly a simplistic false start, because the key (and very useful) idea is not the absence of goto, but the presence of well understood and well behaved control structures. And if the language lacks a particular structure, there is no sin in implementing it using goto (preferably with proper comments). I would recommend reading this article, as the problem it discusses is the immortal problem of "substance over form" and it reappears in programming every decade or two with a new face and in a different context.

BTW the structured programming movement was the first pseudo-religious movement in the programming community (see Structured programming). It emphasized superficial, lexical-level constructs while failing to address the fundamental problem of good program structure. As Knuth sarcastically noted, the essence of structured programming is not the absence of goto but the presence of an elegant structure corresponding to the problem at hand, a far more difficult goal to achieve. The Grand Ayatollahs of the movement did not even address (or discover) the fundamental fact that there is a set of "prime" programming structures, special types of directed graphs with a single entry and a single exit point, which cannot be decomposed into simpler prime graphs.

The emphasis in our culture, unfortunately, has increasingly become one of form over substance: too often image matters more than truth. Here is what Knuth said about this topic:

Before beginning a more technical discussion, I should confess that the title of this article was chosen primarily to generate attention. There are doubtless some readers who are convinced that abolition of go to statements is merely a fad, and they may see this title and think, "Aha! Knuth is rehabilitating the go to statement, and we can go back to our old ways of programming again." Another class of readers will see the heretical title and think, "When are die-hards like Knuth going to get with it?" I hope that both classes of people will read on and discover that what I am really doing is striving for a reasonably well balanced viewpoint about the proper role of go to statements. I argue for the elimination of go to's in certain cases, and for their introduction in others.

I believe that by presenting such a view I am not in fact disagreeing sharply with Dijkstra’s ideas, since he recently wrote the following: “Please don’t fall into the trap of believing that I am terribly dogmatical about [the go to statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!” [29]. In other words, it seems that fanatical advocates of the New Programming are going overboard in their strict enforcement of morality and purity in programs.

Programming style reflects the personality of the programmer and the depth of his experience and specialized knowledge. It is mostly about the art of decomposition of programs (for example, a defensive programming style enforces certain modules, such as a logging module, and a "paranoid" treatment of anything that has a return code) and about knowledge of the algorithms unique to a given domain. Lexical and syntactic elements of style should be handled automatically using a beautifier. In other words, 90% of the knowledge that went into writing a particular program is implicit, and as such the elements of style used are difficult to deduce from the code. That's why the best (and now largely outdated) book about programming style uses examples to explain each particular rule and how to apply it to real problems.

As such, good programming style is a matter of years of experience, and it defines a given programmer's productivity and number of errors per thousand lines of code. It is very difficult to define and emulate the style of really talented programmers, such as successful compiler writers. One thing they have in common is the ability to avoid excessive complexity. A good style is a simple and transparent style: it avoids too-tricky language idioms unless they are absolutely necessary and save a lot of lines of code. While individual recommendations on what constitutes good style in a particular programming language might be quite simple, the knowledge of where to apply them and where to ignore them is not. Especially tricky are issues of applicability (aka limits of applicability). For example, should one use global variables? At some point they become self-defeating and one needs to switch mostly to parameters. What about using "common blocks", namespaces with universal or almost universal visibility?

Also, it is not that easy to recognize great programming style even when we see a program in a language that we know well enough to judge it. We might appreciate the depth of knowledge of the language, but the level of mastery involved is much more than that. It is more difficult, or impossible, for languages we do not know well, or when we lack knowledge of the context in which the program was created and of the problems the designer faced, as is the case with reading Lions' Commentary on Unix, or Knuth's code (see for example www-cs-faculty.stanford.edu; a couple of examples are SIMPATH-DIRECTED-CYCLES, DOT-DIFF and ADVENT, the adventure game).

 
Programming style reflects the personality of the programmer and the depth of his experience and specialized knowledge. As such, it is difficult or impossible to replicate "at will" outside the domain for which a particular program was written. The part that is easily replicated is the less interesting part: the superficial elements of the style, most of which can be delegated to a good pretty printer.

So, as is the case with great works of literature, great programs and the style in them are not easy to emulate, nor can that style be extracted as a set of rules to follow, and it is unclear whether blind emulation of an extracted rule set would do us much good. BTW Donald Knuth's magnum three-volume opus, The Art of Computer Programming (TAOCP), is often bought, frequently cited, sometimes browsed, but almost never read :-). The following statement by Bill Gates from 1995 is often quoted:

"If you think you're a really good programmer, or if you want to challenge your knowledge, read the `Art of Computer Programming' by Donald Knuth. Be sure to solve the problems. ... If some people are so brash that they think they know everything, Knuth will help them understand that the world is deep and complicated. ...

It took incredible discipline, and several months, for me to read it. I studied 20 pages, put it away for a week and came back for another 20 pages. You should definitely send me a resume if you can read the whole thing."

Some good critique of simplistic approaches to programming style as a set of rules (which include OO zealotry as a subset, as well as "for profit" religious movements striving to outdo the Church of Scientology, like extreme programming) can be found in the following old (2009) post by Beth Elisheva from Perlmonks (Best practices, revisited, Jul 05, 2009):

ELISHEVA,  Jul 05, 2009

... ... ...

In the last 10 years the term "best practice" has lost much of its association with the process of learning. Instead best practice has become a buzz word that is increasingly associated with a laundry list of rules and procedures. Perhaps it is our innate need to measure ourselves against a standard. Or perhaps it is the word "best". There can only be one best, even if it takes a process to find it. Why reinvent the wheel once the best has been found?

Nowhere is this more clear than in the way many organizations and some monks seem to use Damian Conway's book on Perl best practices. The best practice in Damian Conway's book refers (or should refer) to the process that Damian Conway went through while developing his coding practice. He wrote this book in part because, over the years, his own coding style had come to resemble an archeological dig through his own coding history (Interview with Damian Conway, Brian d foy). However, few people talk about his process, whereas many preach (or complain about) his rule list.

It may be human nature to turn best practices into best rules, but it isn't good management:

1. Best practice by the rulebook oversimplifies the knowledge transfer process. Knowledge consists of several components: facts, recipes, thinking processes, information gathering skills, and methods of evaluation. Rules are only effective in transferring the first two of these. However, all the rest are essential. Without them rules get out of date or will be applied in counter productive ways.

Facts, recipes, and coding standards are like wheels and brakes. But they do not drive the car. If the driver doesn't know the difference between the brake and the accelerator, the car will crash no matter how wonderful the wheels. Hard to communicate skills like information gathering and methods of evaluation are what drive the coding car, not the rules capturing layout and syntax.

If we focus only on rules, it is natural to assume that knowledge will be transferred simply by giving people enough motivation to follow rules. But this doesn't turn out to be the case.

In 1996 (Strategic Management Journal), Gabriel Szulanski (The Wharton School) published a study analyzing the impediments to knowledge transfer. (see Exploring internal stickiness: Impediments to the transfer of best practice in the firm). He considered many factors that might get in the way. The study concluded that motivation was overshadowed by three other issues: "lack of absorptive capacity", "causal ambiguity", and "arduousness of the relationship".

If rules alone were enough none of these would matter. "Lack of absorptive capacity" means that the necessary background knowledge to understand and value the rule is missing. Causal ambiguity means insufficient knowledge of how the rules relate to outcomes. Put in plain English: we aren't very good at applying rules without reasons or [proper understanding of their] context.

However, explaining rules also means transferring judgment - something that cannot be captured purely in the rules themselves. And this brings us to the last barrier to knowledge transfer: "arduousness of the relationship". This awkward term refers to how well the knowledge provider and receiver work together. Do they have a mentoring relationship that can answer questions and provide background information? Or are they simply conduits for authority, insisting on the value of the rules without helping show how the knowledge can be adapted to exceptional situations?

2. An overemphasis on rules is a short-term investment in a long-term illusion. Software is full of symbols and a great deal of code is boiler plate. It is easy to imagine that rules play a large role in software and the right set of rules will have a large payback.

This might be true if writing software were merely a transformation process. But if it were, we'd have developed software to automatically translate business processes, math books, and motion studies into software long ago. To be sure some of the coding today could probably be done by software, but not all of it. In every human endeavor there is a certain amount of boiler plate activity that passes for intellectual labour. This can be automated. But there is also a certain amount of genuine creativity and analysis. It takes a human being to know which is which. It takes a human being to do the latter.

If we want superior development teams, we need to spend our energy nurturing what only we humans can do. This is where our investment needs to sit. As for the things we can do with rules: if we focus our skills on the creative portions we will figure out a way to write software that makes the boiler plate things go away. It is only a matter of time.

3. Rules that free us from thinking do not provide for change. Rules that free us from thinking are, by their very nature, static. In 1994 a management book "Built to Last" took the management world by storm and became a knock out best seller for several years thereafter. 10 years later, the magazine "Fast Company" wrote an article reviewing the impact of the book and the companies featured in that book. Was Build to Last Built to Last - in 2004 about half the companies described no longer would qualify as built to last. When interviewed for the article, one of the authors of the book, James C. Collins, argued that these companies had lost sight of what had made them great. He emphasized "Theeee most important part of the book is chapter four! ... Preserve the core! And! Stimulate progress! To be built to last, you have to be built for change!"

4. If it isn't abnormal it can't produce abnormal returns. The things that can be reduced to judgment-free rules offer no competitive advantage because they can be easily reproduced. No matter how hard we try we cannot build the best coding shop by following everybody else's rules. To excel, our practices need to be closely matched to our team's strengths and weaknesses.

Some of the more recent management literature has begun stressing the concept of "signature practices". Signature practices are practices that are unique to an organization. They capture its special ethos and talents and serve as a focal point around which the company (or coding team) can develop its competitive edge. (See, for example "Beyond Best Practice", by Linda Gratton and Sumatra Ghoshal, Sloan Management Review, April 15, 2005).

I don't mean to be knocking rules. They have their place. But if we want to have an outstanding development team, our definition of best practice needs to expand beyond rules. We need to think about what makes our teams thrive. What helps them be at their most creative? What gets them into flow? When are they best at sharing knowledge with each other? At understanding each others code? Incorporating new team members? At meeting customers' needs? And then we have to be prepared to be ruthless in getting rid of anything that gets in the way of that. Even if it is the rules themselves.

Best, beth

Wikipedia (Programming style) defines programming style rather narrowly, emphasizing just lexical style, which should be the domain of the prettyprinter you use and be handled automatically. Here is a quote from an early version of Wikipedia, which emphasizes coding standards and as such only scratches the surface:

Programming style is a set of rules or guidelines used when writing the source code for a computer program. It is often claimed that following a particular programming style will help programmers to read and understand source code conforming to the style, and help to avoid introducing errors.

A classic work on the subject was The Elements of Programming Style, written in the 1970s, and illustrated with examples from the Fortran and PL/I languages prevalent at the time.

The programming style used in a particular program may be derived from the coding standards or code conventions of a company or other computing organization, as well as the preferences of the author of the code.

Programming styles are often designed for a specific programming language (or language family): style considered good in C source code may not be appropriate for BASIC source code, and so on.

However, some rules are commonly applied to many languages.

Good style is a subjective matter, and is difficult to define. However, there are several elements common to a large number of programming styles. The issues usually considered as part of programming style include the layout of the source code, including indentation; the use of white space around operators and keywords; the capitalization or otherwise of keywords and variable names; the style and spelling of user-defined identifiers, such as function, procedure and variable names; and the use and style of comments.

We will interpret this concept not only on the programming language's lexical and semantic levels, but also as a general way of structuring your programming environment to enhance the quality of programs and your productivity. The major part is the programming language that you use (or two languages, in case you practice the "dual language" programming paradigm) and the level of your experience with this language. But the editor and the debugger are also important, and they influence your programming style in deep but unobvious ways. They allow you to write larger and more complex programs in the same amount of time and with the same level of programming ability. Knuth once said that he prefers a language with a good debugger to a language with fancy features, and there is great wisdom in this point of view.

It is like in fashion. Your suit or dress by and large defines your style, but shoes, watches and haircut are also important ;-)

Speaking of the language: there are now several languages which are too big for a programmer to learn in full, and which force him to operate using a suitable subset. The classic example of such a language is Perl, but paradoxically Python with its libraries is now also big enough to fit this category. In a way a complex language reflects the complexity of the environment in which it operates, including the complexity of the operating system, so this is to a certain extent unavoidable. Which means that attempts to promote a simpler language (for example Python over Perl) might well be barking up the wrong tree: a simpler language just pushes more of this complexity into libraries instead of integrating it in the core.

I would like to emphasize the importance of a good programmable editor and its subtle influence on programming style. In general, the more primitive the editor you use, the shorter should be the modules that you write. So the editor you use affects, in a subtle way, your modularization decisions and thus the structure of your programs. Another problem is that a primitive editor simply slows you down, as there is no instant feedback via on-the-fly syntax checking and integration with the debugger. I will just say that a poor programming editor (Notepad/pico level) and poor or zero knowledge of the debugger cripple any programmer, downgrading him to a lower level of achievement, as he will never be able to write the same volume of code in a given interval of time as a programmer with a better editor and debugger. At the end of the day the volume of programs you produce does matter, both in a corporate environment and in open source development. In the latter, the larger the volume, the better the chances that some program "clicks" and becomes a hit. That's why some talented programmers wrote their own editors and debuggers (Thompson (ed), Stallman (Emacs), Bill Joy (vi), etc.)

Even in old DOS days MultiEdit was the de-facto standard among programmers who respected themselves (an old version for Windows is available for free). The Sublime editor ($80) is now a competitor and should probably be considered; it has a Python-based plugin API. In any case, nobody who values his or her talent should settle for less than the Komodo editor (which is free) or, better, an orthodox editor with folding, such as SlickEdit (which is expensive, but pretty good), Kedit, THE editor (free, no longer supported), etc. VIM is an acceptable editor, especially (and mostly) as GVIM, but it takes some effort to customize it into a suitable programming editor (for example, you need to add Xedit-style folding, macros for which are available; see the allfold macros developed by Marion Berryman).

Please note that programs like WinSCP allow the use of any good Windows-based editor for editing files on a remote Linux server, using scp for retrieving and then saving the modified text. If the editor allows macros you can also use an "rsync; open" combination to open the file and "save; rsync" to save it. With current network speeds you can easily use a Windows GUI editor instead of a text-based editor on Linux. BTW such an arrangement is also an element of programming style, which affects the way you use the language and your productivity with it.

A high-powered debugger is also very important, as it provides great insight into the actual behavior of your code and allows you to better understand your own programs, and thus to eliminate more errors and write better documentation. The standard of power and flexibility for scripting languages is the Perl debugger, which is a really amazing programming product and the reason many programmers and, especially, system administrators still use Perl as their main language, despite all the negative publicity around it. Recently Python, which was deficient in this area, caught up to Perl, and since version 2.6 it has a decent debugger (with unt and run commands). One problem with OO languages is that they need a more complex debugger. For system administrators the bash debugger (bashdb) is a very important element of the environment which affects your programming style and actually allows writing larger and more complex scripts (at the same level of programming ability); it should be installed and used by any sysadmin who writes non-trivial scripts and respects his time.

Nowadays the editor and debugger tend to be integrated into an IDE for the particular language, but please note that skill in using the editor and the ability to write complex macros, which are important for programmer productivity, are as slow and as difficult to develop as programming skills, and long-term use of the same powerful editor pays off greatly. In this sense Emacs, VIM and Xedit derivatives (Kedit was/is a great editor released in 1983; SlickEdit was released in 1988; THE, the Hessling editor, was released in 1992) give users a distinct advantage, as they can use the same editor for more than 20 years. Some Emacs, VIM and Kedit users have used the same editor for more than 40 years, almost all their careers, and have accumulated a lot of sophisticated and often simply brilliant macros.

Another important element of style is the availability of a book library and the maintenance of a personal knowledge base. Many top programmers learned C programming from K&R's book, which was one of the few introductory programming language books that can be called "great", as it exposes the Unix philosophy and component model as well as their unique approach to programming, which is actually distinct from, and superior to, the approach of Knuth in his famous TAOCP books and TeX.

To become a good writer, one studies the works of great writers. To become a good programmer, perhaps, it would help to study the works of great programmers. It is easy to say that we should be studying the works of the masters, especially masters in the language that we use; it is much more difficult to accomplish. Along with reading books written by great programmers, such as Knuth, Wirth and Larry Wall, the compilers and software libraries written by them provide some valuable insights into good programming style. No set of rules can replace the insight that you get from reading the source code of great programmers. See Some books and source code to read to learn a good programming style for some recommendations.

Some general recommendations

When you think about code format, think beyond just indentation. I wrote my first prettyprinter in 1978 (see neatpl) and I can tell you that the habit of using a pretty printer is more important than any particular set of indentation and statement structuring rules.


Essentially a pretty printer "enforces" a particular style which is consistent across all your programs. A good pretty printer also supplies valuable diagnostics, more precise than the compiler's, for some difficult-to-find errors (an extra "{" in a C-style language is a classic example of such an error; try to find it in a 1000-line monolithic program, which was not written by you, without a prettyprinter ;-).

This is especially true if it implements a full lexical analyzer (which is not possible for all languages; for Bash and Perl it is probably not even desirable -- see for example Neatbash, a simple bash prettyprinter based on "fuzzy" determination of the nesting level).

In any case, while these recommendations are far from absolute, my experience suggests following a few simple rules:

  1. Always use beautifiers; make it a habit. Adding more code to existing unindented code, without reformatting/beautifying it, verges on unethical and unprofessional behavior. There's no excuse for making an already difficult task even more difficult. Never use tabs: various editors have different tab settings, and unless you unify them the resulting code is a mess. Keep your lines to less than 132 characters (the old line printer width ;-); longer lines are difficult to comprehend.
     
  2. The preferable beautifier style (often configurable) is to put block openers (e.g. "{", "then", etc.) on the same line as the statement that started the block (e.g. "if"). It saves vertical space. As you read the code, your eye should follow the indentation, not the tiny "}"s or "END"s. Putting the block openers on the next line simply wastes vertical space, which means you can see (and thus understand) less code.
  3. Use consistent naming conventions which help to distinguish important classes of variables. For example (in no way are these recommendations absolute):
    1. Use all upper case only for constants, file handles and such.
    2. Name global variables in mixed case (like CurrentLine)
    3. Name local variables using underscores between words (e.g. last_line)
  4. Develop and use a standard header comment. In particular, the header must describe the purpose of the code, the context in which it is used, options/parameters, and the development history. The description of the "purpose" of the program or module should generally be no less than three lines!

Less obvious or universal recommendations:

  1. Sometimes you can use horizontal whitespace to make small chunks of code easier to read, by vertically aligning similar lines of code.
  2. Sometimes it is helpful to use literate programming tools (like Perl POD) to generate an HTML version of the documentation from the source code, or at least small parts of it.

Kernighan & Plauger classic book "The Elements of Programming Style"

Kernighan & Plauger's classic book "The Elements of Programming Style" was the first realistic attempt to provide insight into what constitutes good programming style and how it differs from a bad one. While much of its advice is now outdated, and it uses almost forgotten languages (older versions of Fortran and PL/I) for its examples, many of its principles still apply to well written code.

Almost every principle (there are over 70 in all) in this book is followed by example code. Most of the examples are outdated, but you can get the general idea even from outdated examples. It would be nice to see a modern version of this book.

The danger of religious fervor

One important observation is that there is a lot of religious fervor in the area of programming style. In reality, style recommendations are far from absolute. Much depends on the tools and language used. Some extremes, like the structured programming or program verification movements, while containing some useful bits, distract more than they help in writing a good program. As Knuth noted, the weakness of structured programming is that it emphasizes the absence of goto, not the presence of structure.

Style can easily become a religious issue. And that's a big, big danger, as fanatics poison everything they touch. As Rob Pike aptly noted: "Under no circumstances should you program the way I say to because I say to; program the way you think expresses best what you're trying to accomplish in the program. And do so consistently and ruthlessly."


As Dennis Ritchie noted, "It is very hard to get things both right (coherent and correct) and usable (consistent enough, attractive enough)." That's a matter of elegance, and achieving it requires talent. Most of the time making code shorter improves maintainability, but, of course, one needs to avoid overusing/abusing idioms.

Style is not only programming constructs you use in your favorite language

As we mentioned above, one often overlooked element of programming style is the text editor you use. In this sense we can talk about "extended" programming style, which along with the use of the language includes the use of the text editor, the debugger and the pretty printer. The level of mastery of these three vital tools matters greatly and cannot be overestimated.

Many programmers deprive themselves by using a very primitive editor or by failing to learn the capabilities of the editor they use. A good editor helps to avoid such mistakes as misspelled variables and can mark syntax errors on the fly (the Komodo editor, which is free, does this; but generally you might want a more powerful editor with programmable macros). But the most important part is ease of manipulation of fragments of text, as programming is less about writing from scratch than about adapting elements of already existing programs to a new task.

Another important tool that I would classify as an element of programming style is the debugger that you use. Yet another is the pretty printer. Both are essential and help to avoid errors. And this is what programming style is about.

If a language discourages the use of a pretty printer (like Python does), this can be viewed as a serious drawback of the language (although the use of a Python IDE somewhat compensates for that; also, pretty-printing of indentation can be made possible by using pseudo-comments that determine the nesting).

The statement that you "do not need a debugger" is a sign of immaturity, even if it is voiced by Linus Torvalds. Not only do you need a decent debugger, you also need to learn how to use it productively. As Donald Knuth noted, the availability and capabilities of the debugger are almost as important as the quality of the primary programming language; it is an important element of "extended" programming style and of your mastery of programming.

Defensive programming

  Anything that can go wrong will go wrong.
Nothing is as easy as it looks.
Everything takes at least twice as long as you think.
If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong. Corollary: if there is a worse time for something to go wrong, it will happen then.
If anything simply cannot go wrong, it will anyway.
If you perceive that there are four possible ways in which a procedure can receive wrong parameters, there will always be a fifth way.
Due to maintenance and enhancements, which break conceptual integrity, programs tend to degenerate from bad to worse, and the number of bugs in them tends not to decrease but to increase.
If logs suggest everything seems to be going well, you have obviously overlooked something.
Hardware always sides with the flaws in software.
It is extremely difficult to make a program foolproof because fools are so ingenious.
Whenever you set out to do something really important, something else comes up that should be done first.
Every solution of a problem breeds new problems.

Murphy laws of engineering
(author adaptation)

NOTE: Now we have a specialized page devoted to Defensive programming

The basic idea behind this approach is to write the program like a compiler, so that it is able to run properly even with unforeseen input from users. In many ways, the concept of defensive programming is much like that of defensive driving, in that it tries to anticipate problems before they arise. One common feature is the ability to handle strange input without crashing or creating a disaster.

That essentially means that the program is written in such a way that it is able to protect itself against invalid inputs. The invalid inputs (aka bad data) can come from user input via the command line, from undetected errors in other parts of the program, or from special conditions related to various objects such as files (missing file, insufficient permissions, etc.). Bad data can also come from other routines in your program via input parameters. Defensive programming is greatly facilitated by an awareness of specific threats and vulnerabilities (for example, for sysadmin scripts and utilities this is a collection of "horror stories").

In other words, defensive programming is about making the software work in a predictable manner in spite of unexpected inputs.

The origin of this concept can be traced to the period of the creation of the Ada programming language (1977-1983), or even earlier. The former DOD standard for large-scale, safety-critical software development emphasized encapsulation, data hiding, strong typing of data, and minimization of dependencies between parts in order to minimize the impact of fixes and changes.

One typical problem in large software is that changes fixing one problem create another, or two. One way to fight this problem of "increasing entropy with age", or loss of conceptual integrity, is to institute a set of sanity checks which detect abnormal parameter values (assertions or some similar mechanism). In most systems the resulting overhead is negligible, but the positive effect is great.

As an example of an early attempt to formulate some principles of defensive programming style, we can list Tom Christiansen's recommendations (Jan 1, 1998) for Perl. Perl does not have strict typing of variables and, by default, does not require any declaration of variables, creating the potential for misspelled variables slipping into the production version of a program (unless you use the strict pragma; use of the latter became standard in modern Perl). While these recommendations are more than 20 years old, they are still relevant:

Of those, the most interesting is the taint option (strict is also interesting, but it simply partially fixes oversights in the initial design of the language; Python uses the sounder idea of typing values and requiring explicit conversion between values of different types). Here is a quote from Perl Command-Line Options - Perl.com:

The final safety net is the -T option. This option puts Perl into "taint mode." In this mode, Perl inherently distrusts any data that it receives from outside the program's source -- for example, data passed in on the command line, read from a file, or taken from CGI parameters.

Tainted data cannot be used in an expression that interacts with the outside world -- for example, you can't use it in a call to system or as the name of a file to open. The full list of restrictions is given in the perlsec manual page.

In order to use this data in any of these potentially dangerous operations you need to untaint it. You do this by checking it against a regular expression. A detailed discussion of taint mode would fill an article all by itself so I won't go into any more details here, but using taint mode is a very good habit to get into -- particularly if you are writing programs (like CGI programs) that take unknown input from users.
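
As a minimal sketch of the idea (the whitelist pattern and usage below are illustrative assumptions, not the author's code), untainting boils down to extracting the acceptable part of the data through a regular expression capture:

#!/usr/bin/perl -T
# Minimal sketch of untainting under taint mode (hypothetical whitelist pattern)
use strict;
use warnings;

my $input = $ARGV[0] // '';                  # tainted: it comes from outside the program
my $name;
if ( $input =~ /\A([\w.-]+)\z/ ) {           # validate against an explicit whitelist pattern
    $name = $1;                              # a regex capture is considered untainted
}
else {
    die "Unsafe file name given on the command line\n";
}
open( my $fh, '<', $name ) or die "Cannot open $name: $!\n";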

More on defensive programming and Murphy law

  Mathematician Augustus De Morgan wrote on June 23, 1866:[3] "The first experiment already illustrates a truth of the theory, well confirmed by practice, what-ever can happen will happen if we make trials enough." In later publications "whatever can happen will happen" occasionally is termed "Murphy's law," which raises the possibility—if something went wrong—that "Murphy" is "De Morgan" misremembered (an option, among others, raised by Goranson on the American Dialect Society list).[4]

American Dialect Society member Bill Mullins has found a slightly broader version of the aphorism in reference to stage magic. The British stage magician Nevil Maskelyne wrote in 1908:

"It is an experience common to all men to find that, on any special occasion, such as the production of a magical effect for the first time in public, everything that can go wrong will go wrong. Whether we must attribute this to the malignity of matter or to the total depravity of inanimate things, whether the exciting cause is hurry, worry, or what not, the fact remains".[5]

In 1948, humorist Paul Jennings coined the term resistentialism, a jocular play on resistance and existentialism, to describe "seemingly spiteful behavior manifested by inanimate objects",[6] where objects that cause problems (like lost keys or a runaway bouncy ball) are said to exhibit a high degree of malice toward humans.[7][8]

The contemporary form of Murphy's law goes back as far as 1952, as an epigraph to a mountaineering book by John Sack, who described it as an "ancient mountaineering adage": Anything that can possibly go wrong, does.[9]

Murphy's law - Wikipedia

 

The number of bugs in any more or less complex piece of software is indefinite; often nasty bugs go undetected for years. This is a fact of life, yet another confirmation of the validity of Murphy's law in software engineering ;-). Defensive programming is in some ways just a set of practices that protect us from the effects of Murphy's law in software. In other words, when coding we always need to assume the worst, as Murphy's law suggests.

Of course shorter, simpler code is always better, but defensive programming proposes a somewhat paradoxical combination: simplifying code by eliminating unnecessary complexity, while adding specialized code devoted to analyzing the validity of inputs and values of variables (aka "sanity checks"). There is no free lunch.

Assuming the worst means that we have to deal with potential failures that theoretically should never happen. In some cases errors are typical and repeatable, and in those cases a correction can be made "on the fly" with a high probability of success (for example, a missing semicolon at the end of a line in programming languages). This attempt to correct things that can be corrected, and to bail out of situations that can't, is a distinctive feature of defensive programming, and it makes it even more similar to how compilers behave with the source submitted to them. For example, if a subroutine requires a low bound and a high bound extracted from input data, and those two parameters are switched, it often makes sense not to abort the program, but to correct the error by swapping the parameters and proceeding. If a record has the wrong structure or values, we can discard this particular record and proceed with the remaining ones, at least to find more errors, even if the output does not make much sense. These examples can be continued indefinitely, but you get the idea.
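
A minimal sketch of this "correct what can be corrected" behavior (the subroutine name and warning text are made up for illustration):

sub normalize_range {
    # Defensive handling of swapped bounds: warn, correct, and proceed
    my ( $low, $high ) = @_;
    if ( $low > $high ) {
        warn "Bounds given in wrong order ($low > $high); swapping and proceeding\n";
        ( $low, $high ) = ( $high, $low );
    }
    return ( $low, $high );
}

my ( $lo, $hi ) = normalize_range( 100, 1 );   # yields (1, 100) plus a warning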

The key ideas of defensive programming in the context of writing scripts, utilities and small to medium system programs

As a concept, "defensive programming" is interpreted differently by different authors. Our interpretation stems from the author's experience with compiler writing, and it mainly relates to small to medium programs (less than 10K source lines), often written for sysadmins by sysadmins. This is the current area of the author's expertise.

Writing large programs like compilers (typically over 100K lines of source code, an area in which the author started his programming career) is a team effort and requires some additional organizational measures that are partially outlined by Frederick Brooks in The Mythical Man-Month (and his later summary in No Silver Bullet, freely available online), and later expanded upon by Steve McConnell in Code Complete (1993). They do not contradict the principles outlined below, but large-scale software development requires much more, especially at the level of software teams.

In this, more narrow, context it includes several ideas intended to ensure the continued functioning of software when it is supplied with incorrect input data.

Among the key ideas are

  1. Production code should handle errors in a more sophisticated way than "garbage in, garbage out." Also, constraints that apply to the production system do not necessarily apply to the development version. That means that some code which helps to flush out errors quickly can exist in the development version and be removed by a macroprocessor in the production version.
  2. The program should always provide meaningful diagnostics and logging. Meaningful diagnostics are typically a weak spot of many Unix utilities, which were written when every byte of storage was at a premium and a computer used to have just 1MB of memory or less (Xenix, one of the early Unixes, worked well on 2MB IBM PCs). If the messages you get in case of errors or crashes are cryptic, it takes a lot of effort to relate the message to the root cause; and if you are the user of a program that you yourself have written, that adds insult to injury :-). Here we strive for the quality of diagnostics typically demonstrated by a debugging compiler. Defensive programming also presumes the presence of a sophisticated logging infrastructure within the program; logs should be easy to parse and filter for relevant information.

    Messages are classified by severity with at least four levels distinguished:

    1. Warnings: informational messages that do not affect the validity of the program output or any results of its execution, but describe a situation that deserves some attention.
    2. Errors (correctable errors): messages that something went wrong, but the result of the execution of the program is still OK and/or the output of the program is most probably still valid.
    3. Severe errors (failures): the program can continue, but the results are most probably garbage and should be discarded. Diagnostic messages produced after this point might still have value.
    4. Terminal errors (abends): the program cannot continue at this point and needs to exit. For such abnormal situations you can even try to email the developer.

    To achieve this one needs to write, or borrow and adapt, a special message-generation subroutine, for example logmes, modeled after the one used in compilers. One of the parameters passed to this subroutine should be a one-byte code of the error severity (or its numeric equivalent), along with the line on which the error was detected. The abbreviated string for these codes has the mnemonic iWest (i.e. informational, Warning, Error, Severe, Terminal); a minimal sketch of such a subroutine is shown after this list.


  3. Assertions are used in the critical areas of the code (often checking the validity of parameters). Sometimes they are called preconditions and refer to Boolean conditions that must be verified at the start of the execution of a method or subroutine. The idea of using assertions is to prevent the program from causing damage if an invalid combination of values exists at a particular point of the program.
  4. All return codes from external programs and modules are checked, for example after executing an rm command. "Postconditions" often involve checking the return code; generally a postcondition is a Boolean condition that holds true upon exit from the subroutine or method. In the case of sysadmin scripts, each executed external command is checked for its return code (RC), and if the code is outside the acceptable range an appropriate error is generated.
  5. There is a pre-planned debugging infrastructure within the program. At the very least, a special variable (typically named DEBUG) should be introduced that allows switching the program into a debugging mode, in which it produces more output and/or particular actions are blocked or converted into print statements.
  6. Presence of an "external command generation mode", if it makes sense. When the output is dangerous to execute, and the generated commands can benefit from visual inspection and/or editing by humans, a mode that generates the external commands should be implemented as an alternative to immediate execution.
  7. Program design includes design for semi-automatic testing (so-called acceptance tests working on predefined data). The procedure for testing the program after making changes should be documented. Despite a great deal of effort put into ensuring code is perfect, developers almost always miss a mistake or create code with unexpected results. Thorough testing by professional testers gives a developer the equivalent of hundreds of hours of product use to find errors before software is released. If modified code can be retested semi-automatically, at least in some cases, it increases the chances that no blunders were introduced into the code. A modest effort spent creating such a set of test cases will pay for itself multiple times over.
  8. Staging -- structuring the program as several consecutive stages communicating in the style of Conway's "coroutine" development paradigm. It is the author's conviction that the main ideas about structuring programs used in compiler writing have more general applicability, especially in writing tools like classic Unix utilities. Conway's coroutine methodology, an early program development methodology which Melvin Conway (the author of Conway's Law) applied to his early COBOL compiler for the USAF (melconway.com), is a tool for reducing complexity that is as valid today as it was in the early 60s. When the components of your program can be structured as stages and at first debugged while exchanging information via intermediate files, the isolation of the components is usually better thought out, and the intermediate tables and data structures are more solid than what is achievable with fancier programming methodologies. Such an approach to program design, where the task is separated into several consecutive stages, is almost forgotten now outside the compiler writing community, but it has more general applicability.
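
Here is a minimal sketch of such a logmes subroutine, as promised in item 2 above (the names, severity codes and message format are illustrative assumptions, not a fixed API):

# Minimal sketch of a compiler-style message generator.
# Severity codes follow the iWest mnemonic: I(nfo), W(arning), E(rror), S(evere), T(erminal).
my %SEVERITY = ( I => 0, W => 1, E => 2, S => 3, T => 4 );
my %errcount;                       # per-severity counters for an end-of-run summary
my $max_severity = 0;               # can drive the final return code of the program

sub logmes {
    my ( $line, $code, $message ) = @_;        # line number, severity code, message text
    $errcount{$code}++;
    $max_severity = $SEVERITY{$code} if $SEVERITY{$code} > $max_severity;
    printf STDERR "[%s] line %4d: %s\n", $code, $line, $message;
    exit 255 if $code eq 'T';                  # terminal errors abort the run
}

# Typical call site:
logmes( __LINE__, 'E', "Low bound is greater than high bound" );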

Generally a balance must be struck between programming that accounts for unexpected scenarios and code that contains too many extra checks without providing a benefit.

Audits are often used by a developer to review code that has been created. This allows other programmers to see the work that has been done, and readable code is important for this to be a realistic part of development.

The key tasks

We will enumerate just two key tasks that have proved to be useful in creating reliable programs using the "defensive programming" paradigm.

Creation of a powerful and flexible log routine

Any "decent" program or script should write a log of the actions it takes. It is as important a document as the protocol of compilation is for compilers, and this subroutine deserves thinking and careful programming. Logging beats a debugger if you want to know what's going on in your code during runtime. A good logging system should provide the following capabilities

For more complex programs there are two additional facilities which in certain cases might make sense to implement too:

In Bash you can create multiple subroutines (one for each type of error, like info (priority 0), warn (1), error (2), failure (3) and abend (4)), which all call a special subroutine, logme. The latter can write the message to the log and display it on the screen, depending on parameters such as verbosity. Bash provides the system variable $LINENO, which helps to identify from which part of the program a particular message was generated; use it as the first parameter to all the subroutines mentioned above (info, warn, error, failure and abend). For example

(( $UID != 0 )) && abend $LINENO "The script can be run only as root"

Bash also allows an interesting hack connected with the ability of exec to redirect the standard output and error streams within your script.

if (( $debug == 0 )) ; then
   DAY=`date +%a`
   LOG=/var/adm/logs/fs_warning.log.$DAY
   exec 1>>$LOG     # append all further STDOUT to the log file
   exec 2>&1        # and send STDERR to the same place
fi

This way you can forward all STDERR output into your LOG, which is important for troubleshooting; but this should be done only in production mode, because if you did it in debug mode you would lose all messages -- they would no longer be displayed on the screen.

Perl is more flexible and more powerful than Bash for writing sysadmin scripts and the like. In Perl you can be more sophisticated than in Bash and, for example, also create a summary of errors that is printed at the end of the log, as well as determine the return code of the script based on the diagnostic messages encountered. Like Bash, Perl has a special token, __LINE__, that is always equal to the line number of the script where it is used. For example:


( -d $d ) || logme(__LINE__, 'S', "The directory $d does not exist");

But in Perl you can be more sophisticated and use the caller function within logme to determine the line number. Perl is the only scripting language known to me which allows you not to pass __LINE__ as a parameter, which is a very nice, unique feature. The built-in caller function in Perl returns three values, one of which is the line from which the function was called:

my ($package, $filename, $line) = caller;
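
A minimal sketch of a logme built on this (the subroutine name and message format are illustrative, not the author's exact implementation):

sub logme {
    my ( $severity, $message ) = @_;
    my ( undef, undef, $line ) = caller;       # package and file name are not needed here
    printf STDERR "[%s] line %d: %s\n", $severity, $line, $message;
}

my $d = '/var/adm/logs';
( -d $d ) || logme( 'S', "The directory $d does not exist" );   # no __LINE__ needed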

For Python the standard logging module can report the line number itself. The following snippet, quoted from an answer that builds "on top of Seb's very useful answer", demonstrates the logger usage with a reasonable format:

#!/usr/bin/env python
import logging

logging.basicConfig(
    format='%(asctime)s,%(msecs)d %(levelname)-8s [%(filename)s:%(lineno)d] %(message)s',
    datefmt='%Y-%m-%d:%H:%M:%S',
    level=logging.DEBUG)

logger = logging.getLogger(__name__)
logger.debug("This is a debug log")
logger.info("This is an info log")
logger.critical("This is critical")
logger.error("An error occurred")

Generates this output:

2017-06-06:17:07:02,158 DEBUG [log.py:11] This is a debug log 
2017-06-06:17:07:02,158 INFO [log.py:12] This is an info log 
2017-06-06:17:07:02,158 CRITICAL [log.py:13] This is critical 
2017-06-06:17:07:02,158 ERROR [log.py:14] An error occurred

For Ruby, __LINE__ does the trick. The correct variable to get the line number is __LINE__, so a proper implementation of such a function would be the mylog snippet shown further below.

Debugging infrastructure should be a part of the program and needs to be well thought out

 
  • You cannot fix everything, even though you think you can.
  • You do not know everything, even though you think you do.
  • No two programmers agree on the same fix.
  • Your fix is always better than the one accomplished.
  • If you fix too much, you will be laid off.
  • Blaming others is always acceptable.
  • If you don't know what you are doing, read the manual for the rest of your day.
  • Asking for help means you're an idiot. Not asking for help means you're an idiot.

If bugs are a fact of any program's life, then debugging is a part of the life cycle of the program that continues until the program is finally discarded. That means we need to make an effort to make it efficient. In many cases it is deeply wrong to remove the debugging code after the program supposedly reaches production quality (if a macroprocessor is available for the particular language, a special mode of compilation can be used in which those fragments of source are not compiled).
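
As a minimal sketch of such built-in debugging infrastructure (the variable and subroutine names are illustrative assumptions), a DEBUG switch can both raise verbosity and convert dangerous external commands into print statements, which also gives you the "command generation mode" mentioned earlier:

my $DEBUG = $ENV{DEBUG} // 0;          # 0 = production, anything else = debug/generation mode

sub run_cmd {
    my ($cmd) = @_;
    if ($DEBUG) {
        print "WOULD RUN: $cmd\n";     # generation mode: inspect or edit the command first
        return 0;
    }
    my $rc = system($cmd) >> 8;        # always check the return code of external commands
    warn "Command '$cmd' returned RC=$rc\n" if $rc;
    return $rc;
}

run_cmd("rm -rf /tmp/scratch.$$");     # harmless in debug mode, real in production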

In other words, defensive programming presupposes adding debugging infrastructure into the program and also creating an external testing infrastructure ("compliance testing"). That also comes from compiler writing, with such early examples as the Perl testing infrastructure (which was a breakthrough at the time of its creation, like Perl itself), in which each new change to the compiler is verified via a battery of predefined tests. Of course this is easier said than done, so the amount of effort in this direction should be calibrated to the importance of the particular program or script.

def mylog(str)
  puts "#{__FILE__}:#{__LINE__}:#{str}"
end

Generalization of some sanity checks and creation of subroutines/methods for performing them

Some sanity checks are easily generalizable. Among them:

That allows you to program those checks once and use them in many of your programs; a couple of such checks are sketched below.
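
A minimal sketch of the idea (the subroutine names and the particular checks are illustrative assumptions; failures are reported through a message routine such as the logmes sketch above):

# Reusable sanity checks, written once and shared across scripts
sub assert_readable_file {
    my ( $path, $line ) = @_;
    ( -f $path && -r _ ) or logmes( $line, 'T', "File '$path' is missing or unreadable" );
}

sub assert_in_range {
    my ( $value, $min, $max, $line ) = @_;
    ( $value >= $min && $value <= $max )
        or logmes( $line, 'S', "Value $value is outside the range [$min..$max]" );
}

my $retention_days = 30;
assert_readable_file( '/etc/fstab', __LINE__ );
assert_in_range( $retention_days, 1, 365, __LINE__ );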

 



NEWS CONTENTS

Old News ;-)

[Jun 07, 2021] What is your tale of lasagna code? (Code with too many layers)

Highly recommended!
Notable quotes:
"... unncessary thick lasagne ..."
"... I think there's a very pervasive mentality of "I must ..."
Jun 07, 2021 | dev.to

The working assumption should be "Nobody, including myself, will ever reuse this code". It is a very realistic assumption, as programmers are notoriously reluctant to reuse code from somebody else. And as your programming skills evolve, your old code will look pretty foreign to you.

"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)." - Roberto Waltman

This week on our show we discuss this quote. Does OOP encourage too many layers in code?

I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of abstraction. I wrote about this before in the false abstraction antipattern

So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us?

Bertil Muth, Dec 9 '18:

I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unncessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution.

Nested Software, Dec 9 '18 (edited on Dec 16):

I think there's a very pervasive mentality of "I must to use these tools, design patterns, etc." instead of "I need to solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a fundamental problem in software development...

Nested Software, Dec 9 '18

I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always a better fit for re-using code.

[Jun 06, 2021] Lasagna Code by lispian

Notable quotes:
"... Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to maintain code all in the name of "clarity". ..."
"... Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint. Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance. ..."
Jan 01, 2011 | www.pixelstech.net

Anyone who claims to be even remotely versed in computer science knows what "spaghetti code" is. That type of code still sadly exists. But today we also have, for lack of a better term (and sticking to the pasta metaphor), "lasagna code".

Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to maintain code all in the name of "clarity". It drives me nuts to see how bad some code today is. And then you come across how small Turbo Pascal v3 was, and after comprehending it was a full-blown Pascal compiler, one wonders why applications and compilers today are all so massive.

Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint. Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance.

Back when I was starting out in computer science I thought by today we'd be writing a few lines of code to accomplish much. Instead, we write hundreds of thousands of lines of code to accomplish little. It's so sad it's enough to make one cry, or just throw your hands in the air in disgust and walk away.

There are bright spots. There are people out there that code small and beautifully. But they're becoming rarer, especially when someone who seemed to have thrived on writing elegant, small, beautiful code recently passed away. Dennis Ritchie understood you could write small programs that did a lot. He comprehended that the algorithm is at the core of what you're trying to accomplish. Create something beautiful and well thought out and people will examine it forever, such as Thompson's version of Regular Expressions !

... ... ...

Source: http://lispian.net/2011/11/01/lasagna-code/

[Jun 02, 2021] Simplicity is the core of a good infrastructure by Steve Webb

Dec 04, 2011 | www.badcheese.com

I've seen many infrastructures in my day. I work for a company with a very complicated infrastructure now. They've got a dev/stage/prod environment for every product (and they've got many of them). Trust is not a word spoken lightly here. There is no 'trust' for even sysadmins (I've been working here for 7 months now and still don't have production sudo access). Developers constantly complain about not having the access that they need to do their jobs and there are multiple failures a week that can only be fixed by a small handful of people that know the (very complex) systems in place. Not only that, but in order to save work, they've used every cutting-edge piece of software that they can get their hands on (mainly to learn it so they can put it on their resume, I assume), but this causes more complexity that only a handful of people can manage. As a result of this the site uptime is (on a good month) 3 nines at best.

In my last position (pronto.com) I put together an infrastructure that any idiot could maintain. I used unmanaged switches behind a load-balancer/firewall and a few VPNs around to the different sites. It was simple. It had very little complexity, and a new sysadmin could take over in a very short time if I were to be hit by a bus. A single person could run the network and servers and if the documentation was lost, a new sysadmin could figure it out without much trouble.

Over time, I handed off my ownership of many of the Infrastructure components to other people in the operations group and of course, complexity took over. We ended up with a multi-tier network with bunches of VLANs and complexity that could only be understood with charts, documentation and a CCNA. Now the team is 4+ people and if something happens, people run around like chickens with their heads cut off not knowing what to do or who to contact when something goes wrong.

Complexity kills productivity. Security is inversely proportionate to usability. Keep it simple, stupid. These are all rules to live by in my book.

Downtimes: Beatport: not unlikely to have 1-2 hours downtime for the main site per month.

Pronto: several 10-15 minute outages a year
Pronto (under my supervision): a few seconds a month (mostly human error though, no mechanical failure)

[Jun 02, 2021] The System Standards Stockholm Syndrome

John Waclawsky (from Cisco's mobile solutions group), coined the term S4 for "Systems Standards Stockholm Syndrome" - like hostages becoming attached to their captors, systems standard participants become wedded to the process of setting standards for the sake of standards.
It looks like the paper has disappeared, but there is a book by this author: QoS: Myths and Hype, eBook by John G. Waclawsky (ISBN 9781452463964, Rakuten Kobo).
Notable quotes:
"... The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking. ..."
"... What causes S4? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment. ..."
Jul 22, 2005 | hxr.us

grumpOps

Fri Jul 22 13:56:52 EDT 2005
Category [ Internet Politics ]

This was sent to me by a colleague. From "S4 -- The System Standards Stockholm Syndrome" by John G. Waclawsky, Ph.D.:

The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking.

Read the whole thing over at BCR .

And while this particularly picks on the ITU types, it should hit close to home to a whole host of other "endeavors".

IMS & Stockholm Syndrome - Light Reading

12:45 PM -- While we flood you with IMS-related content this week, perhaps it's sensible to share some airtime with a clever warning about being held "captive" to the hype.

This warning comes from John G. Waclawsky, PhD, senior technical staff, Wireless Group, Cisco Systems Inc. (Nasdaq: CSCO). Waclawsky, writing in the July issue of Business Communications Review , compares the fervor over IMS to the " Stockholm Syndrome ," a term that comes from a 1973 hostage event in which hostages became sympathetic to their captors.

Waclawsky says a form of the Stockholm Syndrome has taken root in technical standards groups, which he calls "System Standards Stockholm Syndrome," or S4.

Here's a snippet from Waclawsky's column:

What causes S4? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment.

The full article can be found here -- R. Scott Raynovich, US Editor, Light Reading

VoIP and ENUM

Sunday, August 07, 2005 S4 - The Systems Standards Stockholm Syndrome John Waclawsky, part of the Mobile Wireless Group at Cisco Systems, features an interesting article in the July 2005 issue of the Business Communications Review on The Systems Standards Stockholm Syndrome. Since his responsibilities include standards activities (WiMAX, IETF, OMA, 3GPP and TISPAN), identification of product requirements and the definition of mobile wireless and broadband architectures, he seems to know very well what he is talking about, namely the IP Multimedia Subsytem (IMS). See also his article in the June 2005 issue on IMS 101 - What You Need To Know Now .

See also the Wikedpedia glossary from Martin below:

IMS. Internet Monetisation System . A minor adjustment to Internet Protocol to add a "price" field to packet headers. Earlier versions referred to Innovation Minimisation System . This usage is now deprecated. (Expected release Q2 2012, not available in all markets, check with your service provider in case of sudden loss of unmediated connectivity.)
It is so true that I have to cite it completely (bold emphasis added):

The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S 4 ) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking.

Although the original name derives from a 1973 hostage incident in Stockholm, Sweden, the expanded name and its acronym, S 4 , applies specifically to systems standards participants who suffer repeated exposure to cult dogma contained in working group documents and plenary presentations. By the end of a week in captivity, Stockholm Syndrome victims may resist rescue attempts, and afterwards refuse to testify against their captors. In system standards settings, S4 victims have been known to resist innovation and even refuse to compete against their competitors.

Recent incidents involving too much system standards attendance have resulted in people being captured by radical ITU-like factions known as the 3GPP or 3GPP2.

I have to add of course ETSI TISPAN and it seems that the syndrome is also spreading into IETF, especially to SIP and SIPPING.

The victims evolve to unwitting accomplices of the group as they become immune to the frustration of slow plodding progress, thrive on complexity and slowly turn a blind eye to innovative ideas. When released, they continue to support their captors in filtering out disruptive innovation, and have been known to even assist in the creation and perpetuation of bureaucracy.

Years after intervention and detoxification, they often regret their system standards involvement. Today, I am afraid that S 4 cases occur regularly at system standards organizations.

What causes S 4 ? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment.

It's important to note that these symptoms occur under tremendous emotional and/or physical duress due to lack of sleep and abusive travel schedules. Victims of S 4 often report the application of other classic "cult programming" techniques, including:

  1. The encouraged ingestion of mind-altering substances. Under the influence of alcohol, complex systems standards can seem simpler and almost rational.
  2. "Love-fests" in which victims are surrounded by cultists who feign an interest in them and their ideas. For example, "We'd love you to tell us how the Internet would solve this problem!"
  3. Peer pressure. Professional, well-dressed individuals with standing in the systems standards bureaucracy often become more attractive to the captive than the casual sorts commonly seen at IETF meetings.

Back in their home environments, S 4 victims may justify continuing their bureaucratic behavior, often rationalizing and defending their system standard tormentors, even to the extent of projecting undesirable system standard attributes onto component standards bodies. For example, some have been heard murmuring, " The IETF is no picnic and even more bureaucratic than 3GPP or the ITU, " or, "The IEEE is hugely political." (For more serious discussion of component and system standards models, see " Closed Architectures, Closed Systems And Closed Minds ," BCR, October 2004.)

On a serious note, the ITU's IMS (IP Multimedia Subsystem) shows every sign of becoming the latest example of systems standards groupthink. Its concepts are more than seven years old and still not deployed, while its release train lengthens with functional expansions and change requests. Even a cursory inspection of the IMS architecture reveals the complexity that results from:

  1. decomposing every device into its most granular functions and linkages; and
  2. tracking and controlling every user's behavior and related billing.

The proliferation of boxes and protocols, and the state management required for data tracking and control, lead to cognitive overload but little end user value.

It is remarkable that engineers who attend system standards bodies and use modern Internet- and Ethernet-based tools don't apply to their work some of the simplicity learned from years of Internet and Ethernet success: to build only what is good enough, and as simply as possible.

Now here I have to break in: I think the syndrome is also spreading to the IETF, because the IETF is starting to leave these principles behind - especially in SIP and SIPPING, not to mention Session Border Confuser (SBC).

The lengthy and detailed effort that characterizes systems standards sometimes produces a bit of success, as the 18 years of GSM development (1980 to 1998) demonstrate. Yet such successes are highly optimized, very complex and thus difficult to upgrade, modify and extend.

Email is a great example. More than 15 years of popular email usage have passed, and today email on wireless is just beginning to approach significant usage by ordinary people.

The IMS is being hyped as a way to reduce the difficulty of integrating new services, when in fact it may do just the opposite. IMS could well inhibit new services integration due to its complexity and related impacts on cost, scalability, reliability, OAM, etc.

Not to mention the sad S 4 effects on all those engineers participating in IMS-related standards efforts.

Here the Wikedpedia glossary from Martin Geddes ( Telepocalypse ), already quoted above, fits in very well.

[May 03, 2021] What is your tale of lasagna code? (Code with too many layers)

Notable quotes:
"... I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution. ..."
May 03, 2021 | dev.to

mortoray (edA-qa mort-ora-y), Dec 8, 2018 · 1 min read

“In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)." - Roberto Waltman

This week on our show we discuss this quote. Does OOP encourage too many layers in code?

#14 Spaghetti OOPs (Edaqa & Stephane Podcast)

I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of abstraction. I wrote about this before in the false abstraction antipattern

So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us?

Eljay-Adobe, Dec 8 '18

Shrek: Object-oriented programs are like onions.
Donkey: They stink?
Shrek: Yes. No.
Donkey: Oh, they make you cry.
Shrek: No.
Donkey: Oh, you leave em out in the sun, they get all brown, start sproutin' little white hairs.
Shrek: No. Layers. Onions have layers. Object-oriented programs have layers. Onions have layers. You get it? They both have layers.
Donkey: Oh, they both have layers. Oh. You know, not everybody like onions.

Unrelated, but I love both spaghetti and lasagna 😋

Bertil Muth Dec 9 '18

I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution.

Nested Software Dec 9 '18 Edited on Dec 16

I think there's a very pervasive mentality of "I must to use these tools, design patterns, etc." instead of "I need to solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a fundamental problem in software development...

Nested Software Dec 9 '18

I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always a better fit for re-using code.
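To make the point about composition concrete, here is a minimal sketch: the service holds a logger object instead of inheriting from a logger class (all names below are invented for illustration).

class FileLogger:
    def __init__(self, path):
        self.path = path

    def log(self, msg):
        with open(self.path, "a") as f:
            f.write(msg + "\n")

class BackupService:
    def __init__(self, logger):
        self.logger = logger            # composed ("has-a"), not inherited ("is-a")

    def run(self):
        self.logger.log("backup started")
        # ... the actual work would go here ...
        self.logger.log("backup finished")

BackupService(FileLogger("/tmp/backup.log")).run()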

mort Dec 9 '18

Inheritance is my preferred option for things that model type hierarchies. For example, widgets in a UI, or literal types in a compiler.

One reason inheritance is over-used is because languages don't offer enough options to do composition correctly. It ends up becoming a lot of boilerplate code. Proper support for mixins would go a long way to reducing bad inheritance.

Marcell Lipp Dec 8 '18

It is always up to the task. For small programs of course you don't need so many layers, interfaces and so on. For a bigger, more complex one you need it to avoid a lot of issues: code duplications, unreadable code, constant merge conflicts etc.

JSn1nj Dec 8 '18

So build layers only as needed. I would agree with that.

Nathan Graule Dec 8 '18

I'm building a personal project as a means to get something from zero to production for learning purposes, and I am struggling with wiring the front-end with the back. Either I dump all the code in the fetch callback or I use DTOs, two sets of interfaces to describe API data structure and internal data structure... It's a mess really, but I haven't found a good level of compromise.

Nick Cinger Dec 9 '18

Thanks for sharing your thoughts!

It's interesting, because a project that gets burned by spaghetti can drift into lasagna code to overcompensate. Still bad, but lasagna code is somewhat more manageable (just a huge headache to reason about).

But having an ungodly combination of those two... I dare not think about it. shudder

Nick Cinger Dec 9 '18

Sidenote before I finish listening: I appreciate that I can minimize the browser on mobile and have this keep playing, unlike with other apps (looking at you, YouTube).

Xander Dec 11 '18

Do not build solutions for problems you do not have.

At some point you need to add something because it makes sense. Until it makes sense, STICK WITH THE SPAGHETTI!!

[May 03, 2021] Spaghetti, lasagna and raviolli code

The pasta theory is a theory of programming. It is a common analogy for application development describing different programming structures as popular pasta dishes. Pasta theory highlights the shortcomings of the code. These analogies include spaghetti, lasagna and ravioli code.
May 03, 2021 | georgik.rocks

Code smells or anti-patterns are a common classification of source code quality. There is also classification based on food which you can find on Wikipedia.

Spaghetti code

Spaghetti code is a pejorative term for source code that has a complex and tangled control structure, especially one using many GOTOs, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow tends to look like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, including inexperienced programmers and a complex program which has been continuously modified over a long life cycle. Structured programming greatly decreased the incidence of spaghetti code.

Ravioli code

Ravioli code is a type of computer program structure, characterized by a number of small and (ideally) loosely-coupled software components. The term is in comparison with spaghetti code, comparing program structure to pasta; with ravioli (small pasta pouches containing cheese, meat, or vegetables) being analogous to objects (which ideally are encapsulated modules consisting of both code and data).

Lasagna code

Lasagna code is a type of program structure, characterized by several well-defined and separable layers, where each layer of code accesses services in the layers below through well-defined interfaces. The term is in comparison with spaghetti code, comparing program structure to pasta.

Spaghetti with meatballs

The term "spaghetti with meatballs" is a pejorative term used in computer science to describe loosely constructed object-oriented programming (OOP) that remains dependent on procedural code. It may be the result of a system whose development has transitioned over a long life-cycle, language constraints, micro-optimization theatre, or a lack of coherent coding standards.

Do you know about other interesting source code classification?


[Apr 22, 2021] Technical Evaluations- 6 questions to ask yourself - Enable Sysadmin

Notable quotes:
"... [ You might also like: Six deployment steps for Linux services and their related tools ] ..."
"... [ A free guide from Red Hat: 5 steps to automate your business . ] ..."
Apr 22, 2021 | www.redhat.com

When introducing a new tool, programming language, or dependency into your environment, what steps do you take to evaluate it? In this article, I will walk through a six-question framework I use to make these determinations.

What problem am I trying to solve?

We all get caught up in the minutiae of the immediate problem at hand. An honest, critical assessment helps divulge broader root causes and prevents micro-optimizations.

[ You might also like: Six deployment steps for Linux services and their related tools ]

Let's say you are experiencing issues with your configuration management system. Day-to-day operational tasks are taking longer than they should, and working with the language is difficult. A new configuration management system might alleviate these concerns, but make sure to take a broader look at this system's context. Maybe switching from virtual machines to immutable containers eases these issues and more across your environment while being an equivalent amount of work. At this point, you should explore the feasibility of more comprehensive solutions as well. You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter.

This intellectual exercise helps you drill down to the root causes and solve core issues, not the symptoms of larger problems. This is not always going to be possible, but be intentional about making this decision.

Does this tool solve that problem?

Now that we have identified the problem, it is time for critical evaluation of both ourselves and the selected tool.

A particular technology might seem appealing because it is new, because you read a cool blog post about it, or because you want to be the one giving a conference talk. Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question.

What am I giving up?

The tool will, in fact, solve the problem, and we know we're solving the right problem, but what are the tradeoffs?

These considerations can be purely technical. Will the lack of observability tooling prevent efficient debugging in production? Does the closed-source nature of this tool make it more difficult to track down subtle bugs? Is managing yet another dependency worth the operational benefits of using this tool?

Additionally, include the larger organizational, business, and legal contexts that you operate under.

Are you giving up control of a critical business workflow to a third-party vendor? If that vendor doubles their API cost, is that something that your organization can afford and is willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of proprietary information? Does the software licensing make this difficult to use commercially?

While not simple questions to answer, taking the time to evaluate this upfront will save you a lot of pain later on.

Is the project or vendor healthy?

This question comes with the addendum "for the balance of your requirements." If you only need a tool to get your team over a four to six-month hump until Project X is complete, this question becomes less important. If this is a multi-year commitment and the tool drives a critical business workflow, this is a concern.

When going through this step, make use of all available resources. If the solution is open source, look through the commit history, mailing lists, and forum discussions about that software. Does the community seem to communicate effectively and work well together, or are there obvious rifts between community members? If part of what you are purchasing is a support contract, use that support during the proof-of-concept phase. Does it live up to your expectations? Is the quality of support worth the cost?

Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as well. Something might hit the front page of a news aggregator and receive attention for a few days, but a deeper look might reveal that only a couple of core developers are actually working on a project, and they've had difficulty finding outside contributions. Maybe a tool is open source, but a corporate-funded team drives core development, and support will likely cease if that organization abandons the project. Perhaps the API has changed every six months, causing a lot of pain for folks who have adopted earlier versions.

What are the risks?

As a technologist, you understand that nothing ever goes as planned. Networks go down, drives fail, servers reboot, rows in the data center lose power, entire AWS regions become inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.

Ask yourself how this tooling could fail and what the impact would be. If you are adding a security vendor product to your CI/CD pipeline, what happens if the vendor goes down?


This brings up both technical and business considerations. Do the CI/CD pipelines simply time out because they can't reach the vendor, or do you have it "fail open" and allow the pipeline to complete with a warning? This is a technical problem but ultimately a business decision. Are you willing to go to production with a change that has bypassed the security scanning in this scenario?

Obviously, this task becomes more difficult as we increase the complexity of the system. Thankfully, sites like k8s.af consolidate example outage scenarios. These public postmortems are very helpful for understanding how a piece of software can fail and how to plan for that scenario.

What are the costs?

The primary considerations here are employee time and, if applicable, vendor cost. Is that SaaS app cheaper than more headcount? If you save each developer on the team two hours a day with that new CI/CD tool, does it pay for itself over the next fiscal year?

Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations.

[ A free guide from Red Hat: 5 steps to automate your business . ]

Wrap up

I hope you've found this framework insightful, and I encourage you to incorporate it into your own decision-making processes. There is no one-size-fits-all framework that works for every decision. Don't forget that, sometimes, you might need to go with your gut and make a judgment call. However, having a standardized process like this will help differentiate between those times when you can critically analyze a decision and when you need to make that leap.

[Sep 30, 2020] Object-Oriented Programming is Garbage- 3800 SLOC example - YouTube

Sep 30, 2020 | www.youtube.com

xcelina , 4 years ago

Awesome video, I loved watching it. In my experience, there are many situations where, like you pointed out, procedural style makes things easier and prevents you from overthinking and overgeneralizing the problem you are trying to tackle. However, in some cases, object-oriented programming removes unnecessary conditions and switches that make your code harder to read. Especially in complex game engines where you deal with a bunch of objects which interact in diverse ways to the environment, other objects and the physics engine. In a procedural style, a program like this would become an unmanageable clutter of flags, variables and switch-statements. Therefore, the statement "Object-Oriented Programming is Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers can use - and just like you would not use pliers to get a nail into a wall, you should not force yourself to use object-oriented programming to solve every problem at hand. Instead, you use it when it is appropriate and necessary. Nevertheless, i would like to hear how you would realize such a complex program. Maybe I'm wrong and procedural programming is the best solution in any case - but right now, I think you need to differentiate situations which require a procedural style from those that require an object-oriented style.

MarquisDeSang , 3 years ago

I have been brainwashed with c++ for 20 years. I have recently switched to ANSI C and my mind is now free. Not only I feel free to create design that are more efficient and elegant, but I feel in control of what I do.

Gm3dco , 3 months ago

You make a lot of very solid points. In your refactoring of the Mapper interface to a type-switch though: what is the point of still using a declared interface here? If you are disregarding extensibility (which would require adding to the internal type switch, rather than conforming a possible new struct to an interface) anyway, why not just make Mapper of type interface{} and add a (failing) default case to your switch?

Marvin Blum , 4 years ago

I recommend to install the Gosublime extension, so your code gets formatted on save and you can use autocompletion. But looks good enough. But I disagree with large functions. Small ones are just easier to understand and test.


Lucid Moses, 4 years ago

Being the lead designer of a larger app (2m lines of code as of 3 years ago). I like to say we use C+. Because C++ breaks down in the real world. I'm happy to use encapsulation when it fits well. But developers that use OO just for OO-ness sake get their hands slapped. So in our app small classes like PhoneNumber and SIN make sense. Large classes like UserInterface also work nicely (we talk to specialty hardware like forklifts and such). So, it may be all coded in C++ but basic C developers wouldn't have too much of an issue with most of it. I don't think OO is garbage. It's just a lot of people use it in inappropriate ways. When all you have is a hammer, everything looks like a nail. So if you use OO on everything then you sometimes end up with garbage.


TekkGnostic, 4 years ago (edited)

Loving the series. The hardest part of actually becoming an efficient programmer is unlearning all the OOP brainwashing. It can be useful for high-level structuring so I've been starting with C++ then reducing everything into procedural functions and tightly-packed data structs. Just by doing that I reduced static memory use and compiled program size at least 10-15%+ (which is a lot when you only have 32kb.) And holy damn, nearly 20 years of C and I never knew you could nest a function within a function, I had to try that right away.


RyuDarragh, 4 years ago

I have a design for a networked audio platform that goes into large buildings (over 11 stories) and can have 250 networked nodes (it uses an E1 style robbed bit networking system) and 65K addressable points (we implemented 1024 of them for individual control by grouping them). This system ties to a fire panel at one end with a microphone and speakers at the other end. You can manually select any combination of points to page to, or the fire panel can select zones to send alarm messages to. It works in real time with 50mS built in delays and has access to 12 audio channels. What really puts the frosting on this cake is, the CPU is an i8051 running at 18MHz and the code is a bit over 200K bytes that took close to 800K lines of code. In assembler. And it took less than a Year from concept to first installation. By one designer/coder. The only OOP in this code was when an infinite loop happened or a bug crept in - "OOPs!"


Y HA, 1 month ago

For many cases OOP has a heavy overhead. But as I learned the hard way, in many others it can save a huge deal of time and being more practical.


LedoCool1, 1 year ago (edited)

There's a way of declaring subfunctions in C++ (idk if works in C). I saw it done by my friend. General idea is to declare a struct inside which a function can be declared. Since you can declare structs inside functions, you can safely use it as a wrapper for your function-inside-function declaration. This has been done in MSVC but I believe it will compile in gcc too.

[Sep 29, 2020] Handmade Hero - Getting rid of the OOP mindset - YouTube

Sep 29, 2020 | www.youtube.com

Thoughts Feeder , 3 months ago

"Is pixel an object or a group of objects? Is there a container? Do I have to ask a factory to get me a color?" I literally died there... that's literally the best description of my programming for the last 5 years.


Karan Joisher, 2 years ago

It's really sad that we are only taught OOP and no other paradigms in our college, when I discovered programming I had no idea about OOP and it was really easy to build programs, bt then I came across OOP:"how to deconstruct a problem statement into nouns for objects and verbs for methods" and it really messed up my thinking, I have been struggling for a long time on how to organize my code on the conceptual level, only recently I realized that OOP is the reason for this struggle, handmadehero helped alot to bring me back to the roots of how programming is done, remember never push OOP into areas where it is not needed, u don't have to model ur program as real world entities cause it's not going to run on real world, it's going to run on CPU!


Ai, 2 years ago

Learned C# first and that was a huge mistake. Programming got all exciting when I learned C

Esben Olsen , 10 months ago

I made a game 4 years ago. Then I learned OOP and now I haven't finished any projects since


theb1rd, 5 months ago (edited)

I lost an entire decade to OOP, and agree with everything Casey said here. The code I wrote in my first year as a programmer (before OOP) was better than the code I wrote in my 15th year (OOP expert). It's a shame that students are still indoctrinated into this regressive model.


John Appleseed, 2 years ago

Unfortunately, when I first started programming, I encountered nothing but tutorials that jumped right into OOP like it was the only way to program. And of course I didn't know any better! So much friction has been removed from my process since I've broken free from that state of mind. It's easier to judge when objects are appropriate when you don't think they're always appropriate!


judged by time, 1 year ago

"It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the fundamental particle of computing that some people want it to be. When blindly applied to problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet there's often an aesthetic insistence on objects for everything all the way down. That's too bad, because it makes it harder to identify the cases where an object-oriented style truly results in an overall simplicity and ease of understanding." - https://prog21.dadgum.com/156.html


Chen Huang, 3 years ago

The first language I was taught was Java, so I was taught OOP from the get go. Removing the OOP mindset was actually really easy, but what was left stuck in my head is the practice of having small functions and make your code look artificially "clean". So I am in a constant struggle of refactoring and not refactoring, knowing that over-refactoring will unnecessarily complicate my codebase if it gets big. Even after removing my OOP mindset, my emphasis is still on the code itself, and that is much harder to cure in comparison.

judged by time , 1 year ago

"I want to emphasize that the problem with object-oriented programming is not the concept that there could be an object. The problem with it is the fact that you're orienting your program, the thinking, around the object, not the function. So it's the orientation that's bad about it, NOT whether you end up with an object. And it's a really important distinction to understand."


joseph fatur, 2 years ago

Nicely stated, HH. On youtube, MPJ, Brian Will, and Jonathan Blow also address this matter. OOP sucks and can be largely avoided. Even "reuse" is overdone. Straightline probably results in faster execution but slightly greater memory use. But memory is cheap and the resultant code is much easier to follow. Learn a little assembly language. X86 is fascinating and you'll know what the computer is actually doing.


Hao Wu, 1 year ago

I think schools should teach at least 3 languages / paradigms, C for Procedural, Java for OOP, and Scheme (or any Lisp-style languages) for Functional paradigms.


J. Bradley Bulsterbaum, 10 months ago

It sounds to me like you're describing JavaScript framework programming that people learn to start from. It hasn't seemed to me like object-oriented programmers who aren't doing web stuff have any problem directly describing an algorithm and then translating it into imperative or functional or just direct instructions for a computer. it's quite possible to use object-oriented languages or languages that support object-oriented stuff to directly command a computer.

bbkane , 5 months ago (edited)

I dunno man. Object oriented programming can (sometimes badly) solve real problems - notably polymorphism. For example, if you have a Dog and a Cat sprite and they both have a move method. The "non-OO" way Casey does this is using tagged unions - and that was not an obvious solution when I first saw it. Quite glad I watched that episode though, it's very interesting! Also see this tweet thread from Casey - https://twitter.com/cmuratori/status/1187262806313160704
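A minimal sketch of the tagged-union approach mentioned in this comment (Dog, Cat and move come from the comment itself; the fields and behavior below are invented): one record type carries an explicit tag, and a single function switches on that tag instead of each class carrying its own move method.

from dataclasses import dataclass

@dataclass
class Sprite:
    kind: str          # "dog" or "cat" -- the tag
    x: float = 0.0
    y: float = 0.0

def move(s, dx, dy):
    # One function with an explicit switch on the tag, instead of Dog.move()/Cat.move().
    if s.kind == "dog":
        s.x += dx
        s.y += dy
    elif s.kind == "cat":
        s.x += dx * 2      # illustrative difference in behavior
        s.y += dy
    else:
        raise ValueError("unknown sprite kind: %r" % s.kind)

pets = [Sprite("dog"), Sprite("cat")]
for p in pets:
    move(p, 1.0, 0.0)      # behavior is selected by the tag, not by virtual dispatch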

[Sep 29, 2020] https://en.wikipedia.org/wiki/List_of_object-oriented_programming_languages

Sep 29, 2020 | en.wikipedia.org

Geovane Piccinin , PHP Programmer (2015-present) Answered November 23, 2018

My deepest feeling after crossing so many discussions and books about this is a sincere YES.

Without entering into any technical details about it, because even after some years I don't find myself qualified to talk about this (is there someone who really understands it completely?), I would argue that the main problem is that every time I read something about OOP it is trying to justify why it is "so good".

Then, a huge amount of examples are shown, many arguments, and many expectations are created.

It is not stated simply like this: "Oh, this is another programming paradigm." It is usually stated that: "This is a fantastic paradigm, it is better, it is simpler, it permits so many interesting things... it is this, it is that..." and so on.

What happens is that, based on the “good” arguments, it creates some expectation that things produced with OOP should be very good. But, no one really knows if they are doing it right. They say: the problem is not the paradigm, it is you that are not experienced yet. When will I be experienced enough?

Are you following me? My feeling is that the common place of saying it is so good at the same time you never know how good you are actually being makes all of us very frustrated and confuse.

Yes, it is a great paradigm since you see it just as another paradigm and drop all the expectations and excessive claiming that it is so good.

It seems to me that the great problem is the huge propaganda around it, not the paradigm itself. Again, if it had a more humble claim about its advantages and how difficult it is to achieve them, people would be much less frustrated.

Sourav Datta , A programmer trying find the ultimate source code of life. Answered August 6, 2015 · Author has 145 answers and 292K answer views

In recent years, OOP is indeed being regarded as an overrated paradigm by many. If we look at the most recent famous languages like Go and Rust, they do not have the traditional OO approaches in language design. Instead, they choose to pack data into something akin to structs in C and provide ways to specify "protocols" (similar to interfaces/abstract methods) which can work on those packed data...

[Sep 29, 2020] Is Object Oriented Programming over rated - Another view ! by Chris Boss

Apr 20, 2013 | cwsof.com

The last decade has seen object oriented programming (OOP) dominate the programming world. While there is no doubt that there are benefits of OOP, some programmers question whether OOP has been over rated and ponder whether alternate styles of coding are worth pursuing. To even suggest that OOP has in some way failed to produce the quality software we all desire could in some instances cost a programmer his job, so why even ask the question ?

Quality software is the goal.

Likely all programmers can agree that we all want to produce quality software. We would like to be able to produce software faster, make it more reliable and improve its performance. So with such goals in mind, shouldn't we be willing to at least consider all possibilities ? Also it is reasonable to conclude that no single tool can match all situations. For example, while few programmers today would even consider using assembler, there are times when low level coding such as assembler could be warranted. The old adage applies "the right tool for the job". So it is fair to pose the question, "Has OOP been over used to the point of trying to make it some kind of universal tool, even when it may not fit a job very well ?"

Others are asking the same question.

I won't go into detail about what others have said about object oriented programming, but I will simply post some links to some interesting comments by others about OOP.

Richard Mansfield

http://www.4js.com/files/documents/products/genero/WhitePaperHasOOPFailed.pdf

Intel Blog: by Asaf Shelly

http://software.intel.com/en-us/blogs/2008/08/22/flaws-of-object-oriented-modeling/

Usenix article: by Stephen C. Johnson (Melismatic Software)

http://static.usenix.org/publications/library/proceedings/sf94/johnson.html

Department of Computer. Science and IT, University of Jammu

http://www.csjournals.com/IJCSC/PDF1-2/9..pdf

An aspect which may be overlooked.

I have watched a number of videos online and read a number of articles by programmers about different concepts in programming. When OOP is discussed they talk about things like modeling the real world, abstractions, etc. But two things are often missing in such discussions, which I will discuss here. These two aspects greatly affect programming, but may not be discussed.

First is, what is programming really ? Programming is a method of using some kind of human readable language to generate machine code (or scripts eventually read by machine code) so one can make a computer do a task. Looking back at all the years I have been programming, the most profound thing I have ever learned about programming was machine language. Seeing what a CPU is actually doing with our programs provides a great deal of insight. It helps one understand why integer arithmetic is so much faster than floating point. It helps one understand what graphics is really all about (simply the moving around a lot of pixels or blocks of four bytes). It helps one understand what a procedure really must do to have parameters passed. It helps one understand why a string is simply a block of bytes (or double bytes for unicode). It helps one understand why we use bytes so much and what bit flags are and what pointers are.

When one looks at OOP from the perspective of machine code and all the work a compiler must do to convert things like classes and objects into something the machine can work with, then one very quickly begins to see that OOP adds significant overhead to an application. Also if a programmer comes from a background of working with assembler, where keeping things simple is critical to writing maintainable code, one may wonder if OOP is improving coding or making it more complicated.

Second, is the often said rule of "keep it simple". This applies to programming. Consider classic Visual Basic. One of the reasons it was so popular was that it was so simple compared to other languages, say C for example. I know what is involved in writing a pure old fashioned WIN32 application using the Windows API and it is not simple, nor is it intuitive. Visual Basic took much of that complexity and made it simple. Now Visual Basic was sort of OOP based, but actually mostly in the GUI command set. One could actually write all the rest of the code using purely procedural style code and likely many did just that. I would venture to say that when Visual Basic went the way of dot.net, it left behind many programmers who simply wanted to keep it simple. Not that they were poor programmers who didn't want to learn something new, but that they knew the value of simple and taking that away took away a core aspect of their programming mindset.

Another aspect of simple is also seen in the syntax of some programming languages. For example, BASIC has stood the test of time and continues to be the language of choice for many hobby programmers. If you don't think that BASIC is still alive and well, take a look at this extensive list of different BASIC programming languages.

http://basic.mindteq.com/index.php?i=full

While some of these BASICs are object oriented, many of them are also procedural in nature. But the key here is simplicity. Natural readable code.

Simple and low level can work together.

Now consider this. What happens when you combine a simple language with the power of machine language ? You get something very powerful. For example, I write some very complex code using purely procedural style coding, using BASIC, but you may be surprised that my appreciation for machine language (or assembler) also comes to the fore. For example, I use the BASIC language GOTO and GOSUB. How some would cringe to hear this. But these constructs are native to machine language and very useful, so when used properly they are powerful even in a high level language. Another example is that I like to use pointers a lot. Oh how powerful pointers are. In BASIC I can create variable length strings (which are simply a block of bytes) and I can embed complex structures into those strings by using pointers. In BASIC I use the DIM AT command, which allows me to dimension an array of any fixed data type or structure within a block of memory, which in this case happens to be a string.

Appreciating machine code also affects my view of performance. Every CPU cycle counts. This is one reason I use BASICs GOSUB command. It allows me to write some reusable code within a procedure, without the need to call an external routine and pass parameters. The performance improvement is significant. Performance also affects how I tackle a problem. While I want code to be simple, I also want it to run as fast as possible, so amazingly some of the best performance tips have to do with keeping code simple, with minimal overhead and also understanding what the machine code must accomplish to do with what I have written in a higher level language. For example in BASIC I have a number of options for the SELECT CASE structure. One option can optimize the code using jump tables (compiler handles this), one option can optimize if the values are only Integers or DWords. But even then the compiler can only do so much. What happens if a large SELECT CASE has to compare dozens and dozens of string constants to a variable length string being tested ? If this code is part of a parser, then it really can slow things down. I had this problem in a scripting language I created for an OpenGL based 3D custom control. The 3D scripting language is text based and has to be interpreted to generate 3D OpenGL calls internally. I didn't want the scripting language to bog things down. So what would I do ?

The solution was simple and appreciating how the compiled machine code would have to compare so many bytes in so many string constants, one quickly realized that the compiler alone could not solve this. I had to think like I was an assembler programmer, but still use a high level language. The solution was so simple, it was surprising. I could use a pointer to read the first byte of the string being parsed. Since the first character would always be a letter in the scripting language, this meant there were 26 possible outcomes. The SELECT CASE simply tested for the first character value (converted to a number) which would execute fast. Then for each letter (A, B, C, ...) I would only compare the parsed word to the scripting language keywords which started with that letter. This in essence improved speed by 26 fold (or better).

The fastest solutions are often very simple to code. No complex classes needed here. Just a simple procedure to read through a text string using the simplest logic I could find. The procedure is a little more complex than what I describe, but this is the core logic of the routine.
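The same trick can be sketched in a few lines (the keyword list below is invented; the point is the dispatch on the first character, which restricts comparisons to a small bucket):

KEYWORDS = ["begin", "box", "camera", "color", "light", "line", "render", "rotate"]

# Bucket the keywords by their first letter once, up front.
BUCKETS = {}
for kw in KEYWORDS:
    BUCKETS.setdefault(kw[0], []).append(kw)

def is_keyword(word):
    # One lookup on the first character, then a scan of only that small bucket.
    return word in BUCKETS.get(word[:1], [])

print(is_keyword("rotate"))   # True
print(is_keyword("random"))   # False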

From experience, I have found that a purely procedural style of coding, using a language which is natural and simple (BASIC), while using constructs of the language which are closer to pure machine (or assembler) in the language produces smaller and faster applications which are also easier to maintain.

Now I am not saying that all OOP is bad. Nor am I saying that OOP never has a place in programming. What I am saying though is that it is worth considering the possibility that OOP is not always the best solution and that there are other choices.

Here are some of my other blog articles which may interest you if this one interested you:

Classic Visual Basic's end marked a key change in software development.

http://cwsof.com/blog/?p=608

Is software development too complex today ?

http://cwsof.com/blog/?p=579

BASIC, OOP and Learning programming in the 21st century !

http://cwsof.com/blog/?p=252

Why BASIC ?

http://cwsof.com/blog/?p=171

Reliable Software !

http://cwsof.com/blog/?p=148

Maybe a shift in software development is required ?

http://cwsof.com/blog/?p=134

Stop being a programmer for a moment !

http://cwsof.com/blog/?p=36

[Sep 29, 2020] OOP is Overrated

Sep 29, 2020 | beinghappyprogramming.wordpress.com

Posted on January 26, 2013 by silviomarcovilla

Yes it is. For application code at least, I'm pretty sure.
Not claiming any originality here, people smarter than me already noticed this fact ages ago.

Also, don't misunderstand me, I'm not saying that OOP is bad. It probably is the best variant of procedural programming.
Maybe the term OOP is overused to describe anything that ends up in OO systems.
Things like VMs, garbage collection, type safety, modules, generics or declarative queries (Linq) are a given, but they are not inherently object oriented.
I think these things (and others) are more relevant than the classic three principles.

Inheritance
Current advice is usually prefer composition over inheritance . I totally agree.

Polymorphism
This is very, very important. Polymorphism cannot be ignored, but you don't write lots of polymorphic methods in application code. You implement the occasional interface, but not every day.
Mostly you use them.
Because polymorphism is what you need to write reusable components, much less to use them.

Encapsulation
Encapsulation is tricky. Again, if you ship reusable components, then method-level access modifiers make a lot of sense. But if you work on application code, such fine grained encapsulation can be overkill. You don't want to struggle over the choice between internal and public for that fantastic method that will only ever be called once. Except in test code maybe. Hiding all implementation details in private members while retaining nice simple tests can be very difficult and not worth the troulbe. (InternalsVisibleTo being the least trouble, abstruse mock objects bigger trouble and Reflection-in-tests Armageddon).
Nice, simple unit tests are just more important than encapsulation for application code, so hello public!

So, my point is, if most programmers work on applications, and application code is not very OO, why do we always talk about inheritance at the job interview? 🙂

PS
If you think about it, C# hasn't been pure object oriented since the beginning (think delegates) and its evolution is a trajectory from OOP to something else, something multiparadigm.

[Sep 18, 2020] Global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill.

Notable quotes:
"... Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill. ..."
Nov 22, 2019 | stackoverflow.com

Peter Mortensen, Mar 4 '17 at 22:00

If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are global. You don't have to use it in all cases (as someone here incorrectly claims) - if the name referenced in an expression cannot be found in local scope or scopes in the functions in which this function is defined, it is looked up among global variables.

However, if you assign to a new variable not declared as global in the function, it is implicitly declared as local, and it can overshadow any existing global variable with the same name.

Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill.
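A short example of the behavior described above (variable and function names are arbitrary):

counter = 0                # module-level (global) variable

def bump_local():
    counter = 1            # assignment creates a new local name that shadows the global
    return counter

def bump_global():
    global counter         # declare that we mean the module-level variable
    counter += 1
    return counter

print(bump_local())        # 1
print(counter)             # still 0 -- the global was not modified
print(bump_global())       # 1
print(counter)             # now 1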

J S, Jan 8 '09

Absolutely re. zealots. Most Python users use it for scripting and create little functions to separate out small bits of code. – Paul Uszak Sep 22 at 22:57

[Sep 09, 2020] Object-oriented programming - Wikipedia

Sep 09, 2020 | en.wikipedia.org

Criticism

The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity, [36] [37] and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms). [38] [39]

Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. [36] The latter point is reiterated by Joe Armstrong , the principal inventor of Erlang , who is quoted as saying: [37]

The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches. [40]

Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP; [41] however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind of customizable type system to support RDBMS . [42]

In an article Lawrence Krubner claimed that compared to other languages (LISP dialects, functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden of unneeded complexity. [43]

Alexander Stepanov compares object orientation unfavourably to generic programming : [38]

I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras -- families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting -- saying that everything is an object is saying nothing at all.

Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage". [44]

Leo Brodie has suggested a connection between the standalone nature of objects and a tendency to duplicate code [45] in violation of the don't repeat yourself principle [46] of software development.

Steve Yegge noted that, as opposed to functional programming : [47]

Object Oriented Programming puts the Nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective.

Rich Hickey , creator of Clojure , described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent. [39]

Eric S. Raymond , a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. [48] Raymond compares this unfavourably to the approach taken with Unix and the C programming language . [48]

Rob Pike , a programmer involved in the creation of UTF-8 and Go , has called object-oriented programming "the Roman numerals of computing" [49] and has said that OOP languages frequently shift the focus from data structures and algorithms to types . [50] Furthermore, he cites an instance of a Java professor whose "idiomatic" solution to a problem was to create six new classes, rather than to simply use a lookup table . [51]

[Sep 09, 2020] Goodbye, Object Oriented Programming - by Charles Scalfani - Medium

Sep 09, 2020 | medium.com

The Reference Problem

For efficiency's sake, Objects are passed to functions NOT by their value but by reference.

What that means is that functions will not pass the Object, but instead pass a reference or pointer to the Object.

If an Object is passed by reference to an Object Constructor, the constructor can put that Object reference in a private variable which is protected by Encapsulation.

But the passed Object is NOT safe!

Why not? Because some other piece of code has a pointer to the Object, viz. the code that called the Constructor. It MUST have a reference to the Object; otherwise it couldn't have passed it to the Constructor.

The Reference Solution

The Constructor will have to Clone the passed in Object. And not a shallow clone but a deep clone, i.e. every object that is contained in the passed in Object and every object in those objects and so on and so on.

So much for efficiency.

And here's the kicker. Not all objects can be Cloned. Some have Operating System resources associated with them making cloning useless at best or at worst impossible.

And EVERY single mainstream OO language has this problem.

Goodbye, Encapsulation.
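A minimal Python sketch of the reference problem and the deep-clone workaround described above; the class and field names are made up for illustration:

    import copy

    class Engine:
        def __init__(self, horsepower):
            self.horsepower = horsepower

    class CarNaive:
        def __init__(self, engine):
            # Keeps the caller's reference: the caller can still mutate
            # 'engine' later and silently change this object's state.
            self.engine = engine

    class CarDefensive:
        def __init__(self, engine):
            # Deep clone to restore encapsulation, at the cost of copying
            # the whole object graph (and only if everything in that graph
            # can actually be copied).
            self.engine = copy.deepcopy(engine)

    e = Engine(100)
    naive = CarNaive(e)
    safe = CarDefensive(e)
    e.horsepower = 500                  # outside code mutates the shared object
    print(naive.engine.horsepower)      # 500: encapsulation is broken
    print(safe.engine.horsepower)       # 100: isolated, but at a price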

[Jul 02, 2020] My 20-Year Experience of Software Development Methodologies by Ian Miell

Oct 15, 2017 | zwischenzugs.com

Sapiens and Collective Fictions

Recently I read Sapiens: A Brief History of Humankind by Yuval Harari. The basic thesis of the book is that humans require 'collective fictions' so that we can collaborate in larger numbers than the 150 or so our brains are big enough to cope with by default. Collective fictions are things that don't describe solid objects in the real world we can see and touch. Things like religions, nationalism, liberal democracy, or Popperian falsifiability in science. Things that don't exist, but when we act like they do, we easily forget that they don't.

Collective Fictions in IT – Waterfall

This got me thinking about some of the things that bother me today about the world of software engineering. When I started in software 20 years ago, God was waterfall. I joined a consultancy (ca. 400 people) that wrote very long specs which were honed to within an inch of their life, down to the individual Java classes and attributes. These specs were submitted to the customer (God knows what they made of it), who signed it off. This was then built, delivered, and monies were received soon after. Life was simpler then and everyone was happy.

Except there were gaps in the story – customers complained that the spec didn't match the delivery, and often the product delivered would not match the spec, as 'things' changed while the project went on. In other words, the waterfall process was a 'collective fiction' that gave us enough stability and coherence to collaborate, get something out of the door, and get paid.

This consultancy went out of business soon after I joined. No conclusions can be drawn from this.

Collective Fictions in IT – Startups ca. 2000

I got a job at another software development company that had a niche with lots of work in the pipe. I was employee #39. There was no waterfall. In fact, there was nothing in the way of methodology I could see at all. Specs were agreed with a phone call. Design, prototype and build were indistinguishable. In fact it felt like total chaos; it was against all of the precepts of my training. There was more work than we could handle, and we got on with it.

The fact was, we were small enough not to need a collective fiction we had to name. Relationships and facts could be kept in our heads, and if you needed help, you literally called out to the room. The tone was like this, basically:

Of course there were collective fictions, we just didn't name them:

We got slightly bigger, and customers started asking us what our software methodology was. We guessed it wasn't acceptable to say 'we just write the code' (legend had it our C-based application server – blazingly fast and still in use – was written before my time in a fit of pique, with a stash of amphetamines, over a weekend).

Turns out there was this thing called 'Rapid Application Development' that emphasized prototyping. We told customers we did RAD, and they seemed happy, as it was A Thing. It sounded to me like 'hacking', but to be honest I'm not sure anyone among us really properly understood it or read up on it.

As a collective fiction it worked, because it kept customers off our backs while we wrote the software.

Soon we doubled in size, moved out of our cramped little office into a much bigger one with bigger desks, and multiple floors. You couldn't shout out your question to the room anymore. Teams got bigger, and these things called 'project managers' started appearing everywhere talking about 'specs' and 'requirements gathering'. We tried and failed to rewrite our entire platform from scratch.

Yes, we were back to waterfall again, but this time the working cycles were faster and smaller, and we had the same problems of changing requirements and disputes with customers as before. So was it waterfall? We didn't really know.

Collective Fictions in IT – Agile

I started hearing the word 'Agile' about 2003. Again, I don't think I properly read up on it ever, actually. I got snippets here and there from various websites I visited and occasionally from customers or evangelists that talked about it. When I quizzed people who claimed to know about it their explanations almost invariably lost coherence quickly. The few that really had read up on it seemed incapable of actually dealing with the very real pressures we faced when delivering software to non-sprint-friendly customers, timescales, and blockers. So we carried on delivering software with our specs, and some sprinkling of agile terminology. Meetings were called 'scrums' now, but otherwise it felt very similar to what went on before.

As a collective fiction it worked, because it kept customers and project managers off our backs while we wrote the software.

Since then I've worked in a company that grew to 700 people, and now work in a corporation of 100K+ employees, but the pattern is essentially the same: which incantation of the liturgy will satisfy this congregation before me?

Don't You Believe?

I'm not going to beat up on any of these paradigms, because what's the point? If software methodologies didn't exist we'd have to invent them, because how else would we work together effectively? You need these fictions in order to function at scale. It's no coincidence that the Agile paradigm has such a quasi-religious hold over a workforce that is immensely fluid and mobile. (If you want to know what I really think about software development methodologies, read this because it lays it out much better than I ever could.)

One of many interesting arguments in Sapiens is that because these collective fictions can't adequately explain the world, and often conflict with each other, the interesting parts of a culture are those where these tensions are felt. Often, humour derives from these tensions.

'The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.' F. Scott Fitzgerald

I don't know about you, but I often feel this tension when discussion of Agile goes beyond a small team. When I'm told in a motivational poster written by someone I've never met and who knows nothing about my job that I should 'obliterate my blockers', and those blockers are both external and non-negotiable, what else can I do but laugh at it?

How can you be agile when there are blockers outside your control at every turn? Infrastructure, audit, security, financial planning, financial structures all militate against the ability to quickly deliver meaningful iterations of products. And who is the customer here, anyway? We're talking about the square of despair:

When I see diagrams like this representing Agile I can only respond with black humour shared with my colleagues, like kids giggling at the back of a church.

Within a smaller and well-functioning team, the totems of Agile often fly out of the window and what you're left with (when it's good) is a team that trusts each other, is open about its trials, and has a clear structure (formal or informal) in which agreement and solutions can be found and co-operation is productive. Google recently articulated this (reported briefly here, and more in-depth here).

So Why Not Tell It Like It Is?

You might think the answer is to come up with a new methodology that's better. It's not like we haven't tried:

It's just not that easy, like the book says:

'Telling effective stories is not easy. The difficulty lies not in telling the story, but in convincing everyone else to believe it. Much of history revolves around this question: how does one convince millions of people to believe particular stories about gods, or nations, or limited liability companies? Yet when it succeeds, it gives Sapiens immense power, because it enables millions of strangers to cooperate and work towards common goals. Just try to imagine how difficult it would have been to create states, or churches, or legal systems if we could speak only about things that really exist, such as rivers, trees and lions.'

Let's rephrase that:

'Coming up with useful software methodologies is not easy. The difficulty lies not in defining them, but in convincing others to follow them. Much of the history of software development revolves around this question: how does one convince engineers to believe particular stories about the effectiveness of requirements gathering, story points, burndown charts or backlog grooming? Yet when adopted, it gives organisations immense power, because it enables distributed teams to cooperate and work towards delivery. Just try to imagine how difficult it would have been to create Microsoft, Google, or IBM if we could only speak about specific technical challenges.'

Anyway, does the world need more methodologies? It's not like some very smart people haven't already thought about this.

Acceptance

So I'm cool with it. Lean, Agile, Waterfall, whatever, the fact is we need some kind of common ideology to co-operate in large numbers. None of them are evil, so it's not like you're picking racism over socialism or something. Whichever one you pick is not going to reflect the reality, but if you expect perfection you will be disappointed. And watch yourself for unspoken or unarticulated collective fictions. Your life is full of them. Like that your opinion is important. I can't resist quoting this passage from Sapiens about our relationship with wheat:

'The body of Homo sapiens had not evolved for [farming wheat]. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias. Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us. The word 'domesticate' comes from the Latin domus, which means 'house'. Who's the one living in a house? Not the wheat. It's the Sapiens.'

Maybe we're not here to direct the code, but the code is directing us. Who's the one compromising reason and logic to grow code? Not the code. It's the Sapiens.


If you liked this, you may want to look at my book Learn Bash the Hard Way , available at $5 :

Also currently co-authoring Second Edition of a book on Docker: Get 39% off with the code 39miell2


60 thoughts on "My 20-Year Experience of Software Development Methodologies"

  1. Pingback: My 20-Year Experience of Software Development Methodologies | ExtendTree
  2. gregjor October 15, 2017 at 11:28 am

    Great article, matches my experience. And thanks for the link and compliment on my article. Reply

    1. zwischenzugs October 15, 2017 at 1:07 pm

      Wow, that was yours? Have toted that article around for years. Pleasure to finally meet you! Reply

  3. primogatto October 15, 2017 at 1:04 pm

    "And watch yourself for unspoken or unarticulated collective fictions. Your life is full of them."

    Agree completely.

    As for software development methodologies, I personally think that with a few tweaks the waterfall methodology could work quite well. The key changes I'd suggest would help is to introduce developer guidance at the planning stage, including timeboxed explorations of the feasibility of the proposals, as well as aiming for specs to outline business requirements rather than dictating how they should be implemented. Reply

    1. pheeque October 15, 2017 at 6:19 pm

      And then there were 16 competing standards. Reply

  4. Neel October 15, 2017 at 5:30 pm

    wonderful Reply

  5. Rob Lang October 15, 2017 at 9:15 pm

    A very entertaining article! I have a similar experience and outlook. I've not tried LEAN. I once heard a senior developer say that methodologies were just a stick with which to beat developers. This was largely in the case of clients who agree to engage in whatever process when amongst business people and then are absent at grooming, demos, releases, feedback meetings and so on. When the software is delivered at progressively short notice, it's always the developer that has to carry the burden of ensuring quality, feeling keenly responsible for the work they do (the conscientious ones anyway). Then non-technical management hide behind the process, and the failure to have the client fully engaged is quickly forgotten.

    It reminds me (I'm rambling now, sorry) of factory workers in the 80s complaining about working conditions and the management nodding and smiling while doing nothing to rectify the situation and doomed to repeat the same error. Except now the workers are intelligent and will walk, taking their business knowledge and skill set with them. Reply

  6. Mike Will October 16, 2017 at 1:36 am

    Very enjoyable. I had a stab at the small sub-trail of 'syntonicity' here: http://www.scidata.ca/?p=895
    Syntonicity is Stuart Watt's term which he probably got from Seymour Papert.

    Of course, this may all become moot soon as our robot overlords take their place at the keyboard. Reply

  7. joskid October 16, 2017 at 7:23 am

    Reblogged this on josephdung . Reply

  8. otomato October 16, 2017 at 8:31 am

    A great article! I was very much inspired by Yuval's book myself. So much that I wrote a post about DevOps being a collective fiction : http://otomato.link/devops-is-a-myth/
    Basically same ideas as yours but from a different angle. Reply

  9. Roger October 16, 2017 at 5:24 pm

    Fantastic article – I wonder what the next fashionable methodology will be? Reply

  10. Pingback: Evolving Software Development | CR 279 | Jupiter Broadcasting
  11. Rafiqunnabi Nayan October 17, 2017 at 5:31 am

    A great article. Thanks a lot for writing. Reply

  12. Follow Blog Widget - Support - WordPress.com October 17, 2017 at 6:47 am

    This site truly has all the information I needed about this subject and didn't
    know who to ask. Reply

  13. Pingback: Five Blogs – 18 October 2017 – 5blogs
  14. Pingback: Weekly Links #83 – Useful Links For Developers
  15. Anthony Kesterton October 22, 2017 at 3:16 pm

    Brilliant – well said Ian!

    I think part of the "need" for methodology is the desire for a common terminology. However, if everyone has their own view of what these terms mean, then it all starts to go horribly wrong. The focus quickly becomes adhering to the methodology rather than getting the work done. Reply

  16. Pingback: Die KW 42/2017 im Link-Rückblick | artodeto's blog about coding, politics and the world
  17. Pingback: programming reading notes | Electronics DIY
  18. Steve Naidamast October 23, 2017 at 1:15 pm

    A very well-written article. I retired from corporate development in 2014 but am still developing my own projects. I have written on this very subject and these pieces have been published as well.

    The idea that the Waterfall technique for development was the only one in use as we go back towards the earlier years is a myth that has been built up by the folks promoting the Agile technique, which for seniors like me has been just another word for what we used to call "guerrilla programming". In fact, if one were to review the standards of design in software engineering, there are 13 types of design techniques, all of which have been used at one time or another by many different companies successfully. Waterfall was just one of them and was only recommended for very large projects.

    The author is correct to conclude by implication that the best technique for design and implementation is the RAD technique promoted by Stephen McConnell of Construx, together with a team that can work well with each other. His book, still in its first edition since 1996, is considered the Bible for software development and describes every aspect of software engineering one could require. His point, however, is that his book is only suggested as a guide from which engineers can pick what they really need for the development of their projects; not hard standards. Nonetheless, McConnell stresses the need for good specifications and risk management, the latter of which, if neglected, almost always causes a project to fail or to produce less than satisfactory results. His work is proven by over 35 years of research. Reply

  19. Mike October 23, 2017 at 1:39 pm

    Hilarious and oh so true. Remember the first time you were being taught Agile and they told you that the stakeholders would take responsibility for their role and decisions? What a hoot! Seriously, I guess they did use to write detailed specs, but in my twenty-some years, I've just been thrilled if I had a business analyst that knew roughly what they wanted. Reply

  20. Kurt Guntheroth October 23, 2017 at 4:16 pm

    OK, here's a collective fiction for you. "Methodologies don't work. They don't reflect reality. They are just something we tell customers because they are appalled when we admit that our software is developed in a chaotic and unprofessional manner." This fiction serves those people who already don't like process, and gives them excuses.
    We do things the same way over and over for a reason. We have traffic lights because it reduces congestion and reduces traffic fatalities. We make cakes using a recipe because we like it when the result is consistently pleasing. So too with software methodologies.
    Like cake recipes, not all software methodologies are equally good at producing a consistently good result. This fact alone should tell you that there is something of value in the best ones. While there may be a very few software chefs who can whip up a perfect result every time, the vast bulk of developers need a recipe to follow or the results are predictably bad.
    Your diatribe against process does the community a disservice. Reply

  21. Doug October 24, 2017 at 5:34 am

    I have arrived at the conclusion that any and all methodologies would work – IF (and it's a big one), everyone managed to arrive at a place where they considered the benefit of others before themselves. And, perhaps, they all used the same approach.

    For me, it comes down to character rather than anything else. I can learn the skills or trade a chore with someone else.

    Software developers; the ones who create "new stuff", by definition, have no roadmap. They have experience, good judgment, the ability to 'survive in the wild', are always wanting to "see what is over there" and trust, as was noted is key. And there are varying levels of developer. Some want to build the roads; others use the roads built for them and some want to survey for the road yet to be built. None of these are wrong – or right.

    The various methodology fights are like arguing over what side of the road to drive on, how to spell colour and color. Just pick one, get over yourself and help your partner(s) become successful.

    Ah, right. Where do the various methodologies resolve greed, envy, distrust, selfishness, stepping on others for personal gain, and all of the other REAL killers of success, again?

    I have seen great teams succeed and far too many fail. Those that have failed more often than not did so for character-related issues rather than technical ones. Reply

  22. Pingback: into #SoftwareDevelopment ? this is a good read https://zwischenzugs.wordpress.com/2017/10/15/my-20-year-experience-of-software-development-methodologies/
  23. Morten Damsgaard-madsen October 24, 2017 at 7:32 am

    One of the best articles I have read in a long time about – well everything :-). Reply

  24. Pingback: Java Weekly, Issue 199 | Baeldung
  25. Pingback: My 20-Year Experience of Software Development Methodologies | beloschuk
  26. Pingback: 테스트메일 | simple note
  27. Ben Hayden November 7, 2017 at 1:36 pm

    Before there exists any success, a methodology must freeze a definition for roles, as well as process. Unless there exist sufficient numbers and specifications of roles, and appropriate numbers of sapiens to hold those roles, then the one on the end becomes overburdened and triggers systemic failure.

    There has never been a sufficiently-complex methodology that could encompass every field, duty, and responsibility in a software development task. (This is one of the reasons "chaos" is successful. At least it accepts the natural order of things, and works within the interstitial spaces of a thousand objects moving at once.)

    We even lie to ourselves when we name what we're doing: Methodology. It sounds so official, so logical, so orderly. That's a myth. It's just a way of pushing the responsibility down from the most powerful to the least powerful -- every time.

    For every "methodology," who is the caboose on the end of this authority train? The "coder."

    The tighter the role definitions become in any methodology, the more actual responsibilities cascade down to the "coder." If the specs conflict, who raises his hand and asks the question? If a deadline is unreasonable, who complains? If a technique is unusable in a situation, who brings that up?

    The person is obviously the "coder." And what happens when the coder asks this question?

    In one methodology the "coder" is told to stop production and raise the issue with the manager who will talk to the analyst who will talk to the client who will complain that his instructions were clear and it all falls back to the "coder" who, obviously, was too dim to understand the 1,200 pages of specifications the analyst handed him.

    In another, the "coder" is told, "you just work it out." And the concomitant chaos renders the project unstable.

    In another, the "coder" is told "just do what you're told." And the result is incompatible with the rest of the project.

    I've stopped "coding" for these reasons and because everybody is happy with the myth of programming process because they aren't the caboose. Reply

    1. Kurt Guntheroth November 7, 2017 at 4:29 pm

      I was going to make fun of this post for being whiny and defeatist. But the more I thought about it, the more I realized it contained a big nugget of truth. A lot of methodologies, as practiced, have the purpose of putting off risk onto the developers, of fixing responsibility on developers so the managers aren't responsible for any of the things that can go wrong with projects. Reply

  28. Pingback: Organizing Teams With Collective Fictions | Hackaday
  29. Pingback: Organizing Teams With Collective Fictions – High Tech Newz
  30. Pingback: Organizing Teams With Collective Fictions – LorePop
  31. Pingback: Seven Hypothesis of German Tech Culture and Challenging the Status Quo – @Virtual_Patrick
  32. Pingback: My 20-Year Experience of Software Development Methodologies – InnovateStartup
  33. Pingback: Interesting Links for 04-12-2017 | Made from Truth and Lies
  34. Pingback: My 20-Year Trip of Gadget Trend Methodologies | A1A
  35. William (Bill) Meade December 4, 2017 at 2:27 pm

    A pleasure to read. Gödel incompleteness in software? Development environments are nothing if not formalisms. :-) Reply

  36. Pingback: My 20-Year Experience of Software Development Methodologies – Demo
  37. Scott Armit (@smarmit) December 4, 2017 at 4:32 pm

    Really enjoyable and matches my 20+ years in the industry. Thank you. Reply

  38. dinkarshastri December 4, 2017 at 5:44 pm

    Reblogged this on High output engineering . Reply

  39. Pedro Liska December 6, 2017 at 4:14 pm

    Great article! I have experienced the same regarding software methodologies. And at a greater level, thank you for introducing me to the concept of collective fictions; it makes so much sense. I will be reading Sapiens. Reply

  40. Pingback: The 20 MB hard drive; 3.5 billion Reddit comments; and much more - Intertech Blog
  41. Alex Staveley December 8, 2017 at 5:33 pm

    Actually, come to think of it, there are two types of Software Engineers who take process very seriously. One is acutely aware of software entropy and wants to proactively fight against it, because they want to engineer to a high standard and don't like working the weekend. So they want things organised. Then there's another type who can come across as being a bit dogmatic. Maybe your links with collective delusions help explain some of the human psychology here. Reply

  42. Pingback: My 20-Year Experience of Software Development Methodologies – zwischenzugs | A Place Like This
  43. Pingback: Newsletter 40 | import digest
  44. Pingback: Interesting articles Jan-Mar 2018 – ProgBlog
  45. Frank Thun February 11, 2018 at 10:31 am

    Great article. Here is one I did about Agile Management Systems, which are trying to lay the managerial foundations for "Agile". Or should I say to liberate organisations? None of the systems help if a fool is using the tool, though.
    https://managementdigital.net/2017/06/30/holacracy-liberation-and-management-3-0/ Reply

  46. Pingback: Five Things I Did to Change a Team's Culture – zwischenzugs
  47. Pingback: Things I Learned Managing Site Reliability for Some of the World's Busiest Gambling Sites – zwischenzugs
  48. Cara Mudah Memblokir Situs dengan MikroTik June 2, 2018 at 4:02 pm

    Mumtaz, i like this so much Reply

  49. Pingback: Personal experiences with agile: 16 comments, pictures and a video about practically applying agile - stratejos blog
  50. Praxent July 24, 2018 at 2:49 pm

    really good site Reply

  51. Pingback: The software dev "process" | Joe Teibel
  52. Pingback: Why Are Enterprises So Slow? – zwischenzugs
  53. Kostas Chairopoulos (@khairop) November 17, 2018 at 8:54 am

    First of all this is a great article, very well written. A couple of remarks. Early in waterfall, the large business requirements documents didn't work for two reasons. First, there was no new business process; it was the same business process that had to be applied within a new technology (from mainframes to open unix systems, from ascii to RAD tools and 4GL languages). Second, many consultancy companies (mostly the big 4) were using "copy&paste" methods to fill these documents, submit the time and material forms for the consultants, increase the revenue and move on. Things have changed with the adoption of smartphones, etc.
    To reflect the author's idea: in my humble opinion, the collective fiction is the quality of work embedded into the whole development life cycle.
    Thanks
    Kostas Reply

  54. AriC December 8, 2018 at 3:40 pm

    Sorry, did you forget to finish the article? I don't see the conclusion providing the one true programming methodology that works in all occasions. What is the magic procedure? Thanks in advance. Reply

  55. Pingback: Notes on Books Read in 2018 – zwischenzugs
  56. Pingback: 'AWS vs K8s' is the new 'Windows vs Linux' – zwischenzugs
  57. Pingback: Notes on Books Read in 2019 – zwischenzugs

[May 27, 2020] Features Considered Harmful

Microsoft's EEE tactics, which can be redefined as "Steal; Add complexity and bloat; Trash the original", can be used against open source and, as the success of systemd has shown, can be a pretty successful strategy.
Notable quotes:
"... Free software acts like proprietary software when it treats the existence of alternatives as a problem to be solved. I personally never trust a project with developers as arrogant as that. ..."
May 27, 2020 | techrights.org

...it was developed along lines that are not entirely different from Microsoft's EEE tactics -- which today I will offer a new acronym and description for:

1. Steal
2. Add Bloat
3. Original Trashed

It's difficult conceptually to "steal" Free software, because it (sort of, effectively) belongs to everyone. It's not always Public Domain -- copyleft is meant to prevent that. The only way you can "steal" free software is by taking it from everyone and restricting it again. That's like "stealing" the ocean or the sky, and putting it somewhere that people can't get to it. But this is what non-free software does. (You could also simply go against the license terms, but I doubt Stallman would go for the word "stealing" or "theft" as a first choice to describe non-compliance).

... ... ...

Again and again, Microsoft "Steals" or "Steers" the development process itself so it can gain control (pronounced: "ownership") of the software. It is a gradual process, where Microsoft has more and more influence until they dominate the project and with it, the user. This is similar to the process where cults (or drug addiction) take over people's lives, and similar to the process where narcissists interfere in the lives of others -- by staking a claim and gradually dominating the person or project.

Then they Add Bloat -- more features. GitHub is friendly to use, you don't have to care about how Git works to use it (this is true of many GitHub clones as well, as even I do not really care how Git works very much. It took a long time for someone to even drag me towards GitHub for code hosting, until they were acquired and I stopped using it) and due to its GLOBAL size, nobody can or ought to reproduce its network effects.

I understand the draw of network effects. That's why larger federated instances of code hosts are going to be more popular than smaller instances. We really need a mix -- smaller instances to be easy to host and autonomous, larger instances to draw people away from even more gigantic code silos. We can't get away from network effects (just like the War on Drugs will never work) but we can make them easier and less troublesome (or safer) to deal with.

Finally, the Original is trashed, and the SABOTage is complete. This has happened with Python against Python 2, despite protests from seasoned and professional developers, it was deliberately attempted with Systemd against not just sysvinit but ALL alternatives -- Free software acts like proprietary software when it treats the existence of alternatives as a problem to be solved. I personally never trust a project with developers as arrogant as that.

... ... ...

There's a meme about creepy vans with "FREE CANDY" painted on the side, which I took one of the photos from and edited it so that it said "FEATURES" instead. This is more or less how I feel about new features in general, given my experience with their abuse in development, marketing and the takeover of formerly good software projects.

People then accuse me of being against features, of course. As with the Dijkstra article, the real problem isn't Basic itself. The problem isn't features per se (though they do play a very key role in this problem) and I'm not really against features -- or candy, for that matter.

I'm against these things being used as bait, to entrap people in an unpleasant situation that makes escape difficult. You know, "lock-in". Don't get in the van -- don't even go NEAR the van.

Candy is nice, and some features are nice too. But we would all be better off if we could get the candy safely, and delete the creepy horrible van that comes with it. That's true whether the creepy van is GitHub, or surveillance by GIAFAM, or a Leviathan "init" system, or just breaking decades of perfectly good Python code, to try to force people to develop differently because Google or Microsoft (who both have had heavy influence over newer Python development) want to try to force you to -- all while using "free" software.

If all that makes free software "free" is the license -- (yes, it's the primary and key part, it's a necessary ingredient) then putting "free" software on GitHub shouldn't be a problem, right? Not if you're running LibreJS, at least.

In practice, "Free in license only" ignores the fact that if software is effectively free, the user is also effectively free. If free software development gets dragged into doing the bidding of non-free software companies and starts creating lock-in for the user, even if it's external or peripheral, then they simply found an effective way around the true goal of the license. They did it with Tivoisation, so we know that it's possible. They've done this in a number of ways, and they're doing it now.

If people are trying to make the user less free, and they're effectively making the user less free, maybe the license isn't an effective monolithic solution. The cost of freedom is eternal vigilance. They never said "The cost of freedom is slapping a free license on things", as far as I know. (Of course it helps). This really isn't a straw man, so much as a rebuttal to the extremely glib take on software freedom in general that permeates development communities these days.

But the benefits of Free software, free candy and new features are all meaningless, if the user isn't in control.

Don't get in the van.

"The freedom to NOT run the software, to be free to avoid vendor lock-in through appropriate modularization/encapsulation and minimized dependencies; meaning any free software can be replaced with a user's preferred alternatives (freedom 4)." – Peter Boughton

... ... ...

[Dec 01, 2019] Academic Conformism is the road to 1984. - Sic Semper Tyrannis

Highly recommended!
Dec 01, 2019 | turcopolier.typepad.com

Academic Conformism is the road to "1984."

[Image: symptoms of groupthink (Janis)]

The world is filled with conformism and groupthink. Most people do not wish to think for themselves. Thinking for oneself is dangerous, requires effort and often leads to rejection by the herd of one's peers.

The profession of arms, the intelligence business, the civil service bureaucracy, the wondrous world of groups like the League of Women Voters, Rotary Club as well as the empire of the thinktanks are all rotten with this sickness, an illness which leads inevitably to stereotyped and unrealistic thinking, thinking that does not reflect reality.

The worst locus of this mentally crippling phenomenon is the world of the academics. I have served on a number of boards that awarded Ph.D and post doctoral grants. I was on the Fulbright Fellowship federal board. I was on the HF Guggenheim program and executive boards for a long time. Those are two examples of my exposure to the individual and collective academic minds.

As a class of people I find them unimpressive. The credentialing exercise in acquiring a doctorate is basically a nepotistic process of sucking up to elders and a crutch for ego support as well as an entrance ticket for various hierarchies, among them the world of the academy. The process of degree acquisition itself requires sponsorship by esteemed academics who recommend candidates who do not stray very far from the corpus of known work in whichever narrow field is involved. The endorsements from RESPECTED academics are often decisive in the award of grants.

This process is continued throughout a career in academic research. PEER REVIEW is the sine qua non for acceptance of a "paper," invitation to career making conferences, or to the Holy of Holies, TENURE.

This life experience forms and creates CONFORMISTS, people who instinctively boot-lick their fellows in a search for the "Good Doggy" moments that make up their lives. These people are for sale. Their price may not be money, but they are still for sale. They want to be accepted as members of their group. Dissent leads to expulsion or effective rejection from the group.

This mentality renders doubtful any assertion that a large group of academics supports any stated conclusion. As a species academics will say or do anything to be included in their caste.

This makes them inherently dangerous. They will support any party or parties, of any political inclination if that group has the money, and the potential or actual power to maintain the academics as a tribe. pl


doug , 01 December 2019 at 01:01 PM

Sir,

That is the nature of tribes and humans are very tribal. At least most of them. Fortunately, there are outliers. I was recently reading "Political Tribes" which was written by a couple who are both law professors that examines this.

Take global warming (aka the rebranded climate change). Good luck getting grants to do any skeptical research. This highly complex subject which posits human impact is a perfect example of tribal bias.

My success in the private sector comes from consistent questioning what I wanted to be true to prevent suboptimal design decisions.

I also instinctively dislike groups that have some idealized view of "What is to be done?"

As Groucho said: "I refuse to join any club that would have me as a member"

J , 01 December 2019 at 01:22 PM
Reminds one of the Borg, doesn't it?

The 'isms' had it, be it Nazism, Fascism, Communism, Totalitarianism, Elitism – all demand conformity and adherence to groupthink. If one does not kowtow to whichever 'ism' is at play, those outside their groupthink are persecuted, ostracized, jailed, and executed, all because they defy its conformity demands and defy allegiance to it.

One world, one religion, one government, one Borg. all lead down the same road to -- Orwell's 1984.

Factotum , 01 December 2019 at 03:18 PM
David Halberstam: The Best and the Brightest. (Reminder how the heck we got into Vietnam, when the best and the brightest were serving as presidential advisors.)

Also good Halberstam re-read: The Powers that Be - when the conservative media controlled the levers of power; not the uber-liberal one we experience today.

[Nov 26, 2019] OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on Rails was the convention-over-configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that Java was/is -- Qbertino ( 265505 )

Notable quotes:
"... In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm. ..."
"... In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules ..."
"... I get tired of the purists who think that OO is the only possible answer. The world is not a nail. ..."
Nov 15, 2019 | developers.slashdot.org
No, not really, don't think so. ( Score: 2 )

OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on Rails was the convention-over-configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that Java was/is. Every half-assed PHP toy project is kicking Java's ass on the web, and it's because WordPress gets the job done, fast, despite having a DB model that was built by non-programmers on crack.

Most critical processes are procedural, even today.

bradley13 ( 1118935 ) , Monday July 22, 2019 @01:15AM ( #58963622 ) Homepage

It depends... ( Score: 5 , Insightful)

There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They know OOP, so they think that every problem must be solved in an OOP way.

In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm.

In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules. For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What "object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative, static "generate answer" method makes at least as much sense.

There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO is the only possible answer. The world is not a nail.
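To illustrate the point, here is a small Python sketch of the two styles the commenter contrasts; 'Answer' and 'generate_answer' are invented names, not anything from a real system:

    # The "object happy" version: a class whose only job is to run once.
    class Answer:
        def __init__(self, inquiry):
            self.inquiry = inquiry

        def generate(self):
            return "You asked: " + repr(self.inquiry)

    # The imperative version: a plain function does the same work
    # without inventing a throwaway object.
    def generate_answer(inquiry):
        return "You asked: " + repr(inquiry)

    print(Answer("What time is it?").generate())
    print(generate_answer("What time is it?"))

Neither version is wrong; the point is that the class adds ceremony without adding capability when there is only ever one behaviour and one call site.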

[Nov 15, 2019] Your Code: OOP or POO?

Mar 02, 2007 | blog.codinghorror.com
I'm not a fan of object orientation for the sake of object orientation. Often the proper OO way of doing things ends up being a productivity tax . Sure, objects are the backbone of any modern programming language, but sometimes I can't help feeling that slavish adherence to objects is making my life a lot more difficult . I've always found inheritance hierarchies to be brittle and unstable , and then there's the massive object-relational divide to contend with. OO seems to bring at least as many problems to the table as it solves.

Perhaps Paul Graham summarized it best :

Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.

Eric Lippert observed a similar occupational hazard among developers. It's something he calls object happiness .

What I sometimes see when I interview people and review code is symptoms of a disease I call Object Happiness. Object Happy people feel the need to apply principles of OO design to small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual abstract base classes -- writing programs where IFoos talk to IBars but there is only one implementation of each interface! I suspect that early exposure to OO design principles divorced from any practical context that motivates those principles leads to object happiness. People come away as OO True Believers rather than OO pragmatists.

I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming is inherently bad, mind you, but a little OOP goes a very long way . Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more .

Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page . Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in French than it is in English: POO.

[Image: S.S. Adams gag fake dog poo 'Doggonit']

That's exactly what I've imagined when I had to work on code that abused objects.

But POO code can have another, more constructive, meaning. This blog author argues that OOP pales in importance to POO. Programming fOr Others , that is.

The problem is that programmers are taught all about how to write OO code, and how doing so will improve the maintainability of their code. And by "taught", I don't just mean "taken a class or two". I mean: have it pounded into their heads in school, spend years as professionals being mentored by senior OO "architects", and only then finally kind of understand how to use it properly, some of the time. Most engineers wouldn't consider using a non-OO language, even if it had amazing features. The hype is that major.

So what, then, about all that code programmers write before their 10 years OO apprenticeship is complete? Is it just doomed to suck? Of course not, as long as they apply other techniques than OO. These techniques are out there but aren't as widely discussed.

The improvement [I propose] has little to do with any specific programming technique. It's more a matter of empathy; in this case, empathy for the programmer who might have to use your code. The author of this code actually thought through what kinds of mistakes another programmer might make, and strove to make the computer tell the programmer what they did wrong.

In my experience the best code, like the best user interfaces, seems to magically anticipate what you want or need to do next. Yet it's discussed infrequently relative to OO. Maybe what's missing is a buzzword. So let's make one up, Programming fOr Others, or POO for short.

The principles of object oriented programming are far more important than mindlessly, robotically instantiating objects everywhere:

Stop worrying so much about the objects. Concentrate on satisfying the principles of object orientation rather than object-izing everything. And most of all, consider the poor sap who will have to read and support this code after you're done with it . That's why POO trumps OOP: programming as if people mattered will always be a more effective strategy than satisfying the architecture astronauts .

[Nov 15, 2019] Why do many people assume OOP is on the decline?

Nov 15, 2019 | www.quora.com

Daniel Korenblum, works at Bayes Impact. Updated May 25, 2015.

There are many reasons why non-OOP languages and paradigms/practices are on the rise, contributing to the relative decline of OOP.

First off, there are a few things about OOP that many people don't like, which makes them interested in learning and using other approaches. Below are some references from the OOP wiki article:

  1. Cardelli, Luca (1996). "Bad Engineering Properties of Object-Oriented Languages". ACM Comput. Surv. (ACM) 28 (4es): 150. doi:10.1145/242224.242415. ISSN 0360-0300. Retrieved 21 April 2010.
  2. Armstrong, Joe. In Coders at Work: Reflections on the Craft of Programming. Peter Seibel, ed. Codersatwork.com , Accessed 13 November 2009.
  3. Stepanov, Alexander. "STLport: An Interview with A. Stepanov". Retrieved 21 April 2010.
  4. Rich Hickey, JVM Languages Summit 2009 keynote, Are We There Yet? November 2009. (edited)
taken from:

Object-oriented programming

Also see this post and discussion on hackernews:

Object Oriented Programming is an expensive disaster which must end

One of the comments therein linked a few other good wikipedia articles which also provide relevant discussion on increasingly-popular alternatives to OOP:

  1. Modularity and design-by-contract are better implemented by module systems ( Standard ML )
  2. Encapsulation is better served by lexical scope ( http://en.wikipedia.org/wiki/Sco... )
  3. Data is better modelled by algebraic datatypes ( Algebraic data type )
  4. Type-checking is better performed structurally ( Structural type system )
  5. Polymorphism is better handled by first-class functions ( First-class function ) and parametricity ( Parametric polymorphism )

Personally, I sometimes think that OOP is a bit like an antique car. Sure, it has a bigger engine and fins and lots of chrome etc., it's fun to drive around, and it does look pretty. It is good for some applications, all kidding aside. The real question is not whether it's useful or not, but for how many projects?

When I'm done building an OOP application, it's like a large and elaborate structure. Changing the way objects are connected and organized can be hard, and the design choices of the past tend to become "frozen" or locked in place for all future times. Is this the best choice for every application? Probably not.

If you want to drive 500-5000 miles a week in a car that you can fix yourself without special ordering any parts, it's probably better to go with a Honda or something more easily adaptable than an antique vehicle-with-fins.

Finally, the best example is the growth of JavaScript as a language (officially called EcmaScript now?). Although JavaScript/EcmaScript (JS/ES) is not a pure functional programming language, it is much more "functional" than "OOP" in its design. JS/ES was the first mainstream language to promote the use of functional programming concepts such as higher-order functions, currying, and monads.

The recent growth of the JS/ES open-source community has not only been impressive in its extent but also unexpected from the standpoint of many established programmers. This is partly evidenced by the overwhelming number of active repositories on Github using JavaScript/EcmaScript:

Top Github Languages of 2014 (So far)

Because JS/ES treats both functions and objects as structs/hashes, it encourages us to blur the line dividing them in our minds. This is a division that many other languages impose - "there are functions and there are objects/variables, and they are different".

This seemingly minor (and often confusing) design choice enables a lot of flexibility and power. In part this seemingly tiny detail has enabled JS/ES to achieve its meteoric growth between 2005-2015.

This partially explains the rise of JS/ES and the corresponding relative decline of OOP. OOP had become a "standard" or "fixed" way of doing things for a while, and there will probably always be a time and place for OOP. But as programmers we should avoid getting too stuck in one way of thinking / doing things, because different applications may require different approaches.

Above and beyond the OOP-vs-non-OOP debate, one of our main goals as engineers should be custom-tailoring our designs by skillfully choosing the most appropriate programming paradigm(s) for each distinct type of application, in order to maximize the "bang for the buck" that our software provides.

Although this is something most engineers can agree on, we still have a long way to go until we reach some sort of consensus about how best to teach and hone these skills. This is not only a challenge for us as programmers today, but also a huge opportunity for the next generation of educators to create better guidelines and best practices than the current OOP-centric pedagogical system.

Here are a couple of good books that elaborates on these ideas and techniques in more detail. They are free-to-read online:

  1. https://leanpub.com/javascriptal...
  2. https://leanpub.com/javascript-s...
Mike MacHenry, software engineer, improv comedian, maker · Answered Feb 14, 2015 · Author has 286 answers and 513.7k answer views

Because the phrase itself was overhyped to an extraordinary degree. Then, as is common with overhyped things, many other things took on that phrase as a name. Then people got confused and stopped calling what they are doing OOP.

Yes, I think OOP (the phrase) is on the decline because people are becoming more educated about the topic.

It's like artificial intelligence, now that I think about it. There aren't many people these days who say they do AI to anyone but laymen. They would say they do machine learning or natural language processing or something else. These are fields that the vastly overhyped and really nebulous term AI used to describe, but then AI (the term) experienced a sharp decline while these very concrete fields continued to flourish.

[Nov 15, 2019] There is nothing inherently wrong with some of the functionality it offers; it's the way OOP is abused as a substitute for basic good programming practices

Nov 15, 2019 | developers.slashdot.org

spazmonkey ( 920425 ) , Monday July 22, 2019 @12:22AM ( #58963430 )

its the way OOP is taught ( Score: 5 , Interesting)

There is nothing inherently wrong with some of the functionality it offers; it's the way OOP is abused as a substitute for basic good programming practices.

I was helping interns - students from a local CC - deal with idiotic assignments like making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF?

A room full of career programmers could not even figure out how you were supposed to do that, much less why.

What was worse was a lack of understanding of basic programming skills, or even the use of variables, as the kids were being taught that EVERY program was to be assembled solely by sticking together bits of libraries.

There was no coding, just hunting for snippets of preexisting code to glue together. Zero idea they could add their own, much less how to do it. OOP isn't the problem; it's the idea that it replaces basic programming skills and best practice.

sjames ( 1099 ) , Monday July 22, 2019 @01:30AM ( #58963680 ) Homepage Journal

Re:its the way OOP is taught ( Score: 5 , Interesting)

That and the obsession with absofrackinglutely EVERYTHING just having to be a formally declared object, including the whole program being an object with a run() method.

Some things actually cry out to be objects, some not so much. Generally, I find that my most readable and maintainable code turns out to be a procedural program that manipulates objects.

Even there, some things just naturally want to be a struct or just an array of values.

The same is true of most ingenious ideas in programming. It's one thing if code is demonstrating a particular idea, but production code is supposed to be there to do work, not grind an academic ax.

For example, slavish adherence to "patterns". They're quite useful for thinking about code and talking about code, but they shouldn't be the end of the discussion. They work better as a starting point. Some programs seem to want patterns to be mixed and matched.

In reality those problems are just cargo cult programming one level higher.

I suspect a lot of that is because too many developers barely grasp programming and never learned to go beyond the patterns they were explicitly taught.

When all you have is a hammer, the whole world looks like a nail.
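The point above about structs and procedural code is worth a concrete illustration. Here is a minimal sketch in PHP (the language used for the code samples later on this page); the Point and centroid names are purely illustrative.

<?php

// A plain "struct-like" value: just data, with no behaviour to speak of.
class Point
{
    public float $x;
    public float $y;

    public function __construct(float $x, float $y)
    {
        $this->x = $x;
        $this->y = $y;
    }
}

// A procedural function that manipulates such objects.
function centroid(array $points): Point
{
    $sumX = 0.0;
    $sumY = 0.0;
    foreach ($points as $p) {
        $sumX += $p->x;
        $sumY += $p->y;
    }
    $n = count($points);
    return new Point($sumX / $n, $sumY / $n);
}

$c = centroid([new Point(0, 0), new Point(2, 0), new Point(1, 3)]);
printf("(%.2f, %.2f)\n", $c->x, $c->y); // (1.00, 1.00)

No inheritance hierarchy, no run() method, no patterns; the data stays simple and the logic lives in an ordinary function.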

[Nov 15, 2019] Inheritance, while not "inherently" bad, is often the wrong solution

Nov 15, 2019 | developers.slashdot.org

mfnickster ( 182520 ) , Monday July 22, 2019 @09:54AM ( #58965660 )

Re:Tiresome ( Score: 5 , Interesting)

Inheritance, while not "inherently" bad, is often the wrong solution. See: Why extends is evil [javaworld.com]

Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny little anecdote in Cocoa Programming for Mac OS X [google.com]:

"Once upon a time, there was a company called Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the peak of its mindshare, I met one of its engineers at a trade show.

I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello, World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler.

Then he started generating code: dozens of lines to get the button and the text field onto the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever."
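The claim that composition is frequently the more appropriate choice can be made concrete with a minimal sketch, written in PHP to match the code samples later on this page; the interface and class names are illustrative only.

<?php

// Inheritance would say: "a ReportPrinter IS-A Logger".
// Composition says: "a ReportPrinter HAS-A Logger" and delegates to it.
interface Logger
{
    public function log(string $message): void;
}

class FileLogger implements Logger
{
    public function log(string $message): void
    {
        file_put_contents('app.log', $message . PHP_EOL, FILE_APPEND);
    }
}

class ReportPrinter
{
    private Logger $logger;

    public function __construct(Logger $logger)
    {
        // The collaborator is injected, not inherited, so it can be
        // swapped (e.g. with a fake logger in tests) without subclassing.
        $this->logger = $logger;
    }

    public function printReport(string $report): void
    {
        echo $report, PHP_EOL;
        $this->logger->log('printed report: ' . $report);
    }
}

$printer = new ReportPrinter(new FileLogger());
$printer->printReport('Hello, World!');

Swapping FileLogger for any other Logger implementation requires no subclassing at all, which is the practical point of "favor composition over inheritance".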

[Nov 15, 2019] Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially an object oriented system. It's just data encapsulation and separating work into manageable modules

Nov 15, 2019 | developers.slashdot.org

Darinbob ( 1142669 ) , Monday July 22, 2019 @02:00AM ( #58963760 )

Re:The issue ( Score: 5 , Insightful)

Almost every programming methodology can be abused by people who really don't know how to program well, or who don't want to. They'll happily create frameworks, implement new development processes, and chart tons of metrics, all while avoiding the work of getting the job done. In some cases the person who writes the most code is the same one who gets the least amount of useful work done.

So, OOP can be misused the same way. Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially an object oriented system. It's just data encapsulation and separating work into manageable modules. That's how it was before anyone ever came up with the dumb name "full-stack developer".

[Nov 15, 2019] Is Object-Oriented Programming a Trillion Dollar Disaster?

Nov 15, 2019 | developers.slashdot.org

(medium.com) 782 Posted by EditorDavid on Monday July 22, 2019 @12:04AM from the OOPs dept.

Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling object-oriented programming "a trillion dollar disaster."

Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization . There's no objective and open evidence that OOP is better than plain procedural programming ...

Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard...

[Nov 15, 2019] Bad programmers create objects for objects' sake, following so-called "design patterns", and no amount of comments saves this spaghetti of interacting "objects"

Nov 15, 2019 | developers.slashdot.org

cardpuncher ( 713057 ) , Monday July 22, 2019 @03:06AM ( #58963948 )

Re:The issue ( Score: 5 , Insightful)

As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the rise of OOP with some curiosity. I think there's a general consensus that abstraction and re-usability are good things - they're the reason subroutines exist - the issue is whether they are ends in themselves.

I struggle with the whole concept of "design patterns". There are clearly common themes in software, but there seems to be a great deal of pressure these days to make your implementation fit some pre-defined template rather than thinking about the application's specific needs for state and concurrency. I have seen some rather eccentric consequences of "patternism".

Correctly written, OOP code allows you to encapsulate just the logic you need for a specific task and to make that specific task available in a wide variety of contexts by judicious use of templating and virtual functions that obviate the need for "refactoring".

Badly written, OOP code can have as many dangerous side effects and as much opacity as any other kind of code. However, I think the key factor is not the choice of programming paradigm, but the design process.

You need to think first about what your code is intended to do and in what circumstances it might be reused. In the context of a larger project, it means identifying commonalities and deciding how best to implement them once. You need to document that design and review it with other interested parties. You need to document the code with clear information about its valid and invalid use. If you've done that, testing should not be a problem.

Some people seem to believe that OOP removes the need for some of that design and documentation. It doesn't and indeed code that you intend to be reused needs *more* design and documentation than the glue that binds it together in any one specific use case. I'm still a firm believer that coding begins with a pencil, not with a keyboard. That's particularly true if you intend to design abstract interfaces that will serve many purposes. In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the costs - and that usually means you not only know your code will be genuinely reusable but will also genuinely be reused.

Rockoon ( 1252108 ) , Monday July 22, 2019 @04:23AM ( #58964192 )

Re:The issue ( Score: 5 , Insightful)
I struggle with the whole concept of "design patterns".

Because design patterns are stupid.

A reasonable programmer can understand reasonable code so long as the data is documented even when the code isn't documented, but will struggle immensely if it were the other way around.

Bad programmers create objects for objects' sake, and because of that they have to follow so-called "design patterns", because no amount of code commenting makes the code easily understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't make the code easier to read, just easier to write.

Those OOP fanatics, if they do "document" their code, add comments like "// increment the index" which is useless shit.

The big win of OOP is only in the encapsulation of the data with the code, and great code treats objects like data structures with attached subroutines, not as "objects", and documents the fuck out of the contained data, while more or less letting the code document itself.

[Nov 15, 2019] 600K lines of code probably would have been more like 100K lines if you had used a language whose ecosystem doesn't goad people into writing so many superfluous layers of indirection, abstraction and boilerplate.

Nov 15, 2019 | developers.slashdot.org

Waffle Iron ( 339739 ) , Monday July 22, 2019 @01:22AM ( #58963646 )

Re:680,303 lines ( Score: 4 , Insightful)
680,303 lines of Java code in the main project in my system.

Probably would've been more like 100,000 lines if you had used a language whose ecosystem doesn't goad people into writing so many superfluous layers of indirection, abstraction and boilerplate.

[Nov 04, 2019] Go (programming language) - Wikipedia

Nov 04, 2019 | en.wikipedia.org

... ... ...

The designers were primarily motivated by their shared dislike of C++ . [26] [27] [28]

... ... ...

Omissions

Go deliberately omits certain features common in other languages, including (implementation) inheritance, generic programming, assertions, pointer arithmetic, implicit type conversions, untagged unions, and tagged unions. The designers added only those facilities that all three agreed on. [95]

Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use of interfaces to achieve dynamic dispatch and composition to reuse code. Composition and delegation are in fact largely automated by struct embedding; according to researchers Schmager et al., this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e., no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance. [61]

The designers express an openness to generic programming and note that built-in functions are in fact type-generic, but these are treated as special cases; Pike calls this a weakness that may at some point be changed. [53] The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it. [96] They are also open to standardizing ways to apply code generation. [97]

Initially omitted, the exception-like panic/recover mechanism was eventually added, which the Go authors advise using for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package (but not across package boundaries; there, error returns are the standard API). [98]

[Oct 08, 2019] Southwest Pilots Blast Boeing in Suit for Deception and Losses from 'Unsafe, Unairworthy' 737 Max

Notable quotes:
"... The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes: ..."
"... When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest." ..."
"... The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like: ..."
"... Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft. ..."
"... Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. ..."
"... And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center condole. Not exactly true, normally those switches were there to shut off power to electrically assisted trim. Ah, it one thing to shut off MCAS it's a whole other thing to shut off power to the planes trim, especially in high speed ✓ and the plane noise up ✓, and not much altitude ✓. ..."
"... Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety all else , that is glaringly obvious to everyone except Boeing. ..."
"... In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots " ..."
"... This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything. ..."
Oct 08, 2019 | www.nakedcapitalism.com

At first blush, the suit filed in Dallas by the Southwest Airlines Pilots Association (SWAPA) against Boeing may seem like a family feud. SWAPA is seeking an estimated $115 million for lost pilots' pay as a result of the grounding of the 34 Boeing 737 Max planes that Southwest owns and the additional 20 that Southwest had planned to add to its fleet by year end 2019. Recall that Southwest was the largest buyer of the 737 Max, followed by American Airlines. However, the damning accusations made by the pilots' union, meaning, erm, pilots, are likely to cause Boeing not just more public relations headaches, but also to give grist to suits by crash victims.

However, one reason that the Max is a sore point with the union was that it was a key leverage point in 2016 contract negotiations:

And Boeing's assurances that the 737 Max was for all practical purposes just a newer 737 factored into the pilots' bargaining stance. Accordingly, one of the causes of action is tortious interference, that Boeing interfered in the contract negotiations to the benefit of Southwest. The filing describes at length how Boeing and Southwest were highly motivated not to have the contract dispute drag on and set back the launch of the 737 Max at Southwest, its showcase buyer. The big point that the suit makes is the plane was unsafe and the pilots never would have agreed to fly it had they known what they know now.

We've embedded the complaint at the end of the post. It's colorful and does a fine job of recapping the sorry history of the development of the airplane. It has damning passages like:

Boeing concealed the fact that the 737 MAX aircraft was not airworthy because, inter alia, it incorporated a single-point failure condition -- a software/flight control logic called the Maneuvering Characteristics Augmentation System ("MCAS") -- that,if fed erroneous data from a single angle-of-attack sensor, would command the aircraft nose-down and into an unrecoverable dive without pilot input or knowledge.

The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes:

Had SWAPA known the truth about the 737 MAX aircraft in 2016, it never would have approved the inclusion of the 737 MAX aircraft as a term in its CBA [collective bargaining agreement], and agreed to operate the aircraft for Southwest. Worse still, had SWAPA known the truth about the 737 MAX aircraft, it would have demanded that Boeing rectify the aircraft's fatal flaws before agreeing to include the aircraft in its CBA, and to provide its pilots, and all pilots, with the necessary information and training needed to respond to the circumstances that the Lion Air Flight 610 and Ethiopian Airlines Flight 302 pilots encountered nearly three years later.

And (boldface original):

Boeing Set SWAPA Pilots Up to Fail

As SWAPA President Jon Weaks, publicly stated, SWAPA pilots "were kept in the dark" by Boeing.

Boeing did not tell SWAPA pilots that MCAS existed and there was no description or mention of MCAS in the Boeing Flight Crew Operations Manual.

There was therefore no way for commercial airline pilots, including SWAPA pilots, to know that MCAS would work in the background to override pilot inputs.

There was no way for them to know that MCAS drew on only one of two angle of attack sensors on the aircraft.

And there was no way for them to know of the terrifying consequences that would follow from a malfunction.

When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest."

SWAPA's pilots, like their counterparts all over the world, were set up for failure

The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like:

By March 2016, Boeing settled on a revision of the MCAS flight control logic.

However, Boeing chose to omit key safeguards that had previously been included in earlier iterations of MCAS used on the Boeing KC-46A Pegasus, a military tanker derivative of the Boeing 767 aircraft.

The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control. Those familiar with the tanker's design explained that these checks were incorporated because "[y]ou don't want the solution to be worse than the initial problem."

The 737 MAX version of MCAS abandoned the safeguards previously relied upon. As discussed below, the 737 MAX MCAS had greater control authority than its predecessor, activated repeatedly upon activation, and relied on input from just one of the plane's two sensors that measure the angle of the plane's nose.

In other words, Boeing can't credibly say that it didn't know better.

Here is one of the sections describing Boeing's cover-ups:

Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.

In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots.

We urge you to read the complaint in full, since it contains juicy insider details, like the significance of Southwest being Boeing's 737 Max "launch partner" and what that entailed in practice, plus recounting dates and names of Boeing personnel who met with SWAPA pilots and made misrepresentations about the aircraft.

If you are time-pressed, the best MSM account is from the Seattle Times, In scathing lawsuit, Southwest pilots' union says Boeing 737 MAX was unsafe

Even though Southwest Airlines is negotiating a settlement with Boeing over losses resulting from the grounding of the 737 Max and the airline has promised to compensate the pilots, the pilots' union at a minimum apparently feels the need to put the heat on Boeing directly. After all, the union could withdraw the complaint if Southwest were to offer satisfactory compensation for the pilots' lost income. And pilots have incentives not to raise safety concerns about the planes they fly. Don't want to spook the horses, after all.

But Southwest pilots are not only the ones most harmed by Boeing's debacle but they are arguably less exposed to the downside of bad press about the 737 Max. It's business fliers who are most sensitive to the risks of the 737 Max, due to seeing the story regularly covered in the business press plus due to often being road warriors. Even though corporate customers account for only 12% of airline customers, they represent an estimated 75% of profits.

Southwest customers don't pay up for front of the bus seats. And many of them presumably value the combination of cheap travel, point to point routes between cities underserved by the majors, and close-in airports, which cut travel times. In other words, that combination of features will make it hard for business travelers who use Southwest regularly to give the airline up, even if the 737 Max gives them the willies. By contrast, premium seat passengers on American or United might find it not all that costly, in terms of convenience and ticket cost (if they are budget sensitive), to fly 737-Max-free Delta until those passengers regain confidence in the grounded plane.

Note that American Airlines' pilot union, when asked about the Southwest claim, said that it also believes its pilots deserve to be compensated for lost flying time, but they plan to obtain it through American Airlines.

If Boeing were smart, it would settle this suit quickly, but so far, Boeing has relied on bluster and denial. So your guess is as good as mine as to how long the legal arm-wrestling goes on.

Update 5:30 AM EDT : One important point that I neglected to include is that the filing also recounts, in gory detail, how Boeing went into "Blame the pilots" mode after the Lion Air crash, insisting the cause was pilot error and would therefore not happen again. Boeing made that claim on a call to all operators, including SWAPA, and then three days later in a meeting with SWAPA.

However, Boeing's actions were inconsistent with this claim. From the filing:

Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft.

Relying on Boeing's description of the problem, the AD directed that in the event of un-commanded nose-down stabilizer trim such as what happened during the Lion Air crash, the flight crew should comply with the Runaway Stabilizer procedure in the Operating Procedures of the 737 MAX manual.

But the AD did not provide a complete description of MCAS or the problem in 737 MAX aircraft that led to the Lion Air crash, and would lead to another crash and the 737 MAX's grounding just months later.

An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft tail down again.

Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft.

Even after the Lion Air crash, Boeing's description of MCAS was still insufficient to correct its lack of disclosure, as demonstrated by a second MCAS-caused crash.

We hoisted this detail because insiders were spouting in our comments section, presumably based on Boeing's patter, that the Lion Air pilots were clearly incompetent, had they only executed the well-known "runaway stabilizer," all would have been fine. Needless to say, this assertion has been shown to be incorrect.


Titus , October 8, 2019 at 4:38 am

Excellent, by any standard. Which does remind me of the NYT magazine story (William Langewiesche, published Sept. 18, 2019) making the claim that basically the pilots who crashed their planes weren't real "Airmen".

And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center console. Not exactly true; normally those switches were there to shut off power to electrically assisted trim. Ah, it's one thing to shut off MCAS; it's a whole other thing to shut off power to the plane's trim, especially at high speed ✓ and the plane nose up ✓, and not much altitude ✓.

And especially if you as a pilot didn't know MCAS was there in the first place. This sort of engineering by Boeing is criminal. And the lying. To everyone. Oh, lest we all forget, the processing power of the in-flight computer is that of an Intel 286. There are times I just want to be beamed back to the home planet. Where we care for each other.

Carolinian , October 8, 2019 at 8:32 am

One should also point out that Langewiesche said that Boeing made disastrous mistakes with the MCAS and that the very future of the Max is cloudy. His article was useful both for greater detail about what happened and for offering some pushback to the idea that the pilots had nothing to do with the accidents.

As for the above, it was obvious from the first Seattle Times stories that these two events and the grounding were going to be a lawsuit magnet. But some of us think Boeing deserves at least a little bit of a defense because their side has been totally silent–either for legal reasons or CYA reasons on the part of their board and bad management.

Brooklin Bridge , October 8, 2019 at 8:08 am

Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety all else , that is glaringly obvious to everyone except Boeing.

Summer , October 8, 2019 at 9:01 am

"The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control "

"Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.

In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots "

This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything.

[Oct 08, 2019] Serious question/Semi-Rant. What the hell is DevOps supposed to be and how does it affect me as a sysadmin in higher ed?

Notable quotes:
"... Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? ..."
"... So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming). ..."
Oct 08, 2019 | www.reddit.com

Posted by u/kevbo423 59 minutes ago

What the hell is DevOps? Every couple months I find myself trying to look into it as all I ever hear and see about is DevOps being the way forward. But each time I research it I can only find things talking about streamlining software updates and quality assurance and yada yada yada. It seems like DevOps only applies to companies that make software as a product. How does that affect me as a sysadmin for higher education? My "company's" product isn't software.

Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? Again, when I try to research them a majority of what I find just links back to software development.

To give a rough idea of what I deal with, below is a list of my three main responsibilities.

  1. macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+ machines)

  2. Network Administration (I just started with this a couple months ago and I'm slowly learning about our infrastructure and network administration in general from our IT director. We have several buildings spread across our entire campus with a mixture of Juniper, Dell, and Brocade equipment.)

  3. AV Systems Design and Programming (I'm the only person who does anything related to video conferencing, meeting room equipment, presentation systems, digital signage, etc. for 7 buildings.)

So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming).

I've been working at the same job for 5 years and I feel like I'm being left in the dust by the entire rest of the industry. I'm being pulled in so many different directions that I feel like it's impossible for me to ever get another job. At the same time, I can't specialize in anything because I have so many different unrelated areas I'm supposed to be doing work in.

And this is what I go through/ask myself every few months I try to research and learn DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing to offer. Thanks in advance.

kimvila 2 points · 27 minutes ago

· edited 23 minutes ago

there's a lot of tools that can be used to make your life much easier that's used on a daily basis for DevOps, but apparently that's not the case for you. when you manage infra as code, you're using DevOps.

there's a lot of space for operations guys like you (and me) so look to DevOps as an alternative source of knowledge, just to stay tuned on the trends of the industry and improve your skills.

for higher education, this is useful for managing large projects and looking for improvement during the development of the product/service itself. but again, that's not the case for you. if you intend to switch to another position, you may try to search for a certification program that suits your needs

Mongoloid_the_Retard 0 points · 46 minutes ago

DevOps is a cult.

[Oct 08, 2019] FALLACIES AND PITFALLS OF OO PROGRAMMING by David Hoag and Anthony Sintes

Notable quotes:
"... In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part! ..."
"... Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture. ..."
"... OO will insure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management. ..."
"... OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. ..."
"... OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake. ..."
Apr 27, 2000 | www.chicagotribune.com

"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first few articles, we introduced the three tenants of object-oriented programming: encapsulation, inheritance and polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the Java class.

Each of our previous articles had a common thread. We have written about the strengths and benefits of the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we do not want to give anyone a false sense that object-oriented techniques are always the perfect answer. Object-oriented techniques are not the magic "silver bullets" of programming.

In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part!

Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture.

If anything, using OO makes design and architecture more important because without a clear, well-planned design, OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier maintenance.

Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO isn't flawed, but some of the hype has given OO developers and managers a false sense of security.

Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential pitfalls.

Fallacies of OO

It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow these common fallacies to mislead you.

OO Pitfalls

Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly, however, the results can be disastrous.

OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your skills, you will fall behind.

OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you might be disappointed in your initial OO results.

OO technologies might not fit your corporate culture: The successful application of OO requires that your development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work or if you'll never reap the benefits of it.

OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are dropping every day. CPUs continue to provide better performance and compilers and virtual machines continue to improve. The small efficiency that you trade for increased productivity and reuse should be well worth it. However, if you're developing an application that tracks millions of data points in real time, OO might not be the answer for you.

OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake.

What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware of the hype. Use an OO approach only when appropriate.

Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm than good. Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of "Hooked on Objects."

David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member specializing in telecommunications consulting for ObjectWave. Contact them at [email protected] or visit their Web site at www.objectwave.com.

BOOKMARKS

Hooked on Objects archive:

chicagotribune.com/go/HOBarchive

Associated message board:

chicagotribune.com/go/HOBtalk

[Oct 07, 2019] Pitfalls of Object Oriented Programming by Tony Albrecht - Technical Consultant

This isn't a general discussion of OO pitfalls and conceptual weaknesses, but a discussion of how conventional 'textbook' OO design approaches can lead to inefficient use of cache & RAM, especially on consoles or other hardware-constrained environments. But it's still good.
Sony Computer Entertainment Europe Research & Development Division

OO is not necessarily EVIL

Its all about the memory

Homogeneity

Data Oriented Design Delivers

[Oct 06, 2019] Weird Al Yankovic - Mission Statement

Highly recommended!
This song seriously streamlined my workflow.
Oct 06, 2019 | www.youtube.com

FanmaR , 4 years ago

Props to the artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up with everything.

Maxwelhse , 3 years ago

He missed "sea change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.

VenetianTemper , 4 years ago

From my experiences as an engineer, never trust a company that describes their product with the word "synergy".

Swag Mcfresh , 5 years ago

For those too young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.

112steinway , 4 years ago

Only in corporate speak can you use a whole lot of words while saying nothing at all.

Jonathan Ingersoll , 3 years ago

As a business major this is basically every essay I wrote.

A.J. Collins , 3 years ago

"The company has undergone organization optimization due to our strategy modification, which includes empowering the support to the operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.

meanmanturbo , 3 years ago

So this is basically a Dilbert strip turned into a song. I approve.

zyxwut321 , 4 years ago

In his big long career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most ambitious songs in pop music history.

teenygozer , 3 years ago

This should be played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius as usual, Mr. Yankovic!

Dunoid , 4 years ago

Maybe I'm too far gone to the world of computer nerds, but "Cloud Computing" seems like it should have been in the song somewhere.

Snoo Lee , 4 years ago

The "paradigm shift" at the end of the video / song is when the corporation screws everybody at the end. Brilliantly done, Al.

A Piece Of Bread , 3 years ago

Don't forget to triangulate the automatonic business monetizer to create exceptional synergy.

GeoffryHawk , 3 years ago

There's a quote it goes something like: A politician is someone who speaks for hours while saying nothing at all. And this is exactly it and it's brilliant.

Sefie Ezephiel , 4 months ago

From the current Gamestop earnings call "address the challenges that have impacted our results, and execute both deliberately and with urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to capital allocation."" yeah Weird Al totally nailed it

Phil H , 6 months ago

"People who enjoy meetings should not be put in charge of anything." -Thomas Sowell

Laff , 3 years ago

I heard "monetize our asses" for some reason...

Brett Naylor , 4 years ago

Excuse me, but "proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you of anything like that. [pause] I'm fired, aren't I?~George Meyer

Mark Kahn , 4 years ago

Brilliant social commentary, on how the height of 60's optimism was bastardized into corporate enthusiasm. I hope Steve Jobs got to see this.

Mark , 4 years ago

That's the strangest "Draw My Life" I've ever seen.

Δ , 17 hours ago

I watch this at least once a day to take the edge off my job search whenever I have to decipher fifteen daily want-ads claiming to seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment Associates", eventually to discover they want someone to run a cash register and sweep up.

Mike The SandbridgeKid , 5 years ago

The irony is a song about Corporate Speak in the style of tie-died, hippie-dippy CSN (+/- )Y four-part harmony. Suite Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street,The City, and the monstrous excesses of Unbridled Capitalism.

Geetar Bear , 4 years ago (edited)

This reminds me of George carlin so much

Vaugn Ripen , 2 years ago

If you understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on. This and Frank's 2000 inch tv are my favorite songs of yours. Thanks Al!

Joolz Godfree , 4 years ago

hahaha, "Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term i haven't heard/read/seen in years, since last being an office drone. =D

Miles Lacey , 4 years ago

When I interact with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized to synergize with the client-centric requirements of the microcosmic community focussed social entities that I administrate on social media while interfacing energetically about the inherent shortcomings of the current socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of coherent verbalisations capable of comprehension?

Soufriere , 5 years ago

When I bought "Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed. All the corporate buzzwords! (except "pivot", apparently).

[Sep 08, 2019] The Art of Defensive Programming by Diego

Dec 25, 2016 | medium.com

... ... ...

Never trust user input

Always assume you're going to receive something you don't expect. This should be your approach as a defensive programmer against user input, or in general against anything coming into your system. That's because, as we said, we should expect the unexpected. Try to be as strict as possible. Assert that your input values are what you expect.

The best defense is a good offense

Use whitelists, not blacklists. For example, when validating an image extension, don't check for the invalid types; check for the valid types and exclude all the rest. In PHP you also have plenty of open-source validation libraries to make your job easier.

The best defense is a good offense. Be strict
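As a minimal sketch of the whitelist idea (the function name and the particular extension list below are illustrative, not taken from the article):

<?php

// Whitelist validation: only explicitly allowed extensions pass;
// everything else is rejected by default.
function hasAllowedImageExtension(string $filename): bool
{
    $whitelist = ['jpg', 'jpeg', 'png', 'gif'];
    $extension = strtolower(pathinfo($filename, PATHINFO_EXTENSION));

    return in_array($extension, $whitelist, true);
}

var_dump(hasAllowedImageExtension('avatar.png'));     // bool(true)
var_dump(hasAllowedImageExtension('exploit.phtml'));  // bool(false)

In a real upload handler you would also check the actual file contents (for example the MIME type), since an extension alone proves nothing about what the file contains.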

Use database abstraction

The first entry in the OWASP Top 10 security vulnerabilities is Injection. That means a lot of people out there are still not using secure tools to query their databases. Please use database abstraction packages and libraries. In PHP you can use PDO to ensure basic injection protection.
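A minimal sketch of what that looks like with PDO; the DSN, credentials and table are placeholders, not values from the article:

<?php

// The DSN, credentials and schema below are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// The user-supplied value is bound as a parameter, never concatenated
// into the SQL string, so it cannot change the structure of the query.
$stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = :email');
$stmt->execute([':email' => $_GET['email'] ?? '']);

$user = $stmt->fetch(PDO::FETCH_ASSOC);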

Don't reinvent the wheel

You don't use a framework (or micro-framework)? Well, you like doing extra work for no reason, congratulations! It's not only about frameworks, but also about new features where you could easily use something that's already out there, well tested, trusted by thousands of developers and stable, rather than crafting something by yourself only for the sake of it. The only reason to build something yourself is that you need something that doesn't exist, or that exists but doesn't fit your needs (bad performance, missing features, etc.).

That's what is usually called intelligent code reuse. Embrace it.

Don't trust developers

Defensive programming can be related to something called Defensive Driving. In defensive driving we assume that everyone around us can potentially make mistakes, so we have to be careful even about others' behavior. The same concept applies to defensive programming, where we, as developers, shouldn't trust other developers' code. We shouldn't trust our own code either.

In big projects, where many people are involved, we can have many different ways of writing and organizing code. This can also lead to confusion and even more bugs. That's why we should enforce coding styles and use a mess detector to make our lives easier.

Write SOLID code

That's the tough part for a (defensive) programmer: writing code that doesn't suck. And this is a thing many people know and talk about, but nobody really cares about or puts the right amount of attention and effort into in order to achieve SOLID code.

Let's see some bad examples

Don't: Uninitialized properties

<?php

class BankAccount
{
    protected $currency = null;

    public function setCurrency($currency) { ... }

    public function payTo(Account $to, $amount)
    {
        // sorry for this silly example
        $this->transaction->process($to, $amount, $this->currency);
    }
}

// I forgot to call $bankAccount->setCurrency('GBP');
$bankAccount->payTo($joe, 100);

In this case we have to remember that, in order to issue a payment, we first need to call setCurrency. That's a really bad thing: a state-changing operation like that (issuing a payment) shouldn't be done in two steps, using two (or n) public methods. We can still have many methods to do the payment, but we must have only one simple public method in order to change the state (objects should never be in an inconsistent state).

In the version below we make it even better, encapsulating the uninitialised property into a Money object:

<?php

class BankAccount
{
    public function payTo(Account $to, Money $money) { ... }
}

$bankAccount->payTo($joe, new Money(100, new Currency('GBP')));

Make it foolproof. Don't use uninitialized object properties

Don't: Leaking state outside class scope

<?php

class Message
{
    protected $content;

    public function setContent($content)
    {
        $this->content = $content;
    }
}

class Mailer
{
    protected $message;

    public function __construct(Message $message)
    {
        $this->message = $message;
    }

    public function sendMessage()
    {
        var_dump($this->message);
    }
}

$message = new Message();
$message->setContent("bob message");
$joeMailer = new Mailer($message);

$message->setContent("joe message");
$bobMailer = new Mailer($message);

$joeMailer->sendMessage();
$bobMailer->sendMessage();

In this case Message is passed by reference and the result will be "joe message" in both cases. A solution would be cloning the message object in the Mailer constructor. But what we should always try to do is to use an (immutable) value object instead of a plain mutable Message object.

Use immutable objects when you can

<?php

class Message
{
    protected $content;

    public function __construct($content)
    {
        $this->content = $content;
    }
}

class Mailer
{
    protected $message;

    public function __construct(Message $message)
    {
        $this->message = $message;
    }

    public function sendMessage()
    {
        var_dump($this->message);
    }
}

$joeMailer = new Mailer(new Message("bob message"));
$bobMailer = new Mailer(new Message("joe message"));

$joeMailer->sendMessage();
$bobMailer->sendMessage();
Write tests

Do we still need to say that? Writing unit tests will help you adhere to common principles such as high cohesion, single responsibility, low coupling and right object composition. It helps you test not only the small working unit but also the way you structured your objects. Indeed, when testing your small functions, you'll clearly see how many cases you need to test and how many objects you need to mock in order to achieve 100% code coverage.
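As a minimal sketch of what such a test can look like with PHPUnit (the add and getAmount methods on the Money value object are hypothetical, introduced here only for illustration):

<?php

use PHPUnit\Framework\TestCase;

// Tests a hypothetical immutable Money value object like the one
// sketched in the BankAccount example above.
class MoneyTest extends TestCase
{
    public function testAddReturnsANewInstance(): void
    {
        $five = new Money(5, new Currency('GBP'));
        $ten  = $five->add(new Money(5, new Currency('GBP')));

        // The original object is untouched: immutability in action.
        $this->assertSame(5, $five->getAmount());
        $this->assertSame(10, $ten->getAmount());
    }
}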

Conclusions

Hope you liked the article. Remember those are just suggestions, it's up to you to know when, where and if to apply them.

[Sep 07, 2019] As soon as you stop writing code on a regular basis you stop being a programmer. You lose your qualification very quickly. That's a typical tragedy of talented programmers who became mediocre managers or, worse, theoretical computer scientists

Programming skills are somewhat similar to the skills of people who play the violin or piano. As soon as you stop playing, the skills start to evaporate: first slowly, then more quickly. In two years you will probably lose 80%.
Notable quotes:
"... I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. ..."
Sep 07, 2019 | archive.computerhistory.org

Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.

I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.

We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.

He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.

So, programming.

I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.

[Sep 07, 2019] Knuth about computer science and money: At that point I made the decision in my life that I wasn't going to optimize my income;

Sep 07, 2019 | archive.computerhistory.org

So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I am a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was [an impossible number]. In that era this was not quite at Bill Gates's level today, but it was sort of out there.

The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."

At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.

[Sep 07, 2019] Knuth: maybe 1 in 50 people have the "computer scientist's" type of intellect

Sep 07, 2019 | conservancy.umn.edu

Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."

Knuth: Yes.

Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....

Knuth: That is different?

Frana: What are the characteristics?

Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.

Frana: It is the hardest thing to really understand that which you are existing within.

Knuth: Yes.

[Sep 07, 2019] Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together

Sep 07, 2019 | conservancy.umn.edu

Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.

But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.

When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...

[Sep 07, 2019] The idea of literate programming is that I'm talking to, I'm writing a program for, a human being to read rather than a computer to read. This is probably not enough

Knuth's description is convoluted and not very convincing. Essentially Perl POD implements the idea of literate programming inside the Perl interpreter, allowing long fragments of documentation to be mixed with the text of the program. But this is not enough. Essentially Knuth simply adapted TeX to provide a high-level description of what the program is doing. Mixing the description and the code has one important problem: while it helps to understand the logic of the program, the program itself becomes more difficult to debug as it spreads over way too many pages.
So there should be an additional step that provides the capability to separate the documentation and the program in the programming editor, folding all documentation (or folding all program text). You need the capability to see alternately just the documentation or just the program, preserving the original line numbers. This issue escapes Knuth, who probably mostly works with paper anyway.
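For concreteness, here is a small sketch of the POD style being discussed (the script and its file format are invented for this example). The =pod/=cut blocks are skipped by the interpreter but picked up by perldoc, so narrative and code live in one file; a folding editor of the kind suggested above would let you collapse either the POD blocks or the code between them.

#!/usr/bin/perl
use strict;
use warnings;

=pod

=head1 PURPOSE

Read a colon-separated passwd-style file and report duplicate numeric
user IDs. The narrative explains the intent; the code carries it out.

=cut

my %seen;                      # uid => first login that claimed it
while (my $line = <>) {
    chomp $line;
    my ($login, undef, $uid) = split /:/, $line;
    next unless defined $uid;

=pod

A duplicate uid is reported only once, against the first login that
claimed it; later occurrences are the ones flagged.

=cut

    if (exists $seen{$uid}) {
        print "duplicate uid $uid: $login (first seen as $seen{$uid})\n";
    }
    else {
        $seen{$uid} = $login;
    }
}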
Sep 07, 2019 | archive.computerhistory.org
Feigenbaum: I'd like to do that, to move on to the third period. You've already mentioned one of them, the retirement issue, and let's talk about that. The second one you mentioned quite early on, which is the birth in your mind of literate programming, and that's another major development. Before I quit my little monologue here I also would like to talk about random graphs, because I think that's a stunning story that needs to be told. Let's talk about either the retirement or literate programming.

Knuth: I'm glad you brought up literate programming, because it was in my mind the greatest spinoff of the TeX project. I'm not the best person to judge, but in some ways, certainly for my own life, it was the main plus I got out of the TeX project was that I learned a new way to program.

I love programming, but I really love literate programming. The idea of literate programming is that I'm talking to, I'm writing a program for, a human being to read rather than a computer to read. It's still a program and it's still doing the stuff, but I'm a teacher to a person. I'm addressing my program to a thinking being, but I'm also being exact enough so that a computer can understand it as well.

And that made me think. I'm not sure if I mentioned last week, but I think I did mention last week, that the genesis of literate programming was that Tony Hoare was interested in publishing source code for programs. This was a challenge, to find a way to do this, and literate programming was my answer to this question. That is, if I had to take a large program like TeX or METAFONT, fairly large, it's 5 or 600 pages of a book--how would you do that?

The answer was to present it as sort of a hypertext, where you have a lot of simple things connected in simple ways in order to understand the whole. Once I realized that this was a good way to write programs, then I had this strong urge to go through and take every program I'd ever written in my life and make it literate. It's so much better than the next best way, I can't imagine trying to write a program any other way. On the other hand, the next best way is good enough that people can write lots and lots of very great programs without using literate programming. So it's not essential that they do. But I do have the gut feeling that if some company would start using literate programming for all of its software that I would be much more inclined to buy that software than any other.

Feigenbaum: Just a couple of things about that that you have mentioned to me in the past. One is your feeling that programs can be beautiful, and therefore they ought to be read like poetry. The other one is a heuristic that you told me about, which is if you want to get across an idea, you got to present it two ways: a kind of intuitive way, and a formal way, and that fits in with literate programming.

Knuth: Right.

Feigenbaum: Do you want to comment on those?

Knuth: Yeah. That's the key idea that I realized as I'm writing The Art of Computer Programming, the textbook. That the key to good exposition is to say everything twice, or three times, where I say something informally and formally. The reader gets to lodge it in his brain in two different ways, and they reinforce each other. All the time I'm giving in my textbooks I'm saying not only that I'm.. Well, let's see. I'm giving a formula, but I'm also interpreting the formula as to what it's good for. I'm giving a definition, and immediately I apply the definition to a simple case, so that the person learns not only the output of the definition -- what it means -- but also to internalize, using it once in your head. Describing a computer program, it's natural to say everything in the program twice. You say it in English, what the goals of this part of the program are, but then you say in your computer language -- in the formal language, whatever language you're using, if it's LISP or Pascal or Fortran or whatever, C, Java -- you give it in the computer language.

You alternate between the informal and the formal.

Literate programming enforces this idea. It has very interesting effects. I find that, for example, writing a system program, I did examples with literate programming where I took device drivers that I received from Sun Microsystems. They had device drivers for one of my printers, and I rewrote the device driver so that I could combine my laser printer with a previewer that would get exactly the same raster image. I took this industrial strength software and I redid it as a literate program. I found out that the literate version was actually a lot better in several other ways that were completely unexpected to me, because it was more robust.

When you're writing a subroutine in the normal way, a good system program, a subroutine, is supposed to check that its parameters make sense, or else it's going to crash the machine.

If they don't make sense it tries to do a reasonable error recovery from the bad data. If you're writing the subroutine in the ordinary way, just start the subroutine, and then all the code.

Then at the end, if you do a really good job of this testing and error recovery, it turns out that your subroutine ends up having 30 lines of code for error recovery and checking, and five lines of code for what the real purpose of the subroutine is. It doesn't look right to you. You're looking at the subroutine and it looks the purpose of the subroutine is to write certain error messages out, or something like this.

Since it doesn't quite look right, a programmer, as he's writing it, is suddenly unconsciously encouraged to minimize the amount of error checking that's going on, and get it done in some elegant fashion so that you can see what the real purpose of the subroutine is in these five lines. Okay.

But now with literate programming, you start out, you write the subroutine, and you put a line in there to say, "Check for errors," and then you do your five lines.

The subroutine looks good. Now you turn the page. On the next page it says, "Check for errors." Now you're encouraged.

As you're writing the next page, it looks really right to do a good checking for errors. This kind of thing happened over and over again when I was looking at the industrial software. This is part of what I meant by some of the effects of it.

But the main point of being able to combine the informal and the formal means that a human being can understand the code much better than just looking at one or the other, or just looking at an ordinary program with sprinkled comments. It's so much easier to maintain the program. In the comments you also explain what doesn't work, or any subtleties. Or you can say, "Now note the following. Here is the tricky part in line 5, and it works because of this." You can explain all of the things that a maintainer needs to know.

I'm the maintainer too, but after a year I've forgotten totally what I was thinking when I wrote the program. All this goes in as part of the literate program, and makes the program easier to debug, easier to maintain, and better in quality. It does better error messages and things like that, because of the other effects. That's why I'm so convinced that literate programming is a great spinoff of the TeX project.
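Perl has no CWEB-style named sections, but the effect Knuth describes can be approximated in a rough sketch (all names here are invented, and this is an analogy rather than his method): keep the subroutine down to its few essential lines plus one call whose name plays the role of the "Check for errors" section, and do the thorough checking in a routine documented on its own page.

use strict;
use warnings;
use Carp qw(croak);

# The subroutine reads as its few essential lines plus one named step,
# much as a literate program reads "<<Check for errors>>" and moves on.
sub write_record {
    my ($fh, $record) = @_;
    _check_write_record_args($fh, $record);    # the "Check for errors" page

    my $line = join "\t", @{$record}{qw(id name value)};
    print {$fh} $line, "\n"
        or croak "write failed: $!";
    return 1;
}

# Expanded elsewhere, where doing a thorough job looks right, not noisy.
sub _check_write_record_args {
    my ($fh, $record) = @_;
    croak "no filehandle given"      unless defined $fh;
    croak "record must be a hashref" unless ref $record eq 'HASH';
    for my $field (qw(id name value)) {
        croak "record is missing '$field'" unless defined $record->{$field};
    }
    return;
}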

Feigenbaum: Just one other comment. As you describe this, it's the kind of programming methodology you wish were being used on, let's say, the complex system that controls an aircraft. But Boeing isn't using it.

Knuth: Yeah. Well, some companies do, but the small ones. Hewlett-Packard had a group in Boise that was sold on it for a while. I keep getting I got a letter from Korea not so long ago. The guy says he thinks it's wonderful; he just translated the CWEB manual into Korean. A lot of people like it, but it doesn't take over. It doesn't get to a critical mass. I think the reason is that a lot of people don't enjoy writing the English parts. A lot of good programmers don't enjoy writing the English parts. Two percent of the world's population is born to be programmers. I don't know what percent is born to be writers, but you have to be in the intersection in order to be really happy with literate programming. I tried it with Stanford students. I had seven undergraduates. We did a project leading to the Stanford GraphBase. Six of the seven did very well with it, and the seventh one hated it.

Feigenbaum: Don, I want to get on to other topics, but you mentioned GWEB. Can you talk about WEB and GWEB, just because we're trying to be complete?

Knuth: Yeah. It's CWEB. The original WEB language was invented before the [world wide] web of the internet, but it was the only pronounceable three-letter acronym that hadn't been used at the time. It described nicely the hypertext idea, which now is why we often refer to the internet as a web too. CWEB is the version that Silvio Levy ported from the original Pascal. English and Pascal was WEB. English and C is CWEB. Now it works also with C++. Then there's FWEB for Fortran, and there's noweb that works with any language. There's all kinds of spinoffs. There's the one for Lisp. People have written books where they have their own versions of CWEB too. I got this wonderful book from Germany a year ago that goes through the entire MP3 standard. The book is not only a textbook that you can use in an undergraduate course, but it's also a program that will read an MP3 file. The book itself will tell exactly what's in the MP3 file, including its header and its redundancy check mechanism, plus all the ways to play the audio, and algorithms for synthesizing music. All of it a part of a textbook, all part of a literate program. In other words, I see the idea isn't dying. But it's just not taking over.

Feigenbaum: We've been talking about, as we've been moving toward the third Stanford period which includes the work on literate programming even though that originated earlier. There was another event that you told me about which you described as probably your best contribution to mathematics, the subject of random graphs. It involved a discovery story which I think is very interesting. If you could sort of wander us through random graphs and what this discovery was.

[Sep 06, 2019] Knuth: Programming and architecture are interrelated and it is impossible to create good architecture without actually programming at least a prototype

Notable quotes:
"... When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?" ..."
"... When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this." ..."
Sep 06, 2019 | archive.computerhistory.org

...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed [it]. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."

Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.

He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.

When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"

It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."

If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.

I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.

Forsythe, I think it was, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important

[Sep 06, 2019] Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered

Sep 06, 2019 | conservancy.umn.edu

Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.

[Sep 06, 2019] How TAOCP was hatched

Notable quotes:
"... Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill. ..."
"... But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly. ..."
Sep 06, 2019 | archive.computerhistory.org

Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.

We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."

The more I thought about it, I decided "Oh yes, I've got this book inside of me."

I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."

As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.

Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a second-year graduate student -- this was a thrill.

Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books. I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation. Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.

Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.

Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing. There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point. So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing", 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take. It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on. Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.
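The linear probing scheme Knuth describes can be sketched in a few lines of Perl (a minimal illustration, not his 1962 code: Perl's built-in hashes make this unnecessary in practice, the scramble function is an arbitrary choice, and the sketch assumes the table never fills up).

use strict;
use warnings;

my $SIZE = 1000;          # table size, as in the 1962-era example
my @table;                # each slot holds [ $key, $value ] or undef

# "Scramble" the identifier into a slot number; any cheap hash will do here.
sub scramble {
    my ($key) = @_;
    my $h = 0;
    $h = ($h * 31 + ord $_) % $SIZE for split //, $key;
    return $h;
}

# Linear probing: start at the scrambled slot and walk forward until the key
# or an empty slot is found, wrapping around at the end of the table.
# (Assumes the table never fills completely.)
sub probe {
    my ($key) = @_;
    my $i = scramble($key);
    while (defined $table[$i] && $table[$i][0] ne $key) {
        $i = ($i + 1) % $SIZE;
    }
    return $i;
}

sub insert { my ($k, $v) = @_; $table[ probe($k) ] = [ $k, $v ]; }
sub lookup { my ($k) = @_; my $s = $table[ probe($k) ]; return $s ? $s->[1] : undef; }

insert('alpha', 1);
insert('beta',  2);
print lookup('beta'), "\n";    # prints 2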

Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.

Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.

In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.

I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book. Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain. But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.

So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithm. I went to the publishers, I went to Addison Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.

But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.

I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.

Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.

So I didn't put in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.

It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of

[Sep 06, 2019] Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

Notable quotes:
"... Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). ..."
Sep 06, 2019 | news.ycombinator.com
fhars on Mar 29, 2011
Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

The only widely used OO language (for sufficiently narrow values of wide and wide values of OO) to get that right used to be Objective Caml, and recently its stepchildren F# and scala. So it is actually FP that helps you with the classification.

Xurinos on Mar 29, 2011

This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class methods, friend methods, and so on).

I agree with you that subclassing (for the purpose of reusing behavior), traits (for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to depart from type systems and be used for mere code organization.

ajays on Mar 29, 2011
"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?"

Far be it for me to defend Java (I hate the damn thing), but: main is just a function in a class. The class is the entry point, as specified in the command line; main is just what the OS looks for, by convention. You could have a "main" in each class, but only the one in the specified class will be the entry point.

GrooveStomp on Mar 29, 2011
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation. Or, simply hand-wave the explanation away, claiming it as "too complex" to fully understand without years of rigorous training. Of course I jest. :)

[Sep 04, 2019] 737 MAX - Boeing Insults International Safety Regulators As New Problems Cause Longer Grounding

The 80286 Intel processors: The Intel 80286[3] (also marketed as the iAPX 286[4] and often called Intel 286) is a 16-bit microprocessor that was introduced on February 1, 1982. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then widely used in most PC/AT compatible computers until the early 1990s.
Notable quotes:
"... The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes. ..."
"... In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials. ..."
"... Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software is running on is limited in its capacity. All the extras procedures Boeing now will add to them may well exceed the system's capabilities. ..."
"... The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo. ..."
"... Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance. ..."
"... I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of the their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership. ..."
"... So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. ..."
"... Arrogance? When the money keeps flowing in anyway, it comes naturally. ..."
"... In the neoliberal world order governments, regulators and the public are secondary to corporate profits. ..."
"... I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX. ..."
"... I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees. ..."
"... Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. ..."
"... Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War. ..."
"... In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumable the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe with the University of Chicago being its central shrine of worship, not to mention supply of future managers and administrators) in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance. ..."
"... For many amerikans, a good "offensive" is far preferable than a good defense even if that only involves an apology. Remember what ALL US presidents say.. We will never apologize.. ..."
"... Actually can you show me a single place in the US where ethics are considered a bastion of governorship? ..."
"... You got to be daft or bribed to use intel cpu's in embedded systems. Going from a motorolla cpu, the intel chips were dinosaurs in every way. ..."
"... Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper. ..."
"... That is because the price is propped up by $9 billion share buyback per year . Share buyback is an effective scheme to airlift all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes if you can funnel the cash into your pockets? ..."
"... If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word. ..."
"... Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item." ..."
"... Overall, Boeing buy-backs exceeded 40 billion dollars, one could guess that half or quarter of that would suffice to build a plane that logically combines the latest technologies. E.g. the entire frame design to fit together with engines, processors proper for the information processing load, hydraulics for steering that satisfy force requirements in almost all circumstances etc. New technologies also fail because they are not completely understood, but when the overall design is logical with margins of safety, the faults can be eliminated. ..."
"... Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics. ..."
"... The problem is not new, and it is well understood. What computer modelling is is cheap, and easy to fudge, and that is why it is popular with people who care about money a lot. Much of what is called "AI" is very similar in its limitations, a complicated way to fudge up the results you want, or something close enough for casual examination. ..."
Sep 04, 2019 | www.moonofalabama.org

United Airlines and American Airlines further prolonged the grounding of their Boeing 737 MAX airplanes. They now schedule the plane's return to the flight line in December. But it is likely that the grounding will continue well into next year.

After Boeing's shabby design and lack of safety analysis of its Maneuver Characteristics Augmentation System (MCAS) led to the death of 347 people, the grounding of the type and billions of losses, one would expect the company to show some decency and humility. Unfortunately Boeing behavior demonstrates none.

There is still little detailed information on how Boeing will fix MCAS. Nothing was said by Boeing about the manual trim system of the 737 MAX that does not work when it is needed. The unprotected rudder cables of the plane do not meet safety guidelines but were still certified. The plane's flight control computers can be overwhelmed by bad data and a fix will be difficult to implement. Boeing continues to say nothing about these issues.

International flight safety regulators no longer trust the Federal Aviation Administration (FAA) which failed to uncover those problems when it originally certified the new type. The FAA was also the last regulator to ground the plane after two 737 MAX had crashed. The European Aviation Safety Agency (EASA) asked Boeing to explain and correct five major issues it identified. Other regulators asked additional questions.

Boeing needs to regain the trust of the airlines, pilots and passengers to be able to again sell those planes. Only full and detailed information can achieve that. But the company does not provide any.

As Boeing sells some 80% of its airplanes abroad it needs the good will of the international regulators to get the 737 MAX back into the air. This makes the arrogance it displayed in a meeting with those regulators inexplicable:

Friction between Boeing Co. and international air-safety authorities threatens a new delay in bringing the grounded 737 MAX fleet back into service, according to government and pilot union officials briefed on the matter.

The latest complication in the long-running saga, these officials said, stems from a Boeing briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers.

The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes.

What did the Boeing managers think those regulatory agencies are? Hapless lapdogs like the FAA managers who signed off on Boeing 'features' even after their engineers told them that these were not safe?

Buried in the Wall Street Journal piece quoted above is another little shocker:

In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials.

The new issue must be going beyond the flight control computer (FCC) issues the FAA identified in June .

Boeing's original plan to fix the uncontrolled activation of MCAS was to have both FCCs active at the same time and to switch MCAS off when the two computers disagree. That was already a huge change to the general architecture, which so far consisted of one active and one passive FCC system that could be switched over when a failure occurred.
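Purely as an illustration of that cross-check pattern, and emphatically not Boeing's code or design: the general idea is that a command path stays enabled only while two redundant computers agree within some tolerance (the threshold and values below are arbitrary).

use strict;
use warnings;

# Toy illustration of a dual-channel cross-check: a command path stays
# enabled only while both computers agree within a tolerance.
my $TOLERANCE = 0.05;    # arbitrary threshold, for the example only

sub command_allowed {
    my ($chan_a, $chan_b) = @_;
    return abs($chan_a - $chan_b) <= $TOLERANCE;
}

my ($chan_a, $chan_b) = (1.02, 1.04);
if ( command_allowed($chan_a, $chan_b) ) {
    print "channels agree: command may be applied\n";
}
else {
    print "channels disagree: function disabled, crew alerted\n";
}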

Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software runs on are limited in their capacity. All the extra procedures Boeing will now add may well exceed the system's capabilities.

Changing software in a delicate environment like a flight control computer is extremely difficult. There will always be surprising side effects or regressions where already corrected errors unexpectedly reappear.

The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo.

All procedures and functions of the software will have to be tested in all thinkable combinations to ensure that they will not block or otherwise influence each other. This will take months and there is a high chance that new issues will appear during these tests. They will require more software changes and more testing.

Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance.

Building safe airplanes requires engineers who know that they may make mistakes and who have the humility to allow others to check and correct their work. It requires open communication about such issues. Boeing's say-nothing strategy will prolong the grounding of its planes. It will increase the damage to Boeing's financial situation and reputation.


Posted by b on September 3, 2019 at 18:05 UTC | Permalink


Choderlos de Laclos , Sep 3 2019 18:15 utc | 1

"The 80286 Intel processors the FCC software is running on is limited in its capacity." You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.
b , Sep 3 2019 18:22 utc | 2
You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.

One of the two is writing this.

Half-joking aside: the 737 MAX FCC runs on 80286 processors. There are tens of thousands of programmers available who can program them, though not all are qualified to write real-time systems. That resource is not a problem. The processor's inherent limits are one.

Meshpal , Sep 3 2019 18:24 utc | 3
Thanks b for the fine 737 max update. Others news sources seem to have dropped coverage. It is a very big deal that this grounding has lasted this long. Things are going to get real bad for Boeing if this bird does not get back in the air soon. In any case their credibility is tarnished if not down right trashed.
BraveNewWorld , Sep 3 2019 18:35 utc | 4
@1 Choderlos de Laclos

Whatever software language these are programmed in (my guess is C), the compilers still exist for it and do the translation from the human-readable code to the machine code for you. Of course the code could be assembler, but writing assembly code for a 286 is far easier than writing it for, say, an i9, because the CPU is so much simpler and has a far smaller set of instructions to work with.

Choderlos de Laclos , Sep 3 2019 18:52 utc | 5
@b: It was a hyperbole. I might be another one, but left them behind as fast as I could. The last time I had to deal with it was an embedded system in 1998-ish. But I am also retiring, and so are thousands of others. The problems with support of a legacy system are a legend.
psychohistorian , Sep 3 2019 18:56 utc | 6
Thanks for the demise of Boeing update b

I commented when you first started writing about this that it would take Boeing down and still believe that to be true. To the extent that Boeing is stonewalling the international safety regulators says to me that upper management and big stock holders are being given time to minimize their exposure before the axe falls.

I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of the their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership.

This is how private financialization works in the Western world. Their bottom line is profit, not service to the flying public. It is in line with the recent public statement by the CEO's from the Business Roundtable that said that they were going to focus more on customer satisfaction over profit but their actions continue to say profit is their primary motive.

The God of Mammon private finance religion can not end soon enough for humanity's sake. It is not like we all have to become China but their core public finance example is well worth following.

karlof1 , Sep 3 2019 19:13 utc | 7
So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. IMO, Boeing shareholders's hair ought to be on fire given their BoD's behavior and getting ready to litigate.

As b notes, Boeing's international credibility's hanging by a very thin thread. A year from now, Boeing could very well see its share price deeply dive into the Penny Stock category--its current P/E is 41.5:1 which is massively overpriced. Boeing Bombs might come to mean something vastly different from its initial meaning.

bjd , Sep 3 2019 19:22 utc | 8
Arrogance? When the money keeps flowing in anyway, it comes naturally.
What did I just read , Sep 3 2019 19:49 utc | 10
Such seemingly archaic processors are the norm in aerospace. If the planes flight characteristics had been properly engineered from the start the processor wouldn't be an issue. You can't just spray perfume on a garbage pile and call it a rose.
VietnamVet , Sep 3 2019 20:31 utc | 12
In the neoliberal world order governments, regulators and the public are secondary to corporate profits. This is the same belief system that is suspending the British Parliament to guarantee the chaos of a no deal Brexit. The irony is that globalist, Joe Biden's restart the Cold War and nationalist Donald Trump's Trade Wars both assure that foreign regulators will closely scrutinize the safety of the 737 Max. Even if ignored by corporate media and cleared by the FAA to fly in the USA, Boeing and Wall Street's Dow Jones average are cooked gooses with only 20% of the market. Taking the risk of flying the 737 Max on their family vacation or to their next business trip might even get the credentialed class to realize that their subservient service to corrupt Plutocrats is deadly in the long term.
jared , Sep 3 2019 20:55 utc | 14
It doesn't get any more TBTF than Boeing. A bail-out is only a phone call away. With a downturn looming, the line is forming.
Piotr Berman , Sep 3 2019 21:11 utc | 15
"The latest complication in the long-running saga, these officials said, stems from a Boeing BA, -2.66% briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers."

It seems to me that Boeing had no intention of insulting anybody, but it has an impossible task. After decades of applying duct tape and baling wire with much success, they finally designed an unfixable plane, and they can either abandon this line of business (narrow-body airliners) or start working on a new design grounded in 21st-century technologies.

Ken Murray , Sep 3 2019 21:12 utc | 16
Boeing's military sales are so much more significant and important to them, they are just ignoring/down-playing their commercial problem with the 737 MAX. Follow the real money.
Arata , Sep 3 2019 21:57 utc | 17
It is unbelievable that the flight control computer is based on the 80286! A control system needs real-time operation, at least some pre-emptive task switching, on the order of milliseconds or microseconds. Whatever way you program the 80286, you cannot achieve real-time operation on it. I do not think that is the case. Maybe the 80286 is doing some peripheral work, other than control.
Bemildred , Sep 3 2019 22:11 utc | 18
It is quite likely (IMHO) that they are no longer able to provide the requested information, but of course they cannot say that.

I once wrote a keyboard driver for an 80286, part of an editor, in assembler, on my first PC-type computer. I still have it around here somewhere, I think (the keyboard driver), but I would be rusty like the Titanic when it comes to writing code. I wrote some things in DEC assembler too, on VAXen.

Peter AU 1 , Sep 3 2019 22:14 utc | 19
Arata 16

The spoiler system is fly by wire.

Bemildred , Sep 3 2019 22:17 utc | 20
arata @16: The 80286 does interrupts just fine, but you have to grok asynchronous operation, and most coders don't really; I see that every day in Linux and my browser. I wish I could get that box back: it had DOS, you could program on the bare wires, but God it was slow.
Tod , Sep 3 2019 22:28 utc | 21
Boeing will just need to press the TURBO button on the 286 processor. Problem solved.
karlof1 , Sep 3 2019 22:43 utc | 23
Ken Murray @15--

Boeing recently lost a $6+Billion weapons contract thanks to its similar Q&A in that realm of its business. Its annual earnings are due out in October. Plan to short-sell soon!

Godfree Roberts , Sep 3 2019 22:56 utc | 24
I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom, the MAX.

I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees.

The telecommunications giant also said law enforcement in the U.S. have searched, detained and arrested Huawei employees and its business partners, and have sent FBI agents to the homes of its workers to pressure them to collect information on behalf of the U.S."

https://www.wsj.com/articles/huawei-accuses-the-u-s-of-cyberattacks-threatening-its-employees-11567500484?mod=hp_lead_pos2

Arioch , Sep 3 2019 23:18 utc | 25
I wonder how much blind trust in Boeing is woven into the fabric of civil aviation all around the world.

I mean something like this: Boeing publishes some research into failure statistics, solid-material aging or the like, research that is really hard and expensive to carry out. Everyone takes the results for granted without trying to independently reproduce and verify them, because it's Boeing!

Later, "derived" research gets done on the foundation of prior works *including* that old Boeing research. Then the FAA and similar institutions around the world issue official regulations and guidelines derived from research that was itself in part derived from the original Boeing work. Then insurance companies calculate their tariffs and rate plans, basing their estimates on those "government standards", and when governments determine taxation levels they use that data too. Then airlines and aircraft-leasing companies make their business plans and take huge loans from the banks (and the banks make their own plans expecting those loans to eventually be paid back), and so on and so forth, building the house of cards layer after layer.

And among the very many cornerstones there would be dust-covered, long-forgotten research done by Boeing 10 or maybe 20 years ago, when no one, even in a drunken delirium, could imagine questioning Boeing's verdicts on engineering and scientific matters.

Now that long-standing trust is slowly unraveling. The universally trusted 737NG generation, for instance, turned out to be inherently unsafe, and while only pilots knew it before (and even then only the most curious and pedantic of them), today it is becoming public knowledge that the 737NG is tainted.

Now, when did this corruption start? What should be the cutoff date in the past, such that any technical data coming from Boeing since that day is considered unreliable unless it passes full-fledged independent verification? Should that day be somewhere in the 2000s? The 1990s? Maybe even the 1970s?

And ALL THE BODY of civil aviation industry knowledge accumulated since that date can NO LONGER BE TRUSTED and should be all but scrapped and re-researched anew! ALL THE tacit INPUT that can be traced back to Boeing and ALL THE DERIVED KNOWLEDGE now has to be verified in its entirety.

Miss Lacy , Sep 3 2019 23:19 utc | 26
Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. Until the lawsuits become too enormous. I wonder how much that will cost. And speaking of rigged markets - why do ya suppose that Trumpilator et al have been so keen to make huge sales to the Saudis, etc. etc. ? Ya don't suppose they had an inkling of trouble in the wind do ya? Speaking of insiders, how many million billions do ya suppose is being made in the Wall Street "trade war" roller coaster by peeps, munchkins not muppets, who have access to the Tweeter-in-Chief?
C I eh? , Sep 3 2019 23:25 utc | 27
@6 psychohistorian
I commented when you first started writing about this that it would take Boeing down, and I still believe that to be true. The extent to which Boeing is stonewalling the international safety regulators says to me that upper management and big stockholders are being given time to minimize their exposure before the axe falls.

Have you considered the costs, to the owners specifically, of restructuring versus breaking Boeing apart and selling it off in little pieces?

The MIC is restructuring itself - by first creating the political conditions to make the transformation highly profitable. It can only be made highly profitable by forcing the public to pay the associated costs of Rape and Pillage Incorporated.

Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War.

It is fine that you won't fly Boeing but that is not the point. You may not ever fly again since air transit is subsidized at every level and the US dollar will no longer be available to fund the world's air travel infrastructure.

You will instead be paying for the replacement of Boeing, and seeing what Google is planning, it may not be for the renewal of the airline business but rather for dedicated ground transportation, self-driving cars and perhaps "aerospace" defense forces (thank you, Russia, for setting the trend).

Lochearn , Sep 3 2019 23:45 utc | 30
As readers may remember, I made a case study of Boeing for a fairly recent PhD. The examiners insisted that this case study be taken out because it was "speculative." I had forecast serious problems with the 787 and the 737 MAX back in 2012. I still believe the 787 is seriously flawed and will go the way of the MAX. I came to admire this once-brilliant company, whose work culminated in the superb 777.

America really did make some excellent products in the 20th century - with the exception of cars. Big money piled into GM from the early 1920s, especially the ultra greedy, quasi fascist Du Pont brothers, with the result that GM failed to innovate. It produced beautiful cars but technically they were almost identical to previous models.

The only real innovation in 40 years was the automatic transmission. Does this sound reminiscent of the 737 MAX? What glued GM together for more than thirty years was the brilliance of CEO Alfred Sloan, who managed to keep the Du Ponts (and J. P. Morgan) more or less happy while delegating total responsibility for production to the divisional managers responsible for the different GM brands. When Sloan went, the company started falling apart, and the memoirs of bad boy John DeLorean testify to the complete dysfunction of senior management.

At Ford the situation was perhaps even worse in the 1960s and 1970s. Management was at war with the workers, and faulty transmissions were knowingly installed. All this is documented by ex-Ford supervisor Robert Dewar in his excellent book "A Savage Factory."

dus7 , Sep 3 2019 23:53 utc | 32
Well, the first thing that came to mind upon reading about Boeing's apparent arrogance overseas - silly, I know - was that Boeing may be counting on some weird Trump sanctions against anyone not cooperating with the big important USian corporation! The U.S. has influence on European and many other countries, but it can only be stretched so far, and I would guess messing with European/international airline regulators, especially in view of the very real fatal accidents with the 737 MAX, would be a stretch too far.
david , Sep 4 2019 0:09 utc | 34
Please read the following article for further info about how the five big funds that hold 67% of Boeing stock are working hard with the big banks to keep the stock high. Meanwhile, Boeing is also trying its best to blackmail US taxpayers through the Pentagon, for example by pretending to walk away from a competitive bidding contract because it wants the Air Force to provide a better cost formula.

https://www.theamericanconservative.com/articles/despite-devastating-737-crashes-boeing-stocks-fly-high/

So basically, Boeing is being kept afloat by US taxpayers because it is "too big to fail" and an important component of the Dow. Please tell: who are the biggest suckers here?

chu teh , Sep 4 2019 0:13 utc | 36
re Piotr Berman | Sep 3 2019 21:11 utc [I have a tiny bit of standing in this matter, based on experience with an amazingly similar situation that has not heretofore been mentioned. More at the end. Thus I offer my opinion.] Indeed, it is an impossible task to design a workable answer and still maintain the fiction that the 737 MAX is a high-profit-margin upgrade requiring minimal training of already-trained 737-series pilots, male or female. Turning off the autopilot to bypass a runaway stabilizer necessitates:

[1] The earlier 737-series "rollercoaster" procedure to overcome too-high aerodynamic forces must be taught and demonstrated as a memory item to all pilots. The procedure was designed for the early 737-series models, not the 737 MAX, which has a uniquely different center of gravity and a pitch-up problem requiring MCAS to auto-correct, especially on take-off.

[2] The "rollercoaster" procedure does not work at all altitudes. It causes the aircraft to lose some altitude and therefore requires at least [about] 7,000 feet of above-ground clearance to avoid ground contact. [This altitude loss consumed by the procedure is based on alleged reports of simulator demonstrations. There seems to be no known agreement on the actual amount of loss.]

[3] The physical requirements to perform the "rollercoaster" procedure were established at a time when female pilots were rare. Any 737 MAX pilots, male or female, will have to pass new physical requirements demonstrated under actual conditions on newly designed flight simulators that mimic the higher load requirements of the 737 MAX. Such new standards will also have to compensate for left- vs. right-handed pilots, because the manual trim wheel is located between the pilot and copilot seats.

================

Now where/when has a similar situation occurred? I.e., where a federal regulatory agency [the FAA] allowed a vendor [Boeing] to claim that a modified product did not need full inspection/review to get agency certification of performance [airworthiness]. As you may know, two working nuclear power plants were forced to shut down and be decommissioned when, in 2011, two newly installed critical components in each plant were discovered to be defective, beyond repair and not replaceable. These power plants had each been producing over 1,000 megawatts of power for over 20 years. In short, the failed components were modifications of the original, successful design that were claimed to need only a low level of Federal Nuclear Regulatory Commission oversight and approval. The mods were, in fact, new and untried, and yet were tested only by computer modeling and theoretical estimates based on experience with smaller/different designs.

<<< The NRC had not given full inspection/oversight to the new units because of manufacturer/operator claims that the changes were not significant. The NRC did not verify the veracity of those claims. >>>

All 4 components [2 required in each plant] were essentially heat-exchangers weighing 640 tons each, having 10,000 tubes carrying radioactive water surrounded by [transferring their heat to] a separate flow of "clean" water. The tubes were progressively damaged and began leaking. The new design failed. It can not be fixed. Thus, both plants of the San Onofre Nuclear Generating Station are now a complete loss and await dismantling [as the courts will decide who pays for the fiasco].

Jen , Sep 4 2019 0:20 utc | 37
In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumably the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe, with the University of Chicago being its central shrine of worship, not to mention its supply of future managers and administrators) in 1997 says much about the change in corporate culture and values, from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance.

Phew! I barely took a breath there! :-)

Lochearn , Sep 4 2019 0:22 utc | 38
@ 32 david

Good article. Boeing is, or used to be, America's biggest manufacturing export. So you are right it cannot be allowed to fail. Boeing is also a manufacturer of military aircraft. The fact that it is now in such a pitiful state is symptomatic of America's decline and decadence and its takeover by financial predators.

jo6pac , Sep 4 2019 0:39 utc | 40
Posted by: Jen | Sep 4 2019 0:20 utc | 35

Nailed it. Moved to the city of the dead but not forgotten Uncle Milton Friedman, friend of Ayn Rand.

vk , Sep 4 2019 0:53 utc | 41
I don't think Boeing was arrogant. I think the 737 is simply unfixable and that they know that -- hence they went to the meeting with empty hands.
C I eh? , Sep 4 2019 1:14 utc | 42
They did the same with Nortel, whose share value exceeded 300 billion not long before it was scrapped. Insiders took everything while pension funds were wiped out of existence.

It is so very helpful to understand that everything you read is corporate/intel propaganda, and that you are always being set up to pay for the next great scam. The murder of 300+ people by Boeing was yet another tragedy our sadistic elites could not let go to waste.

Walter , Sep 4 2019 3:10 utc | 43

...And to the idea that Boeing is being kept afloat by financial agencies.

Willow , Sep 4 2019 3:16 utc | 44
Al Jazeera has a series of excellent investigative documentaries on Boeing. Here is one from 2014: https://www.aljazeera.com/investigations/boeing787/
Igor Bundy , Sep 4 2019 3:17 utc | 45
For many amerikans, a good "offensive" is far preferable to a good defense, even if that defense only involves an apology. Remember what ALL US presidents say.. We will never apologize.. For the extermination of natives, for shooting down civilian airliners, for blowing up mosques full of worshipers, for bombing hospitals.. for reducing many countries to the stone age and using biological and chemical and nuclear weapons against the planet.. For supporting terrorists who plague the planet now. For basically being able to be unaccountable to anyone, including themselves. So it is not the least surprising that amerikan corporations also follow the same bad manners as those they put into office and pre-elect to rule them.
Igor Bundy , Sep 4 2019 3:26 utc | 46
People talk about Seattle as if it's a bastion of integrity.. Isn't it the same place where Microsoft screwed up countless companies to become the largest OS maker? The same place where Amazon works out how to screw its own employees into working longer and cheaper? There are enough examples that Seattle is not Toronto.. and will never be a bastion of ethics..

Actually, can you show me a single place in the US where ethics are considered a bastion of governance? Other than the libraries of content written about ethics, rarely do amerikans ever follow it. Yet they expect others to do so.. This is getting so perverse that other cultures are now beginning to emulate it. Because it's everywhere..

Remember Dallas? I watched people who saw with fascination how business can function like that. Well, it can't in the long run, but throw enough money and resources at it and it works wonders in the short term, because it destroys the competition. But yeah, around 1998, when they got rid of the laws on making money by magic, most everything has gone to hell.. because now there are no constraints but making money.. any which way.. That's all that matters..

Igor Bundy , Sep 4 2019 3:54 utc | 47
You've got to be daft or bribed to use Intel CPUs in embedded systems. Coming from a Motorola CPU, the Intel chips were dinosaurs in every way, requiring the CPU to be almost twice as fast to get the same thing done.. Also, their interrupt handling was not up to par. A simple example was how the Commodore Amiga could read from disk without stuttering or slowing down anything else you were doing. I have never seen this fixed.. In fact, going from 8 MHz to 4 GHz seems to have fixed it by brute force. Yes, the 8 MHz Motorola CPU worked wonders when you had music, video and I/O all going at the same time. It's not just the CPU but the support chips, which don't lock up the bus. Why would anyone use Intel, when there are so many specific embedded controllers designed for such specific things?
imo , Sep 4 2019 4:00 utc | 48
Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper.

I do not travel much these days and find the cattle-class seating on these planes a major disincentive. Becoming aware of all these added technical issues I will now positively select for alternatives to 737 and bear the cost.

Joost , Sep 4 2019 4:25 utc | 50
I'm surprised Boeing stock still haven't taken nose dive

Posted by: Bob burger | Sep 3 2019 19:27 utc | 9

That is because the price is propped up by a $9 billion share buyback per year. Share buybacks are an effective scheme for airlifting all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes when you can funnel the cash into your own pockets?

Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics.

Henkie , Sep 4 2019 7:04 utc | 53
Hi, I am new here in writing but not in reading. About the 80286: where is the coprocessor, the 80287? How can the 80286 do IEEE math calculations? So how can it fly a controlled flight when it cannot calculate with accuracy? How is it possible that this system is certified? It should have at least an 80386 DX, not an SX!!!!
snake , Sep 4 2019 7:35 utc | 54
moved to Chicago in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance.

Jen @ 35 < ==

yes, the morality of the companies and their exclusive hold on a complicit or controlled government always defaults the government to supporting, enforcing and encouraging the principles of economic Zionism.

But it is more than just the corporate culture => the corporate fat cats 1. use the rule-making powers of the government to make law for them. Such laws create high-valued assets out of the pockets of the masses. The best known of those corporate uses of government involves the intangible property laws (copyright, patent, and government franchise). Government-generated copyright, franchise and patent laws are monopolies. So when the government subsidizes a successful R&D project, its findings are packaged up into a set of monopolies [copyrights, patents, privatized government franchises], which means that instead of 50 or more companies competing for the next increment in technology, only one gains the full advantage of that government research; only one can use or abuse it, and the patented and copyrighted technology is used to extract untold billions, in small increments, from the pockets of the public. 2. use the judicial power of governments and their courts, in both domestic and international settings, to police the use of, and to impose fake values on, intangible property monopolies. These government-made, privately owned monopoly rights (intangible property rights), generated from the pockets of the masses, do two things: they exclude, deny and prevent would-be competition, and they create value as a hidden revenue tax that passes to the privately held monopolist with each sale of a copyrighted, government-franchised, or patented service or product. Please note the one-two nature of this "use of government law-making powers to generate intangible private monopoly property rights".

Canthama , Sep 4 2019 10:37 utc | 56
There is no doubt Boeing has committed crimes with the 737 MAX; its arrogance and greed should be severely punished by the international community as an example to other global corporations. It represents the worst of corporate America, which places profits ahead of lives.
Christian J Chuba , Sep 4 2019 11:55 utc | 59
How is the U.S. keeping Russia out of the international market?

Iran and other sanctioned countries are a potential captive market and they have growth opportunities in what we sometimes call the non-aligned, emerging markets countries (Turkey, Africa, SE Asia, India, ...).

One thing I have learned is that the U.S. always games the system; we never play fair. So what did we do? Do their manufacturers use 1% U.S.-made parts that they then need for international certification?

BM , Sep 4 2019 12:48 utc | 60
Ultimately all of the issues in the news these days are the same one and the same issue - as the US gets closer and closer to the brink of catastrophic collapse they get ever more desperate. As they get more and more desperate they descend into what comes most naturally to the US - throughout its entire history - frenzied violence, total absence of morality, war, murder, genocide, and everything else that the US is so well known for (by those who are not blinded by exceptionalist propaganda).

The Hong Kong violence is a perfect example - it is impossible that a self-respecting nation-state could allow itself to be seen to descend into such idiotic degeneracy, and to so grossly flout the most basic human decency. Ergo, the US is not a self-respecting nation-state. It is a failed state.

I am certain the arrogance of Boeing reflects two things: (a) an assurance from the US government that the government will back them to the hilt, come what may, to make sure that the 737Max flies again; and (b) a threat that if Boeing fails to get the 737Max in the air despite that support, the entire top level management and board of directors will be jailed. Boeing know very well they cannot deliver. But just as the US government is desperate to avoid the inevitable collapse of the US, the Boeing top management are desperate to avoid jail. It is a charade.

It is time for international regulators to withdraw certification totally - after the problems are all fixed (I don't believe they ever will be), the plane needs complete new certification of every detail from the bottom up, at Boeing's expense, and with total openness from Boeing. The current Boeing management are not going to cooperate with that, therefore the international regulators need to demand a complete replacement of the management and board of directors as a condition for working with them.

Piotr Berman , Sep 4 2019 13:23 utc | 61
From ZeroHedge link:

If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word.

Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item."

And now Boeing still has this plane, instead of a modern plane, and the history of this plane is now tainted, as is its brand, and by extension, that of Boeing. But markets blow that off too. Nothing matters.

Companies are getting away each with their own thing. There are companies that are losing a ton of money and are burning tons of cash, with no indications that they will ever make money. And market valuations are just ludicrous.

======

Thus the Boeing issue is part of a much larger picture. Something systemic had to make "markets" less rational. And who is this "market"? In large part, fund managers racking their brains over how to create a "decent return" while the cost of borrowing and the returns on lending are super low. What remains are forms of real estate and stocks.

Overall, Boeing buy-backs exceeded 40 billion dollars; one could guess that half or a quarter of that would have sufficed to build a plane that logically combines the latest technologies, e.g. an entire frame designed to fit together with the engines, processors adequate for the information-processing load, hydraulics for steering that satisfy the force requirements in almost all circumstances, etc. New technologies also fail because they are not completely understood, but when the overall design is logical, with margins of safety, the faults can be eliminated.

Instead, the 737 was slowly modified toward failure, eliminating safety margins one by one.

morongobill , Sep 4 2019 14:08 utc | 63

Regarding the 80286 and the 737, don't forget that the air traffic control system and the ICBM system use old technology as well.

Seems our big systems have feet of old silicon.

Allan Bowman , Sep 4 2019 15:15 utc | 66
Boeing has apparently either never heard of, or ignores, a procedure that is mandatory in satellite design and design reviews: FMEA, or Failure Modes and Effects Analysis. It requires design engineers to document the impact of every potential failure and combination of failures, thereby highlighting everything from catastrophic effects to mere annoyances. Clearly Boeing has done none of this, and its troubles are a direct result. It can be assumed that its arrogant and incompetent management has not yet understood just how serious its behavior is for the future of the company.
fx , Sep 4 2019 16:08 utc | 69
Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics.

Posted by: Joost | Sep 4 2019 4:25 utc | 50

Well put!

Bemildred , Sep 4 2019 16:11 utc | 70
Computer modelling is what they are talking about in the cliche "Garbage in, garbage out".

The problem is not new, and it is well understood. What computer modelling is, is cheap and easy to fudge, and that is why it is popular with people who care a lot about money. Much of what is called "AI" is very similar in its limitations: a complicated way to fudge up the results you want, or something close enough for casual examination.

In particular cases where you have a well-defined and well-mathematized theory, you can get some useful results with models, as in physics and chemistry.

And they can be useful for "realistic" training situations, like aircraft simulators. The old story about wargame failures against Iran is another such situation. A lot of video games are big simulations in essence. But that is not reality, it's fake reality.

Trond , Sep 4 2019 17:01 utc | 79
@ SteveK9 71 "By the way, the problem was caused by Mitsubishi, who designed the heat exchangers."

Ahh. The furriners...

I once made the "mistake" of pointing out (in a comment under an article in Salon) that the reactors that exploded at Fukushima were made by GE and that GE people were still in charge of those reactors of American quality when they exploded. (The amerikans got out on one of the first planes out of the country.)

I have never seen so many angry replies to one of my comments. I even got e-mails for several weeks from angry Americans.

c1ue , Sep 4 2019 19:44 utc | 80
@Henkie #53 You need floating point for scientific calculations, but I really doubt the 737 is doing any scientific research. Also, a regular CPU can do mathematical calculations; it just isn't as fast and doesn't have the same capacity as a dedicated FPU. Another common use for FPUs is in live-action shooter games - the neo-physics portions use scientific-like calculations to create lifelike motion. I sold computer systems in the 1990s while in school - Doom was a significant driver for newer systems (as well as hedge fund types). Again, I don't see why an airplane needs this.
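For what it's worth, here is a minimal sketch (not from the thread, all values invented) of how fractional math is commonly done with scaled integers on an FPU-less CPU such as a bare 80286 without an 80287, which is roughly the point being made above:

    use strict;
    use warnings;
    use integer;                   # force integer arithmetic, as on a CPU with no FPU

    my $SCALE     = 1000;          # three implied decimal places
    my $airspeed  = 235_750;       # represents 235.750 (hypothetical units)
    my $factor    = 1_050;         # represents 1.050
    my $corrected = $airspeed * $factor / $SCALE;    # 247_537, i.e. 247.537 truncated

    printf "corrected: %d.%03d\n", $corrected / $SCALE, $corrected % $SCALE;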

[Sep 02, 2019] The Joel Test: 12 Steps to Better Code - Joel on Software by Joel Spolsky

Somewhat simplistic but still useful
Sep 02, 2019 | www.joelonsoftware.com
Wednesday, August 09, 2000

Have you ever heard of SEMA ? It's a fairly esoteric system for measuring how good a software team is. No, wait! Don't follow that link! It will take you about six years just to understand that stuff. So I've come up with my own, highly irresponsible, sloppy test to rate the quality of a software team. The great part about it is that it takes about 3 minutes. With all the time you save, you can go to medical school.

The Joel Test

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

The neat thing about The Joel Test is that it's easy to get a quick yes or no to each question. You don't have to figure out lines-of-code-per-day or average-bugs-per-inflection-point. Give your team 1 point for each "yes" answer. The bummer about The Joel Test is that you really shouldn't use it to make sure that your nuclear power plant software is safe.

A score of 12 is perfect, 11 is tolerable, but 10 or lower and you've got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

Of course, these are not the only factors that determine success or failure: in particular, if you have a great software team working on a product that nobody wants, well, people aren't going to want it. And it's possible to imagine a team of "gunslingers" that doesn't do any of this stuff that still manages to produce incredible software that changes the world. But, all else being equal, if you get these 12 things right, you'll have a disciplined team that can consistently deliver.

1. Do you use source control?
I've used commercial source control packages, and I've used CVS , which is free, and let me tell you, CVS is fine . But if you don't have source control, you're going to stress out trying to get programmers to work together. Programmers have no way to know what other people did. Mistakes can't be rolled back easily. The other neat thing about source control systems is that the source code itself is checked out on every programmer's hard drive -- I've never heard of a project using source control that lost a lot of code.

2. Can you make a build in one step?
By this I mean: how many steps does it take to make a shipping build from the latest source snapshot? On good teams, there's a single script you can run that does a full checkout from scratch, rebuilds every line of code, makes the EXEs, in all their various versions, languages, and #ifdef combinations, creates the installation package, and creates the final media -- CDROM layout, download website, whatever.

If the process takes any more than one step, it is prone to errors. And when you get closer to shipping, you want to have a very fast cycle of fixing the "last" bug, making the final EXEs, etc. If it takes 20 steps to compile the code, run the installation builder, etc., you're going to go crazy and you're going to make silly mistakes.

For this very reason, the last company I worked at switched from WISE to InstallShield: we required that the installation process be able to run, from a script, automatically, overnight, using the NT scheduler, and WISE couldn't run from the scheduler overnight, so we threw it out. (The kind folks at WISE assure me that their latest version does support nightly builds.)
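To make the one-step idea concrete, here is a minimal sketch of such a single-script build driver in Perl; every repository name, path and tool invocation is a placeholder, not something from the article:

    #!/usr/bin/perl
    # One-step build driver (sketch).  Each step dies on failure so a broken
    # build is obvious immediately.
    use strict;
    use warnings;

    sub run {
        my ($cmd) = @_;
        print "==> $cmd\n";
        system($cmd) == 0 or die "Build step failed: $cmd\n";
    }

    run('cvs -q checkout myproduct');                # full checkout from scratch
    run('cd myproduct && make clean && make all');   # rebuild every line of code
    run('cd myproduct && make installer');           # create the installation package
    print "Build finished OK\n";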

3. Do you make daily builds?
When you're using source control, sometimes one programmer accidentally checks in something that breaks the build. For example, they've added a new source file, and everything compiles fine on their machine, but they forgot to add the source file to the code repository. So they lock their machine and go home, oblivious and happy. But nobody else can work, so they have to go home too, unhappy.

Breaking the build is so bad (and so common) that it helps to make daily builds, to insure that no breakage goes unnoticed. On large teams, one good way to insure that breakages are fixed right away is to do the daily build every afternoon at, say, lunchtime. Everyone does as many checkins as possible before lunch. When they come back, the build is done. If it worked, great! Everybody checks out the latest version of the source and goes on working. If the build failed, you fix it, but everybody can keep on working with the pre-build, unbroken version of the source.

On the Excel team we had a rule that whoever broke the build, as their "punishment", was responsible for babysitting the builds until someone else broke it. This was a good incentive not to break the build, and a good way to rotate everyone through the build process so that everyone learned how it worked.

Read more about daily builds in my article Daily Builds are Your Friend .
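A rough sketch of what the scheduled wrapper around such a build might look like (the build script name, log file and mail address are all invented; cron or the NT scheduler can run it at lunchtime or overnight):

    #!/usr/bin/perl
    # Daily-build wrapper (sketch): run the one-step build, keep a log,
    # and nag the whole team if the build broke.
    use strict;
    use warnings;

    my $log = 'daily-build.log';
    my $ok  = system("perl build.pl > $log 2>&1") == 0;

    unless ($ok) {
        open my $mail, '|-', '/usr/sbin/sendmail -t'
            or die "cannot start sendmail: $!";
        print $mail "To: dev-team\@example.com\n",
                    "Subject: DAILY BUILD BROKEN\n\n",
                    "The daily build failed; see $log for details.\n";
        close $mail;
    }
    exit($ok ? 0 : 1);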

4. Do you have a bug database?
I don't care what you say. If you are developing code, even on a team of one, without an organized database listing all known bugs in the code, you are going to ship low quality code. Lots of programmers think they can hold the bug list in their heads. Nonsense. I can't remember more than two or three bugs at a time, and the next morning, or in the rush of shipping, they are forgotten. You absolutely have to keep track of bugs formally.

Bug databases can be complicated or simple. A minimal useful bug database must include the following data for every bug:

  1. complete steps to reproduce the bug
  2. expected behavior
  3. observed (buggy) behavior
  4. who it's assigned to
  5. whether it has been fixed or not

If the complexity of bug tracking software is the only thing stopping you from tracking your bugs, just make a simple 5 column table with these crucial fields and start using it .
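If it helps, here is one possible minimal version of that table as a throwaway Perl/DBI script against SQLite; the table and column names are my own, not anything prescribed by the article:

    #!/usr/bin/perl
    # Minimal bug database (sketch): one SQLite file, one five-column table.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=bugs.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS bugs (
            steps_to_reproduce TEXT,
            expected_behavior  TEXT,
            observed_behavior  TEXT,
            assigned_to        TEXT,
            fixed              INTEGER DEFAULT 0
        )
    });

    $dbh->do(q{
        INSERT INTO bugs (steps_to_reproduce, expected_behavior,
                          observed_behavior, assigned_to)
        VALUES (?, ?, ?, ?)
    }, undef,
        'Open a two-line document, press PageDown',
        'Cursor moves to the end of the document',
        'Editor crashes',
        'mutt');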

For more on bug tracking, read Painless Bug Tracking .

5. Do you fix bugs before writing new code?
The very first version of Microsoft Word for Windows was considered a "death march" project. It took forever. It kept slipping. The whole team was working ridiculous hours, the project was delayed again, and again, and again, and the stress was incredible. When the dang thing finally shipped, years late, Microsoft sent the whole team off to Cancun for a vacation, then sat down for some serious soul-searching.

What they realized was that the project managers had been so insistent on keeping to the "schedule" that programmers simply rushed through the coding process, writing extremely bad code, because the bug fixing phase was not a part of the formal schedule. There was no attempt to keep the bug-count down. Quite the opposite. The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote "return 12;" and waited for the bug report to come in about how his function is not always correct. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as "infinite defects methodology".

To correct the problem, Microsoft universally adopted something called a "zero defects methodology". Many of the programmers in the company giggled, since it sounded like management thought they could reduce the bug count by executive fiat. Actually, "zero defects" meant that at any given time, the highest priority is to eliminate bugs before writing any new code. Here's why.

In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix.

For example, when you make a typo or syntax error that the compiler catches, fixing it is basically trivial.

When you have a bug in your code that you see the first time you try to run it, you will be able to fix it in no time at all, because all the code is still fresh in your mind.

If you find a bug in some code that you wrote a few days ago, it will take you a while to hunt it down, but when you reread the code you wrote, you'll remember everything and you'll be able to fix the bug in a reasonable amount of time.

But if you find a bug in code that you wrote a few months ago, you'll probably have forgotten a lot of things about that code, and it's much harder to fix. By that time you may be fixing somebody else's code, and they may be in Aruba on vacation, in which case, fixing the bug is like science: you have to be slow, methodical, and meticulous, and you can't be sure how long it will take to discover the cure.

And if you find a bug in code that has already shipped , you're going to incur incredible expense getting it fixed.

That's one reason to fix bugs right away: because it takes less time. There's another reason, which relates to the fact that it's easier to predict how long it will take to write new code than to fix an existing bug. For example, if I asked you to predict how long it would take to write the code to sort a list, you could give me a pretty good estimate. But if I asked you to predict how long it would take to fix that bug where your code doesn't work if Internet Explorer 5.5 is installed, you can't even guess, because you don't know (by definition) what's causing the bug. It could take 3 days to track it down, or it could take 2 minutes.

What this means is that if you have a schedule with a lot of bugs remaining to be fixed, the schedule is unreliable. But if you've fixed all the known bugs, and all that's left is new code, then your schedule will be stunningly more accurate.

Another great thing about keeping the bug count at zero is that you can respond much faster to competition. Some programmers think of this as keeping the product ready to ship at all times. Then if your competitor introduces a killer new feature that is stealing your customers, you can implement just that feature and ship on the spot, without having to fix a large number of accumulated bugs.

6. Do you have an up-to-date schedule?
Which brings us to schedules. If your code is at all important to the business, there are lots of reasons why it's important to the business to know when the code is going to be done. Programmers are notoriously crabby about making schedules. "It will be done when it's done!" they scream at the business people.

Unfortunately, that just doesn't cut it. There are too many planning decisions that the business needs to make well in advance of shipping the code: demos, trade shows, advertising, etc. And the only way to do this is to have a schedule, and to keep it up to date.

The other crucial thing about having a schedule is that it forces you to decide what features you are going to do, and then it forces you to pick the least important features and cut them rather than slipping into featuritis (a.k.a. scope creep).

Keeping schedules does not have to be hard. Read my article Painless Software Schedules , which describes a simple way to make great schedules.

7. Do you have a spec?
Writing specs is like flossing: everybody agrees that it's a good thing, but nobody does it.

I'm not sure why this is, but it's probably because most programmers hate writing documents. As a result, when teams consisting solely of programmers attack a problem, they prefer to express their solution in code, rather than in documents. They would much rather dive in and write code than produce a spec first.

At the design stage, when you discover problems, you can fix them easily by editing a few lines of text. Once the code is written, the cost of fixing problems is dramatically higher, both emotionally (people hate to throw away code) and in terms of time, so there's resistance to actually fixing the problems. Software that wasn't built from a spec usually winds up badly designed and the schedule gets out of control. This seems to have been the problem at Netscape, where the first four versions grew into such a mess that management stupidly decided to throw out the code and start over. And then they made this mistake all over again with Mozilla, creating a monster that spun out of control and took several years to get to alpha stage.

My pet theory is that this problem can be fixed by teaching programmers to be less reluctant writers by sending them off to take an intensive course in writing . Another solution is to hire smart program managers who produce the written spec. In either case, you should enforce the simple rule "no code without spec".

Learn all about writing specs by reading my 4-part series .

8. Do programmers have quiet working conditions?
There are extensively documented productivity gains provided by giving knowledge workers space, quiet, and privacy. The classic software management book Peopleware documents these productivity benefits extensively.

Here's the trouble. We all know that knowledge workers work best by getting into "flow", also known as being "in the zone", where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done. Writers, programmers, scientists, and even basketball players will tell you about being in the zone.

The trouble is, getting into "the zone" is not easy. When you try to measure it, it looks like it takes an average of 15 minutes to start working at maximum productivity. Sometimes, if you're tired or have already done a lot of creative work that day, you just can't get into the zone and you spend the rest of your work day fiddling around, reading the web, playing Tetris.

The other trouble is that it's so easy to get knocked out of the zone. Noise, phone calls, going out for lunch, having to drive 5 minutes to Starbucks for coffee, and interruptions by coworkers -- especially interruptions by coworkers -- all knock you out of the zone. If a coworker asks you a question, causing a 1 minute interruption, but this knocks you out of the zone badly enough that it takes you half an hour to get productive again, your overall productivity is in serious trouble. If you're in a noisy bullpen environment like the type that caffeinated dotcoms love to create, with marketing guys screaming on the phone next to programmers, your productivity will plunge as knowledge workers get interrupted time after time and never get into the zone.

With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed.

Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, lets put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds).

Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!
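Joel's arithmetic, restated as a trivial script (the numbers are his, the code is only an illustration):

    use strict;
    use warnings;

    my $refocus = 15 * 60;   # seconds of lost flow after any interruption
    my $lookup  = 30;        # Mutt looks the function name up himself
    my $ask     = 15;        # Mutt asks Jeff across the cubicle

    my $open_plan = $ask + $refocus;   # 915 seconds lost between the two of them
    my $offices   = $lookup;           #  30 seconds lost; Jeff stays in the zone

    printf "open plan: %d s, private offices: %d s\n", $open_plan, $offices;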

9. Do you use the best tools money can buy?
Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer. If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion , which will suck them in and kill hours of productivity.

Debugging GUI code with a single monitor system is painful if not impossible. If you're writing GUI code, two monitors will make things much easier.

Most programmers eventually have to manipulate bitmaps for icons or toolbars, and most programmers don't have a good bitmap editor available. Trying to use Microsoft Paint to manipulate bitmaps is a joke, but that's what most programmers have to do.

At my last job , the system administrator kept sending me automated spam complaining that I was using more than ... get this ... 220 megabytes of hard drive space on the server. I pointed out that given the price of hard drives these days, the cost of this space was significantly less than the cost of the toilet paper I used. Spending even 10 minutes cleaning up my directory would be a fabulous waste of productivity.

Top notch development teams don't torture their programmers. Even minor frustrations caused by using underpowered tools add up, making programmers grumpy and unhappy. And a grumpy programmer is an unproductive programmer.

To add to all this... programmers are easily bribed by giving them the coolest, latest stuff. This is a far cheaper way to get them to work for you than actually paying competitive salaries!

10. Do you have testers?
If your team doesn't have dedicated testers, at least one for every two or three programmers, you are either shipping buggy products, or you're wasting money by having $100/hour programmers do work that can be done by $30/hour testers. Skimping on testers is such an outrageous false economy that I'm simply blown away that more people don't recognize it.

Read Top Five (Wrong) Reasons You Don't Have Testers , an article I wrote about this subject.

11. Do new candidates write code during their interview?
Would you hire a magician without asking them to show you some magic tricks? Of course not.

Would you hire a caterer for your wedding without tasting their food? I doubt it. (Unless it's Aunt Marge, and she would hate you for ever if you didn't let her make her "famous" chopped liver cake).

Yet, every day, programmers are hired on the basis of an impressive resumé or because the interviewer enjoyed chatting with them. Or they are asked trivia questions ("what's the difference between CreateDialog() and DialogBox()?") which could be answered by looking at the documentation. You don't care if they have memorized thousands of trivia about programming, you care if they are able to produce code. Or, even worse, they are asked "AHA!" questions: the kind of questions that seem easy when you know the answer, but if you don't know the answer, they are impossible.

Please, just stop doing this . Do whatever you want during interviews, but make the candidate write some code . (For more advice, read my Guerrilla Guide to Interviewing .)

12. Do you do hallway usability testing?
A hallway usability test is where you grab the next person that passes by in the hallway and force them to try to use the code you just wrote. If you do this to five people, you will learn 95% of what there is to learn about usability problems in your code.

Good user interface design is not as hard as you would think, and it's crucial if you want customers to love and buy your product. You can read my free online book on UI design , a short primer for programmers.

But the most important thing about user interfaces is that if you show your program to a handful of people, (in fact, five or six is enough) you will quickly discover the biggest problems people are having. Read Jakob Nielsen's article explaining why. Even if your UI design skills are lacking, as long as you force yourself to do hallway usability tests, which cost nothing, your UI will be much, much better.

Four Ways To Use The Joel Test

  1. Rate your own software organization, and tell me how it rates, so I can gossip.
  2. If you're the manager of a programming team, use this as a checklist to make sure your team is working as well as possible. When you start rating a 12, you can leave your programmers alone and focus full time on keeping the business people from bothering them.
  3. If you're trying to decide whether to take a programming job, ask your prospective employer how they rate on this test. If it's too low, make sure that you'll have the authority to fix these things. Otherwise you're going to be frustrated and unproductive.
  4. If you're an investor doing due diligence to judge the value of a programming team, or if your software company is considering merging with another, this test can provide a quick rule of thumb.

[Aug 31, 2019] The Substance of Style - Slashdot

Aug 31, 2019 | news.slashdot.org

Kazoo the Clown ( 644526 ) , Thursday October 16, 2003 @04:35PM ( #7233354 )

AESTHETICS of STYLE? Try CORRUPTION of GREED ( Score: 3 , Insightful)

You're looking at the downside of the "invisible hand" here, methinks.

Take anything by the Sharper Image for example. Their corporate motto is apparently "Style over Substance", though they are only one of the most blatant. A specifically good example would be their "Ionic Breeze." Selling points? Quieter than HEPA filters (that's because HEPA filters actually DO something). Empty BOXES are quiet too, and pollute your air less. Standardized tests show the Ionic Breeze's ability to remove airborne particles to be almost negligible. Tests also show it doesn't trap the particles it does catch very well such that they can be re-introduced to the environment. It produces levels of the oxidant gas ozone that accumulate over time, reportedly less than 0.05 ppm after 24 hours, but what after 48? The EPA's safe limit is 0.08, are you sure your ventilation is sufficient to keep it below that level if you have it on all the time? Do you trust the EPA's limit as being actually safe? (they dropped it to 0.08 from 0.12 in 1997 as apparently, 0.12 wasn't good enough). And what does it matter if the darn thing doesn't even remove dust and germs out of your environment worth a darn , because most dust and germs are not airborne? Oh, but it LOOKS SO SEXY.

There are countless products that people buy not because they are tuned into the brilliant aesthetics , but because the intimidation value of the brilliant marketing campaigns that convince them that if they don't have the product, they're deprived. That they need it to shallowly show off they have good taste when they really have no taste at all except that which was sold to them.

[Aug 31, 2019] Ask Slashdot How Would You Teach 'Best Practices' For Programmers - Slashdot

Aug 31, 2019 | ask.slashdot.org

Strider- ( 39683 ) , Sunday February 25, 2018 @08:43AM ( #56184459 )

Re:Back to basics ( Score: 5 , Insightful)

Oh hell no. So-called "self-documenting code" isn't. You can write the most comprehensible, clear code in the history of mankind, and that's still not good enough.

The issue is that your code only documents what the code is doing, not what it is supposed to be doing. You wouldn't believe how many subtle issues I've come across over the decades where on the face of it everything should have been good, but in reality the code was behaving slightly differently than what was intended.

JaredOfEuropa ( 526365 ) , Sunday February 25, 2018 @08:58AM ( #56184487 ) Journal
Re:Back to basics ( Score: 5 , Insightful)
The issue is that your code only documents what the code is doing, not what it is supposed to be doing

Mod this up. I aim to document my intent, i.e. what the code is supposed to do. Not only does this help catch bugs within a procedure, but it also forces me to think a little bit about the purpose of each method or function. It helps catch bugs or inconsistencies in the software architecture as well.
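A tiny illustration of the difference, in Perl (the business rule and the data are made up):

    use strict;
    use warnings;

    my @orders = (
        { total => 100, status => 'open'      },
        { total =>  40, status => 'cancelled' },
    );

    # A comment that merely restates the code: "loop over orders and add totals".
    # A comment that states the intent: cancelled orders stay in @orders for
    # auditing, so the invoice total must skip them rather than sum everything.
    my $invoice_total = 0;
    for my $order (@orders) {
        next if $order->{status} eq 'cancelled';
        $invoice_total += $order->{total};
    }
    print "$invoice_total\n";   # 100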

johnsnails ( 1715452 ) , Sunday February 25, 2018 @09:32AM ( #56184523 )
Re: Back to basics ( Score: 2 )

I agree with everything you said besides being short (whatever that is precisely). Sometimes a good comment will be a solid 4-5 line paragraph. But maybe I should fix the code instead that needs that long of a comment.

pjt33 ( 739471 ) , Sunday February 25, 2018 @04:06PM ( #56184861 )
Re: Back to basics ( Score: 2 )

I once wrote a library for (essentially) GIS which was full of comments that were 20 lines or longer. When the correctness of the code depends on theorems in non-Euclidean geometry and you can't assume that the maintainer will know any, I don't think it's a bad idea to make the proofs quite explicit.

[Aug 28, 2019] Carp::Assert - executable comments - metacpan.org

Aug 28, 2019 | metacpan.org


NAME

Carp::Assert - executable comments

SYNOPSIS

    # Assertions are on.
    use Carp::Assert;

    $next_sunrise_time = sunrise();

    # Assert that the sun must rise in the next 24 hours.
    assert(($next_sunrise_time - time) < 24*60*60) if DEBUG;

    # Assert that your customer's primary credit card is active
    affirm {
        my @cards = @{$customer->credit_cards};
        $cards[0]->is_active;
    };

    # Assertions are off.
    no Carp::Assert;

    $next_pres = divine_next_president();

    # Assert that if you predict Dan Quayle will be the next president
    # your crystal ball might need some polishing.  However, since
    # assertions are off, IT COULD HAPPEN!
    shouldnt($next_pres, 'Dan Quayle') if DEBUG;

DESCRIPTION

"We are ready for any unforseen event that may or may not occur." - Dan Quayle

Carp::Assert is intended for a purpose like the ANSI C library assert.h . If you're already familiar with assert.h, then you can probably skip this and go straight to the FUNCTIONS section.

Assertions are the explicit expressions of your assumptions about the reality your program is expected to deal with, and a declaration of those which it is not. They are used to prevent your program from blissfully processing garbage inputs (garbage in, garbage out becomes garbage in, error out) and to tell you when you've produced garbage output. (If I was going to be a cynic about Perl and the user nature, I'd say there are no user inputs but garbage, and Perl produces nothing but...)

An assertion is used to prevent the impossible from being asked of your code, or at least tell you when it does. For example:

  # Take the square root of a number.
  sub my_sqrt {
      my ($num) = shift;

      # the square root of a negative number is imaginary.
      assert($num >= 0);

      return sqrt $num;
  }

The assertion will warn you if a negative number was handed to your subroutine, a reality the routine has no intention of dealing with.

An assertion should also be used as something of a reality check, to make sure what your code just did really did happen:

  open(FILE, $filename) || die $!;
  @stuff = <FILE>;
  @stuff = do_something(@stuff);

  # I should have some stuff.
  assert( @stuff > 0 );

The assertion makes sure you have some @stuff at the end. Maybe the file was empty, maybe do_something() returned an empty list... either way, the assert() will give you a clue as to where the problem lies, rather than 50 lines further down when you wonder why your program isn't printing anything.

Since assertions are designed for debugging and will remove themselves from production code, your assertions should be carefully crafted so as to not have any side-effects, change any variables, or otherwise have any effect on your program. Here is an example of a bad assertion:

assert( $error = 1 if $king ne 'Henry' ); # Bad!

It sets an error flag which may then be used somewhere else in your program. When you shut off your assertions with the $DEBUG flag, $error will no longer be set.

Here's another example of bad use:

assert( $next_pres ne 'Dan Quayle' or goto Canada); # Bad!

This assertion has the side effect of moving to Canada should it fail. This is a very bad assertion since error handling should not be placed in an assertion, nor should it have side-effects.

In short, an assertion is an executable comment. For instance, instead of writing this

  # $life ends with a '!'
  $life = begin_life();

you'd replace the comment with an assertion which enforces the comment.

  $life = begin_life();
  assert( $life =~ /!$/ );

FUNCTIONS

assert
    assert(EXPR) if DEBUG;
    assert(EXPR, $name) if DEBUG;

assert's functionality is affected by the compile-time value of the DEBUG constant, controlled by saying use Carp::Assert or no Carp::Assert . In the former case, assert will function as below. Otherwise, the assert function will compile itself out of the program. See "Debugging vs Production" for details.

Give assert an expression, assert will Carp::confess() if that expression is false, otherwise it does nothing. (DO NOT use the return value of assert for anything, I mean it... really!).

The error from assert will look something like this:

    Assertion failed!
            Carp::Assert::assert(0) called at prog line 23
            main::foo called at prog line 50

Indicating that in the file "prog" an assert failed inside the function main::foo() on line 23 and that foo() was in turn called from line 50 in the same file.

If given a $name, assert() will incorporate this into your error message, giving users something of a better idea what's going on.

    assert( Dogs->isa( 'People' ), 'Dogs are people, too!' ) if DEBUG;
    # Result - "Assertion (Dogs are people, too!) failed!"
affirm
    affirm BLOCK if DEBUG;
    affirm BLOCK $name if DEBUG;

Very similar to assert(), but instead of taking just a simple expression it takes an entire block of code and evaluates it to make sure it's true. This can allow more complicated assertions than assert() can without letting the debugging code leak out into production and without having to smash together several statements into one.

    affirm {
        my $customer = Customer->new($customerid);
        my @cards = $customer->credit_cards;
        grep { $_->is_active } @cards;
    } "Our customer has an active credit card";

affirm() also has the nice side effect that if you forget the if DEBUG suffix its arguments will not be evaluated at all. This can be nice if you stick affirm()s with expensive checks into hot loops and other time-sensitive parts of your program.

If the $name is left off and your Perl version is 5.6 or higher the affirm() diagnostics will include the code being affirmed.

should
shouldnt
    should  ( $this , $shouldbe )   if DEBUG;
    shouldnt( $this , $shouldntbe ) if DEBUG;

Similar to assert(), these are specifically for simple "this should be that" or "this should be anything but that" styles of assertion.

Due to Perl's lack of a good macro system, assert() can only report where something failed, but it can't report what failed or how . should() and shouldnt() can produce more informative error messages:

    Assertion ( 'this' should be 'that' !) failed!
            Carp::Assert::should( 'this' , 'that' ) called at moof line 29
            main::foo() called at moof line 58

So this:

should( $this , $that ) if DEBUG;

is similar to this:

assert( $this eq $that ) if DEBUG;

except for the better error message.

Currently, should() and shouldnt() can only do simple eq and ne tests (respectively). Future versions may allow regexes.

Debugging vs Production

Because assertions are extra code and because it is sometimes necessary to place them in 'hot' portions of your code where speed is paramount, Carp::Assert provides the option to remove its assert() calls from your program.

So, we provide a way to force Perl to inline the switched off assert() routine, thereby removing almost all performance impact on your production code.

    no Carp::Assert;   # assertions are off.
    assert(1==1) if DEBUG;

DEBUG is a constant set to 0. Adding the 'if DEBUG' condition on your assert() call gives perl the cue to go ahead and remove the assert() call from your program entirely, since the if conditional will always be false.

    # With C<no Carp::Assert> the assert() has no impact.
    for (1..100) {
        assert( do_some_really_time_consuming_check ) if DEBUG;
    }

If if DEBUG gets too annoying, you can always use affirm().

    # Once again, affirm() has (almost) no impact with C<no Carp::Assert>
    for (1..100) {
        affirm { do_some_really_time_consuming_check };
    }

Another way to switch off all asserts, system wide, is to define the NDEBUG or the PERL_NDEBUG environment variable.

You can safely leave out the "if DEBUG" part, but then your assert() function will always execute (its arguments will be evaluated and time spent). To get around this, use affirm(). You still have the overhead of calling a function but at least its arguments will not be evaluated.

Differences from ANSI C

assert() is intended to act like the function from ANSI C fame. Unfortunately, due to Perl's lack of macros or strong inlining, it's not nearly as unobtrusive.

Well, the obvious one is the "if DEBUG" part. This is the cleanest way I could think of to cause each assert() call and its arguments to be removed from the program at compile-time, like the ANSI C macro does.

Also, this version of assert does not report the statement which failed, just the line number and call frame via Carp::confess. You can't do assert('$a == $b') because $a and $b will probably be lexical, and thus unavailable to assert(). But with Perl, unlike C, you always have the source to look through, so the need isn't as great.

EFFICIENCY

With no Carp::Assert (or NDEBUG) and using the if DEBUG suffixes on all your assertions, Carp::Assert has almost no impact on your production code. I say almost because it does still add some load-time to your code (I've tried to reduce this as much as possible).

If you forget the if DEBUG on an assert() , should() or shouldnt() , its arguments are still evaluated and thus will impact your code. You'll also have the extra overhead of calling a subroutine (even if that subroutine does nothing).

Forgetting the if DEBUG on an affirm() is not so bad. While you still have the overhead of calling a subroutine (one that does nothing) it will not evaluate its code block and that can save a lot.

Try to remember the if DEBUG .

ENVIRONMENT


NDEBUG
Defining NDEBUG switches off all assertions. It has the same effect as changing "use Carp::Assert" to "no Carp::Assert", but it affects all code.

PERL_NDEBUG
Same as NDEBUG, and will override it. It's provided to give you something which won't conflict with any C programs you might be working on at the same time.

BUGS, CAVEATS and other MUSINGS

Conflicts with POSIX.pm

The POSIX module exports an assert routine which will conflict with Carp::Assert if both are used in the same namespace. If you are using both together, prevent POSIX from exporting like so:

    use POSIX ();
    use Carp::Assert;

Since POSIX exports way too much, you should be using it like that anyway.

affirm and $^S

affirm() mucks with the expression's caller and it is run in an eval so anything that checks $^S will be wrong.

shouldn't

Yes, there is a shouldn't routine. It mostly works, but you must put the if DEBUG after it.

missing if DEBUG

It would be nice if we could warn about missing if DEBUG .

SEE ALSO

assert.h - the wikipedia page about assert.h .

Carp::Assert::More provides a set of convenience functions that are wrappers around Carp::Assert .

Sub::Assert provides support for subroutine pre- and post-conditions. The documentation says it's slow.

PerlX::Assert provides compile-time assertions, which are usually optimised away at compile time. Currently part of the Moops distribution, but may get its own distribution sometime in 2014.

Devel::Assert also provides an assert function, for Perl >= 5.8.1.

assertions provides an assertion mechanism for Perl >= 5.9.0.

REPOSITORY

https://github.com/schwern/Carp-Assert

COPYRIGHT

Copyright 2001-2007 by Michael G Schwern.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

See http://dev.perl.org/licenses/

AUTHOR

Michael G Schwern

[Aug 27, 2019] Retire your debugger, log smartly with Log::Log4perl! by Michael Schilli

This is a large, currently unmaintained subsystem (the last changes were made on Feb 21, 2017) of questionable value for simple scripts; the main problems are overcomplexity and a large number of dependencies. It makes things way too complex for simple applications.
It still might make perfect sense for very complex applications.
Sep 11, 2002 | www.perl.com

You've rolled out an application and it produces mysterious, sporadic errors? That's pretty common, even if fairly well-tested applications are exposed to real-world data. How can you track down when and where exactly your problem occurs? What kind of user data is it caused by? A debugger won't help you there.

And you don't want to keep track of only bad cases. It's helpful to log all types of meaningful incidents while your system is running in production, in order to extract statistical data from your logs later. Or, what if a problem only happens after a certain sequence of 'good' cases? Especially in dynamic environments like the Web, anything can happen at any time and you want a footprint of every event later, when you're counting the corpses.

What you need is well-architected logging : Log statements in your code and a logging package like Log::Log4perl providing a "remote-control," which allows you to turn on previously inactive logging statements, increase or decrease their verbosity independently in different parts of the system, or turn them back off entirely. Certainly without touching your system's code – and even without restarting it.

However, with traditional logging systems, the amount of data written to the logs can be overwhelming. In fact, turning on low-level-logging on a system under heavy load can cause it to slow down to a crawl or even crash.

Log::Log4perl is different. It is a pure Perl port of the widely popular Apache/Jakarta log4j library [3] for Java, a project made public in 1999, which has been actively supported and enhanced over the years by a team around head honcho Ceki Gülcü.

The comforting facts about log4j are that it's really well thought out, it's the alternative logging standard for Java and it's been in use for years with numerous projects. If you don't like Java, then don't worry, you're not alone – the Log::Log4perl authors (yours truly among them) are all Perl hardliners who made sure Log::Log4perl is real Perl.

In the spirit of log4j , Log::Log4perl addresses the shortcomings of typical ad-hoc or homegrown logging systems by providing three mechanisms to control the amount of data being logged and where it ends up: logging levels (priorities), categories tied to different parts of the system, and appenders that determine where the messages go.

In combination, these three control mechanisms turn out to be very powerful. They allow you to control the logging behavior of even the most complex applications at a granular level. However, it takes time to get used to the concept, so let's start the easy way:

Getting Your Feet Wet With Log4perl

If you've used logging before, then you're probably familiar with logging priorities or levels . Each log incident is assigned a level. If this incident level is higher than the system's logging level setting (typically initialized at system startup), then the message is logged, otherwise it is suppressed.

Log::Log4perl defines five logging levels, listed here from low to high:

    DEBUG
    INFO
    WARN
    ERROR
    FATAL

Let's assume that you decide at system startup that only messages of level WARN and higher are supposed to make it through. If your code then contains a log statement with priority DEBUG, then it won't ever be executed. However, if you choose at some point to bump up the amount of detail, then you can just set your system's logging priority to DEBUG and you will see these DEBUG messages starting to show up in your logs, too.
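A minimal sketch of that behavior, using Log::Log4perl's :easy mode (the file name and messages are made up for illustration):

use strict;
use warnings;
use Log::Log4perl qw(:easy);

# Initialize the root logger at WARN: DEBUG and INFO are suppressed,
# while WARN, ERROR and FATAL are appended to the log file.
Log::Log4perl->easy_init({ level => $WARN, file => '>> myapp.log' });

DEBUG "fetching user record";            # suppressed at the WARN setting
WARN  "user record is missing a field";  # logged
ERROR "cannot open user database";       # logged

Re-initializing with $DEBUG (or pointing Log::Log4perl at a configuration file) bumps up the detail without touching the log statements themselves.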

... ... ...

[Aug 27, 2019] perl defensive programming (die, assert, croak) - Stack Overflow

Aug 27, 2019 | stackoverflow.com



Zaid ,Feb 23, 2014 at 17:11

What is the best (or recommended) approach to do defensive programming in perl? For example if I have a sub which must be called with a (defined) SCALAR, an ARRAYREF and an optional HASHREF.

Three of the approaches I have seen:

sub test1 {
    die if !(@_ == 2 || @_ == 3);
    my ($scalar, $arrayref, $hashref) = @_;
    die if !defined($scalar) || ref($scalar);
    die if ref($arrayref) ne 'ARRAY';
    die if defined($hashref) && ref($hashref) ne 'HASH';
    #do s.th with scalar, arrayref and hashref
}

sub test2 {
    Carp::assert(@_ == 2 || @_ == 3) if DEBUG;
    my ($scalar, $arrayref, $hashref) = @_;
    if(DEBUG) {
        Carp::assert defined($scalar) && !ref($scalar);
        Carp::assert ref($arrayref) eq 'ARRAY';
        Carp::assert !defined($hashref) || ref($hashref) eq 'HASH';
    }
    #do s.th with scalar, arrayref and hashref
}

sub test3 {
    my ($scalar, $arrayref, $hashref) = @_;
    (@_ == 2 || @_ == 3 && defined($scalar) && !ref($scalar) && ref($arrayref) eq 'ARRAY' && (!defined($hashref) || ref($hashref) eq 'HASH'))
        or Carp::croak 'usage: test3(SCALAR, ARRAYREF, [HASHREF])';
    #do s.th with scalar, arrayref and hashref
}

tobyink ,Feb 23, 2014 at 21:44

use Params::Validate qw(:all);

sub Yada {
   my (...)=validate_pos(@_,{ type=>SCALAR },{ type=>ARRAYREF },{ type=>HASHREF,optional=>1 });
   ...
}

ikegami ,Feb 23, 2014 at 17:33

I wouldn't use any of them. Aside from not accepting many array and hash references, the checks you used are almost always redundant.
>perl -we"use strict; sub { my ($x) = @_; my $y = $x->[0] }->( 'abc' )"
Can't use string ("abc") as an ARRAY ref nda"strict refs" in use at -e line 1.

>perl -we"use strict; sub { my ($x) = @_; my $y = $x->[0] }->( {} )"
Not an ARRAY reference at -e line 1.

The only advantage to checking is that you can use croak to show the caller in the error message.


Proper way to check if you have a reference to an array:

defined($x) && eval { @$x; 1 }

Proper way to check if you have a reference to a hash:

defined($x) && eval { %$x; 1 }
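Putting those checks to work inside a subroutine might look like this (a sketch; the name test4 and the usage string are invented):

use Carp qw(croak);

sub test4 {
    my ($scalar, $arrayref, $hashref) = @_;
    croak 'usage: test4(SCALAR, ARRAYREF, [HASHREF])'
        unless defined($scalar) && !ref($scalar)
            && defined($arrayref) && eval { @$arrayref; 1 }
            && (!defined($hashref) || eval { %$hashref; 1 });
    #do s.th with scalar, arrayref and hashref
}

Unlike a ref($x) eq 'ARRAY' test, the eval form also accepts blessed objects that behave like array or hash references, which is the point being made above.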

Borodin ,Feb 23, 2014 at 17:23

None of the options you show display any message to give a reason for the failure, which I think is paramount.

It is also preferable to use croak instead of die from within library subroutines, so that the error is reported from the point of view of the caller.

I would replace all occurrences of if ! with unless . The former is a C programmer's habit.

I suggest something like this

sub test1 {
    croak "Incorrect number of parameters" unless @_ == 2 or @_ == 3;
    my ($scalar, $arrayref, $hashref) = @_;
    croak "Invalid first parameter" unless $scalar and not ref $scalar;
    croak "Invalid second parameter" unless $arrayref eq 'ARRAY';
    croak "Invalid third parameter" if defined $hashref and ref $hashref ne 'HASH';

    # do s.th with scalar, arrayref and hashref
}

[Aug 27, 2019] What Is Defensive Programming

Notable quotes:
"... Defensive programming is a method of prevention, rather than a form of cure. Compare this to debugging -- the act of removing bugs after they've bitten. Debugging is all about finding a cure. ..."
"... Defensive programming saves you literally hours of debugging and lets you do more fun stuff instead. Remember Murphy: If your code can be used incorrectly, it will be. ..."
"... Working code that runs properly, but ever-so-slightly slower, is far superior to code that works most of the time but occasionally collapses in a shower of brightly colored sparks ..."
"... Defensive programming avoids a large number of security problems -- a serious issue in modern software development. ..."
Aug 26, 2019 | Amazon.com

Originally from: Code Craft: The Practice of Writing Excellent Code, by Pete Goodliffe (Amazon.com)

Okay, defensive programming won't remove program failures altogether. But problems will become less of a hassle and easier to fix. Defensive programmers catch falling snowflakes rather than get buried under an avalanche of errors.

Defensive programming is a method of prevention, rather than a form of cure. Compare this to debugging -- the act of removing bugs after they've bitten. Debugging is all about finding a cure.

WHAT DEFENSIVE PROGRAMMING ISN'T

There are a few common misconceptions about defensive programming . Defensive programming is not:

Error checking
If there are error conditions that might arise in your code, you should be checking for them anyway. This is not defensive code. It's just plain good practice -- a part of writing correct code.
Testing
Testing your code is not defensive . It's another normal part of our development work. Test harnesses aren't defensive ; they can prove the code is correct now, but won't prove that it will stand up to future modification. Even with the best test suite in the world, anyone can make a change and slip it past untested.
Debugging
You might add some defensive code during a spell of debugging, but debugging is something you do after your program has failed. Defensive programming is something you do to prevent your program from failing in the first place (or to detect failures early before they manifest in incomprehensible ways, demanding all-night debugging sessions).

Is defensive programming really worth the hassle? There are arguments for and against:

The case against
Defensive programming consumes resources, both yours and the computer's.
  • It eats into the efficiency of your code; even a little extra code requires a little extra execution. For a single function or class, this might not matter, but when you have a system made up of 100,000 functions, you may have more of a problem.
  • Each defensive practice requires some extra work. Why should you follow any of them? You have enough to do already, right? Just make sure people use your code correctly. If they don't, then any problems are their own fault.
The case for
The counterargument is compelling.
  • Defensive programming saves you literally hours of debugging and lets you do more fun stuff instead. Remember Murphy: If your code can be used incorrectly, it will be.
  • Working code that runs properly, but ever-so-slightly slower, is far superior to code that works most of the time but occasionally collapses in a shower of brightly colored sparks.
  • We can design some defensive code to be physically removed in release builds, circumventing the performance issue. The majority of the items we'll consider here don't have any significant overhead, anyway.
  • Defensive programming avoids a large number of security problems -- a serious issue in modern software development. More on this follows.

As the market demands software that's built faster and cheaper, we need to focus on techniques that deliver results. Don't skip the bit of extra work up front that will prevent a whole world of pain and delay later.

[Aug 26, 2019] Error-Handling Techniques

Notable quotes:
"... Return a neutral value. Sometimes the best response to bad data is to continue operating and simply return a value that's known to be harmless. A numeric computation might return 0. A string operation might return an empty string, or a pointer operation might return an empty pointer. A drawing routine that gets a bad input value for color in a video game might use the default background or foreground color. A drawing routine that displays x-ray data for cancer patients, however, would not want to display a "neutral value." In that case, you'd be better off shutting down the program than displaying incorrect patient data. ..."
Aug 26, 2019 | Amazon.com

Originally from: Code Complete, Second Edition

Assertions are used to handle errors that should never occur in the code. How do you handle errors that you do expect to occur? Depending on the specific circumstances, you might want to return a neutral value, substitute the next piece of valid data, return the same answer as the previous time, substitute the closest legal value, log a warning message to a file, return an error code, call an error-processing routine or object, display an error message, or shut down -- or you might want to use a combination of these responses.

Here are some more details on these options:

Return a neutral value. Sometimes the best response to bad data is to continue operating and simply return a value that's known to be harmless. A numeric computation might return 0. A string operation might return an empty string, or a pointer operation might return an empty pointer. A drawing routine that gets a bad input value for color in a video game might use the default background or foreground color. A drawing routine that displays x-ray data for cancer patients, however, would not want to display a "neutral value." In that case, you'd be better off shutting down the program than displaying incorrect patient data.

Substitute the next piece of valid data. When processing a stream of data, some circumstances call for simply returning the next valid data. If you're reading records from a database and encounter a corrupted record, you might simply continue reading until you find a valid record. If you're taking readings from a thermometer 100 times per second and you don't get a valid reading one time, you might simply wait another 1/100th of a second and take the next reading.

Return the same answer as the previous time. If the thermometer-reading software doesn't get a reading one time, it might simply return the same value as last time. Depending on the application, temperatures might not be very likely to change much in 1/100th of a second. In a video game, if you detect a request to paint part of the screen an invalid color, you might simply return the same color used previously. But if you're authorizing transactions at a cash machine, you probably wouldn't want to use the "same answer as last time" -- that would be the previous user's bank account number!

Substitute the closest legal value. In some cases, you might choose to return the closest legal value, as in the Velocity example earlier. This is often a reasonable approach when taking readings from a calibrated instrument. The thermometer might be calibrated between 0 and 100 degrees Celsius, for example. If you detect a reading less than 0, you can substitute 0, which is the closest legal value. If you detect a value greater than 100, you can substitute 100. For a string operation, if a string length is reported to be less than 0, you could substitute 0. My car uses this approach to error handling whenever I back up. Since my speedometer doesn't show negative speeds, when I back up it simply shows a speed of 0 -- the closest legal value.

Log a warning message to a file. When bad data is detected, you might choose to log a warning message to a file and then continue on. This approach can be used in conjunction with other techniques like substituting the closest legal value or substituting the next piece of valid data. If you use a log, consider whether you can safely make it publicly available or whether you need to encrypt it or protect it some other way.
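A small Perl sketch combining two of these options -- substituting the closest legal value and logging a warning -- for the thermometer scenario (the range and messages are illustrative):

# Clamp a raw reading to the calibrated range 0..100 and note the correction.
sub sanitize_temperature {
    my ($reading) = @_;
    if ($reading < 0) {
        warn "temperature $reading below range, substituting 0\n";
        return 0;
    }
    if ($reading > 100) {
        warn "temperature $reading above range, substituting 100\n";
        return 100;
    }
    return $reading;
}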

Return an error code. You could decide that only certain parts of a system will handle errors. Other parts will not handle errors locally; they will simply report that an error has been detected and trust that some other routine higher up in the calling hierarchy will handle the error. The specific mechanism for notifying the rest of the system that an error has occurred could be setting the value of a status variable, returning status as the function's return value, or throwing an exception by using the language's built-in exception mechanism.

In this case, the specific error-reporting mechanism is less important than the decision about which parts of the system will handle errors directly and which will just report that they've occurred. If security is an issue, be sure that calling routines always check return codes.

Call an error-processing routine/object. Another approach is to centralize error handling in a global error-handling routine or error-handling object. The advantage of this approach is that error-processing responsibility can be centralized, which can make debugging easier. The tradeoff is that the whole program will know about this central capability and will be coupled to it. If you ever want to reuse any of the code from the system in another system, you'll have to drag the error-handling machinery along with the code you reuse.

This approach has an important security implication. If your code has encountered a buffer overrun, it's possible that an attacker has compromised the address of the handler routine or object. Thus, once a buffer overrun has occurred while an application is running, it is no longer safe to use this approach.

Display an error message wherever the error is encountered. This approach minimizes error-handling overhead; however, it does have the potential to spread user interface messages through the entire application, which can create challenges when you need to create a consistent user interface, when you try to clearly separate the UI from the rest of the system, or when you try to localize the software into a different language. Also, beware of telling a potential attacker of the system too much. Attackers sometimes use error messages to discover how to attack a system.

Handle the error in whatever way works best locally. Some designs call for handling all errors locally -- the decision of which specific error-handling method to use is left up to the programmer designing and implementing the part of the system that encounters the error.

This approach provides individual developers with great flexibility, but it creates a significant risk that the overall performance of the system will not satisfy its requirements for correctness or robustness (more on this in a moment). Depending on how developers end up handling specific errors, this approach also has the potential to spread user interface code throughout the system, which exposes the program to all the problems associated with displaying error messages.

Shut down. Some systems shut down whenever they detect an error. This approach is useful in safety-critical applications. For example, if the software that controls radiation equipment for treating cancer patients receives bad input data for the radiation dosage, what is its best error-handling response? Should it use the same value as last time? Should it use the closest legal value? Should it use a neutral value? In this case, shutting down is the best option. We'd much prefer to reboot the machine than to run the risk of delivering the wrong dosage.

A similar approach can be used to improve the security of Microsoft Windows. By default, Windows continues to operate even when its security log is full. But you can configure Windows to halt the server if the security log becomes full, which can be appropriate in a security-critical environment.

Robustness vs. Correctness

As the video game and x-ray examples show us, the style of error processing that is most appropriate depends on the kind of software the error occurs in. These examples also illustrate that error processing generally favors more correctness or more robustness. Developers tend to use these terms informally, but, strictly speaking, these terms are at opposite ends of the scale from each other. Correctness means never returning an inaccurate result; returning no result is better than returning an inaccurate result. Robustness means always trying to do something that will allow the software to keep operating, even if that leads to results that are inaccurate sometimes.

Safety-critical applications tend to favor correctness to robustness. It is better to return no result than to return a wrong result. The radiation machine is a good example of this principle.

Consumer applications tend to favor robustness to correctness. Any result whatsoever is usually better than the software shutting down. The word processor I'm using occasionally displays a fraction of a line of text at the bottom of the screen. If it detects that condition, do I want the word processor to shut down? No. I know that the next time I hit Page Up or Page Down, the screen will refresh and the display will be back to normal.

High-Level Design Implications of Error Processing

With so many options, you need to be careful to handle invalid parameters in consistent ways throughout the program . The way in which errors are handled affects the software's ability to meet requirements related to correctness, robustness, and other nonfunctional attributes. Deciding on a general approach to bad parameters is an architectural or high-level design decision and should be addressed at one of those levels.

Once you decide on the approach, make sure you follow it consistently. If you decide to have high-level code handle errors and low-level code merely report errors, make sure the high-level code actually handles the errors! Some languages give you the option of ignoring the fact that a function is returning an error code -- in C++, you're not required to do anything with a function's return value -- but don't ignore error information! Test the function return value. If you don't expect the function ever to produce an error, check it anyway. The whole point of defensive programming is guarding against errors you don't expect.

This guideline holds true for system functions as well as for your own functions. Unless you've set an architectural guideline of not checking system calls for errors, check for error codes after each call. If you detect an error, include the error number and the description of the error.
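In Perl this boils down to checking the status of every system-facing call and reporting $! -- the error number and its description -- when a call fails. A minimal sketch (the file name is hypothetical):

use strict;
use warnings;
use Carp qw(croak);

my $config_file = 'app.conf';   # hypothetical path

open my $fh, '<', $config_file
    or croak "cannot open '$config_file': $!";
my @lines = <$fh>;
close $fh
    or croak "error while closing '$config_file': $!";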

[Aug 26, 2019] Example of correctable error

Aug 26, 2019 | www.amazon.com

Originally from: Good Habits for Great Coding: Improving Programming Skills with Examples in Python, by Michael Stueben (Amazon.com)

There is one danger to defensive coding: It can bury errors. Consider the following code:

def drawLine(m, b, image, start = 0, stop = WIDTH):
    step = 1
    start = int(start)
    stop = int(stop)
    if stop - start < 0:
        step = -1
        print('WARNING: drawLine parameters were reversed.')
    for x in range(start, stop, step):
        index = int(m*x + b) * WIDTH + x
        if 0 <= index < len(image):
            image[index] = 255  # Poke in a white (= 255) pixel.

This function runs from start to stop. If stop is less than start, it just steps backward and no error is reported.

Maybe we want this kind of error to be "fixed" during the run -- buried -- but I think we should at least print a warning that the range is coming in backwards. Maybe we should abort the program.
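A Perl counterpart that surfaces the problem instead of burying it might look like this (a sketch; WIDTH and the pixel arithmetic follow the book's example, the rest is invented):

use strict;
use warnings;
use Carp qw(croak);
use constant WIDTH => 640;   # assumed canvas width

sub draw_line {
    my ($m, $b, $image, $start, $stop) = @_;
    $start //= 0;
    $stop  //= WIDTH;
    # Refuse a reversed range instead of silently stepping backward.
    croak "draw_line: start ($start) is greater than stop ($stop)"
        if $start > $stop;
    for my $x ($start .. $stop - 1) {
        my $index = int($m * $x + $b) * WIDTH + $x;
        $image->[$index] = 255 if $index >= 0 && $index < @$image;   # poke in a white pixel
    }
}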

[Aug 26, 2019] Being Defensive About Defensive Programming

Notable quotes:
"... Code installed for defensive programming is not immune to defects, and you're just as likely to find a defect in defensive-programming code as in any other code -- more likely, if you write the code casually. Think about where you need to be defensive , and set your defensive-programming priorities accordingly. ..."
Aug 26, 2019 | www.amazon.com

Originally from: Code Complete, Second Edition II. Creating High-Quality Code

8.3. Error-Handling Techniques

Too much of anything is bad, but too much whiskey is just enough. -- Mark Twain

Too much defensive programming creates problems of its own. If you check data passed as parameters in every conceivable way in every conceivable place, your program will be fat and slow.

What's worse, the additional code needed for defensive programming adds complexity to the software.

Code installed for defensive programming is not immune to defects, and you're just as likely to find a defect in defensive-programming code as in any other code -- more likely, if you write the code casually. Think about where you need to be defensive , and set your defensive-programming priorities accordingly.

Checklist: Defensive Programming -- General, Exceptions, Security Issues (the individual checklist items are omitted in this excerpt)

[Aug 26, 2019] Creating High-Quality Code

Assertions as a special statement are a questionable approach unless there is a switch to exclude them from the code. Other than that, BASH exit with a condition or Perl die can serve equally well.
The main question here is which assertions should be in code only for debugging and which should be in production.
Notable quotes:
"... That an input parameter's value falls within its expected range (or an output parameter's value does) ..."
"... Many languages have built-in support for assertions, including C++, Java, and Microsoft Visual Basic. If your language doesn't directly support assertion routines, they are easy to write. The standard C++ assert macro doesn't provide for text messages. Here's an example of an improved ASSERT implemented as a C++ macro: ..."
"... Use assertions to document and verify preconditions and postconditions. Preconditions and postconditions are part of an approach to program design and development known as "design by contract" (Meyer 1997). When preconditions and postconditions are used, each routine or class forms a contract with the rest of the program . ..."
Aug 26, 2019 | www.amazon.com

Originally from: Code Complete: A Practical Handbook of Software Construction, Second Edition, by Steve McConnell (Amazon.com)

Assertions

An assertion is code that's used during development -- usually a routine or macro -- that allows a program to check itself as it runs. When an assertion is true, that means everything is operating as expected. When it's false, that means it has detected an unexpected error in the code. For example, if the system assumes that a customer information file will never have more than 50,000 records, the program might contain an assertion that the number of records is less than or equal to 50,000. As long as the number of records is less than or equal to 50,000, the assertion will be silent. If it encounters more than 50,000 records, however, it will loudly "assert" that an error is in the program.

Assertions are especially useful in large, complicated programs and in high-reliability programs . They enable programmers to more quickly flush out mismatched interface assumptions, errors that creep in when code is modified, and so on.

An assertion usually takes two arguments: a boolean expression that describes the assumption that's supposed to be true, and a message to display if it isn't. Here's what a Java assertion would look like if the variable denominator were expected to be nonzero:

Example 8-1. Java Example of an Assertion

assert denominator != 0 : "denominator is unexpectedly equal to 0.";

This assertion asserts that denominator is not equal to 0 . The first argument, denominator != 0 , is a boolean expression that evaluates to true or false . The second argument is a message to print if the first argument is false -- that is, if the assertion is false.

Use assertions to document assumptions made in the code and to flush out unexpected conditions. Assertions can be used to check assumptions like these:

  • That an input parameter's value falls within its expected range (or an output parameter's value does)
  • That a file or stream is open (or closed) when a routine begins executing (or when it ends)
  • That the value of an input-only variable is not changed by a routine
  • That a pointer is non-null
  • That an array or other container passed into a routine can contain at least X number of data elements
  • That a table has been initialized to contain real values

Of course, these are just the basics, and your own routines will contain many more specific assumptions that you can document using assertions.

Normally, you don't want users to see assertion messages in production code; assertions are primarily for use during development and maintenance. Assertions are normally compiled into the code at development time and compiled out of the code for production. During development, assertions flush out contradictory assumptions, unexpected conditions, bad values passed to routines, and so on. During production, they can be compiled out of the code so that the assertions don't degrade system performance.

Building Your Own Assertion Mechanism

Many languages have built-in support for assertions, including C++, Java, and Microsoft Visual Basic. If your language doesn't directly support assertion routines, they are easy to write. The standard C++ assert macro doesn't provide for text messages. Here's an example of an improved ASSERT implemented as a C++ macro:

Cross-Reference

Building your own assertion routine is a good example of programming "into" a language rather than just programming "in" a language. For more details on this distinction, see Program into Your Language, Not in It .

Example 8-2. C++ Example of an Assertion Macro

#define ASSERT( condition, message ) {       \
   if ( !(condition) ) {                     \
      LogError( "Assertion failed: ",        \
          #condition, message );             \
      exit( EXIT_FAILURE );                  \
   }                                         \
}
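The same idea carries over to Perl with nothing more than a constant and die, which is what the editorial note at the top of this item alludes to; this is a sketch, not the book's code:

use strict;
use warnings;
use constant DEBUG => 1;   # flip to 0 for production builds

sub ASSERT {
    my ($condition, $message) = @_;
    return unless DEBUG;
    die "Assertion failed: $message\n" unless $condition;
}

my $denominator = 0;
ASSERT( $denominator != 0, 'denominator is unexpectedly equal to 0' );

Note that, unlike the assert(...) if DEBUG idiom used by Carp::Assert earlier on this page, the arguments here are still evaluated even when DEBUG is 0; only the check itself is skipped.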

Guidelines for Using Assertions

Here are some guidelines for using assertions:

Use error-handling code for conditions you expect to occur; use assertions for conditions that should never occur. Assertions check for conditions that should never occur. Error-handling code checks for off-nominal circumstances that might not occur very often, but that have been anticipated by the programmer who wrote the code and that need to be handled by the production code. Error handling typically checks for bad input data; assertions check for bugs in the code.

If error-handling code is used to address an anomalous condition, the error handling will enable the program to respond to the error gracefully. If an assertion is fired for an anomalous condition, the corrective action is not merely to handle an error gracefully -- the corrective action is to change the program's source code, recompile, and release a new version of the software.

A good way to think of assertions is as executable documentation -- you can't rely on them to make the code work, but they can document assumptions more actively than program-language comments can.

Avoid putting executable code into assertions. Putting code into an assertion raises the possibility that the compiler will eliminate the code when you turn off the assertions. Suppose you have an assertion like this:

Example 8-3. Visual Basic Example of a Dangerous Use of an Assertion

Debug.Assert( PerformAction() ) ' Couldn't perform action

Cross-Reference

You could view this as one of many problems associated with putting multiple statements on one line. For more examples, see " Using Only One Statement Per Line " in Laying Out Individual Statements .

The problem with this code is that, if you don't compile the assertions, you don't compile the code that performs the action. Put executable statements on their own lines, assign the results to status variables, and test the status variables instead. Here's an example of a safe use of an assertion:

Example 8-4. Visual Basic Example of a Safe Use of an Assertion

actionPerformed = PerformAction()
Debug.Assert( actionPerformed ) ' Couldn't perform action

Use assertions to document and verify preconditions and postconditions. Preconditions and postconditions are part of an approach to program design and development known as "design by contract" (Meyer 1997). When preconditions and postconditions are used, each routine or class forms a contract with the rest of the program .

Further Reading

For much more on preconditions and postconditions, see Object-Oriented Software Construction (Meyer 1997).

Preconditions are the properties that the client code of a routine or class promises will be true before it calls the routine or instantiates the object. Preconditions are the client code's obligations to the code it calls.

Postconditions are the properties that the routine or class promises will be true when it concludes executing. Postconditions are the routine's or class's obligations to the code that uses it.

Assertions are a useful tool for documenting preconditions and postconditions. Comments could be used to document preconditions and postconditions, but, unlike comments, assertions can check dynamically whether the preconditions and postconditions are true.

In the following example, assertions are used to document the preconditions and postcondition of the Velocity routine.

Example 8-5. Visual Basic Example of Using Assertions to Document Preconditions and Postconditions

Private Function Velocity ( _
   ByVal latitude As Single, _
   ByVal longitude As Single, _
   ByVal elevation As Single _
   ) As Single

   ' Preconditions
   Debug.Assert ( -90 <= latitude And latitude <= 90 )
   Debug.Assert ( 0 <= longitude And longitude < 360 )
   Debug.Assert ( -500 <= elevation And elevation <= 75000 )
   ...
   ' Postconditions
   Debug.Assert ( 0 <= returnVelocity And returnVelocity <= 600 )

   ' return value
   Velocity = returnVelocity
End Function

If the variables latitude , longitude , and elevation were coming from an external source, invalid values should be checked and handled by error-handling code rather than by assertions. If the variables are coming from a trusted, internal source, however, and the routine's design is based on the assumption that these values will be within their valid ranges, then assertions are appropriate.

For highly robust code, assert and then handle the error anyway. For any given error condition, a routine will generally use either an assertion or error-handling code, but not both. Some experts argue that only one kind is needed (Meyer 1997).

Cross-Reference

For more on robustness, see " Robustness vs. Correctness " in Error-Handling Techniques , later in this chapter.

But real-world programs and projects tend to be too messy to rely solely on assertions. On a large, long-lasting system, different parts might be designed by different designers over a period of 5–10 years or more. The designers will be separated in time, across numerous versions. Their designs will focus on different technologies at different points in the system's development. The designers will be separated geographically, especially if parts of the system are acquired from external sources. Programmers will have worked to different coding standards at different points in the system's lifetime. On a large development team, some programmers will inevitably be more conscientious than others and some parts of the code will be reviewed more rigorously than other parts of the code. Some programmers will unit test their code more thoroughly than others. With test teams working across different geographic regions and subject to business pressures that result in test coverage that varies with each release, you can't count on comprehensive, system-level regression testing, either.

In such circumstances, both assertions and error-handling code might be used to address the same error. In the source code for Microsoft Word, for example, conditions that should always be true are asserted, but such errors are also handled by error-handling code in case the assertion fails. For extremely large, complex, long-lived applications like Word, assertions are valuable because they help to flush out as many development-time errors as possible. But the application is so complex (millions of lines of code) and has gone through so many generations of modification that it isn't realistic to assume that every conceivable error will be detected and corrected before the software ships, and so errors must be handled in the production version of the system as well.

Here's an example of how that might work in the Velocity example:

Example 8-6. Visual Basic Example of Using Assertions to Document Preconditions and Postconditions

Private Function Velocity ( _
   ByRef latitude As Single, _
   ByRef longitude As Single, _
   ByRef elevation As Single _
   ) As Single

   ' Preconditions
   Debug.Assert ( -90 <= latitude And latitude <= 90 )          ' <-- 1
   Debug.Assert ( 0 <= longitude And longitude < 360 )          ' <-- 1
   Debug.Assert ( -500 <= elevation And elevation <= 75000 )    ' <-- 1
   ...

   ' Sanitize input data. Values should be within the ranges asserted above,
   ' but if a value is not within its valid range, it will be changed to the
   ' closest legal value
   If ( latitude < -90 ) Then                                   ' <-- 2
      latitude = -90
   ElseIf ( latitude > 90 ) Then
      latitude = 90
   End If
   If ( longitude < 0 ) Then
      longitude = 0
   ElseIf ( longitude > 360 ) Then                              ' <-- 2
   ...

(1) Here is assertion code.

(2) Here is the code that handles bad input data at run time.

[Aug 26, 2019] Defensive Programming in C++

Notable quotes:
"... Defensive programming means always checking whether an operation succeeded. ..."
"... Exceptional usually means out of the ordinary and unusually good, but when it comes to errors, the word has a more negative meaning. The system throws an exception when some error condition happens, and if you don't catch that exception, it will give you a dialog box that says something like "your program has caused an error -- –goodbye." ..."
Aug 26, 2019 | www.amazon.com

Originally from: C++ by Example: UnderC Learning Edition, by Steve Donovan (Amazon.com)

There are five desirable properties of good programs : They should be robust, correct, maintainable, friendly, and efficient. Obviously, these properties can be prioritized in different orders, but generally, efficiency is less important than correctness; it is nearly always possible to optimize a well-designed program , whereas badly written "lean and mean" code is often a disaster. (Donald Knuth, the algorithms guru, says that "premature optimization is the root of all evil.")

Here I am mostly talking about programs that have to be used by non-expert users. (You can forgive programs you write for your own purposes when they behave badly: For example, many scientific number-crunching programs are like bad-tempered sports cars.) Being unbreakable is important for programs to be acceptable to users, and you, therefore, need to be a little paranoid and not assume that everything is going to work according to plan. ' Defensive programming ' means writing programs that cope with all common errors. It means things like not assuming that a file exists, or not assuming that you can write to any file (think of a CD-ROM), or always checking for divide by zero.

In the next few sections I want to show you how to 'bullet-proof' programs . First, there is a silly example to illustrate the traditional approach (check everything), and then I will introduce exception handling.

Bullet-Proofing Programs

Say you have to teach a computer to wash its hair. The problem, of course, is that computers have no common sense about these matters: "Lather, rinse, repeat" would certainly lead to a house flooded with bubbles. So you divide the operation into simpler tasks, which return true or false, and check the result of each task before going on to the next one. For example, you can't begin to wash your hair if you can't get the top off the shampoo bottle.

Defensive programming means always checking whether an operation succeeded. So the following code is full of if-else statements, and if you were trying to do something more complicated than wash hair, the code would rapidly become very ugly indeed (and the code would soon scroll off the page):


void wash_hair()
{
  string msg = "";
  if (! find_shampoo() || ! open_shampoo()) msg = "no shampoo";
  else {
    if (! wet_hair()) msg = "no water!";
    else {
      if (! apply_shampoo()) msg = "shampoo application error";
      else {
        for(int i = 0; i < 2; i++)  // repeat twice
          if (! lather() || ! rinse()) {
                msg = "no hands!";
                break;  // break out of the loop
          }
          if (! dry_hair())  msg = "no towel!";
      }
    }
  }
  if (msg != "") cerr << "Hair error: " << msg << endl;
  // clean up after washing hair
  put_away_towel();
  put_away_shampoo();
}                                        

Part of the hair-washing process is to clean up afterward (as anybody who has a roommate soon learns). This would be a problem for the following code, now assuming that wash_hair() returns a string:

string wash_hair()
{
 ...
  if (! wet_hair()) return "no water!"
  if (! apply_shampoo()) return "application error!";
...
}

You would need another function to call this wash_hair() , write out the message (if the operation failed), and do the cleanup. This would still be an improvement over the first wash_hair() because the code doesn't have all those nested blocks.

NOTE

Some people disapprove of returning from a function from more than one place, but this is left over from the days when cleanup had to be done manually. C++ guarantees that any object is properly cleaned up, no matter from where you return (for instance, any open file objects are automatically closed). Besides, C++ exception handling works much like a return , except that it can occur from many functions deep. The following section describes this and explains why it makes error checking easier.
Catching Exceptions

An alternative to constantly checking for errors is to let the problem (for example, division by zero, access violation) occur and then use the C++ exception-handling mechanism to gracefully recover from the problem.

Exceptional usually means out of the ordinary and unusually good, but when it comes to errors, the word has a more negative meaning. The system throws an exception when some error condition happens, and if you don't catch that exception, it will give you a dialog box that says something like "your program has caused an error -- goodbye."

You should avoid doing that to your users -- at the very least you should give them a more reassuring and polite message.

If an exception occurs in a try block, the system tries to match the exception with one (or more) catch blocks.

try {  // your code goes inside this block
  ... problem happens - system throws exception
}
catch(Exception) {  // exception caught here
  ... handle the problem
}

It is an error to have a try without a catch and vice versa. The ON ERROR clause in Visual Basic achieves a similar goal, as do signals in C; they allow you to jump out of trouble to a place where you can deal with the problem. The example is a function div() , which does integer division. Instead of checking whether the divisor is zero, this code lets the division by zero happen but catches the exception. Any code within the try block can safely do integer division, without having to worry about the problem. I've also defined a function bad_div() that does not catch the exception, which will give a system error message when called:

int div(int i, int j)
{
 int k = 0;
 try {
   k = i/j;
   cout << "successful value " << k << endl;
 }
 catch(IntDivideByZero) {
   cout << "divide by zero\n";
 }
 return k;
}
;> int bad_div(int i,int j) {  return i/j; }
;> bad_div(10,0);
integer division by zero <main> (2)
;> div(2,1);
successful value 1
(int) 1
;> div(1,0);
divide by zero
(int) 0

This example is not how you would normally organize things. A lowly function like div() should not have to decide how an error should be handled; its job is to do a straightforward calculation. Generally, it is not a good idea to directly output error information to cout or cerr because Windows graphical user interface programs typically don't do that kind of output. Fortunately, any function call, made from within a try block, that throws an exception will have that exception caught by the catch block. The following is a little program that calls the (trivial) div() function repeatedly but catches any divide-by-zero errors:

// div.cpp
#include <iostream>
#include <uc_except.h>
using namespace std;

int div(int i, int j)
{  return i/j;   }

int main() {
 int i,j,k;
 cout << "Enter 0 0 to exit\n";
 for(;;) { // loop forever
   try {
     cout << "Give two numbers: ";
     cin >> i >> j;
     if (i == 0 && j == 0) return 0; // exit program!
     int k = div(i,j);
     cout << "i/j = " << k << endl;
   }  catch(IntDivideByZero) {
     cout << "divide by zero\n";
   }
  }
  return 0;
}

Notice two crucial things about this example: First, the error-handling code appears as a separate exceptional case, and second, the program does not crash due to divide-by-zero errors (instead, it politely tells the user about the problem and keeps going).

Note the inclusion of <uc_except.h> , which is a nonstandard extension specific to UnderC. The ISO standard does not specify any hardware error exceptions, mostly because not all platforms support them, and a standard has to work everywhere. So IntDivideByZero is not available on all systems. (I have included some library code that implements these hardware exceptions for GCC and BCC32; please see the Appendix for more details.)
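For comparison, and staying with this page's Perl orientation, the same structure in Perl uses eval as the try block and die as the throw; this is a sketch, not from the book:

use strict;
use warnings;

sub div {
    my ($i, $j) = @_;
    die "divide by zero\n" if $j == 0;   # no hardware exception to rely on here
    return int($i / $j);
}

my $k = eval { div(1, 0) };
if (!defined $k) {
    print "caught: $@";      # error handling, kept separate from the calculation
} else {
    print "i/j = $k\n";
}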

How do you catch more than one kind of error? There may be more than one catch block after the try block, and the runtime system looks for the best match. In some ways, a catch block is like a function definition; you supply an argument, and you can name a parameter that should be passed as a reference. For example, in the following code, whatever do_something() does, catch_all_errors() catches it -- specifically a divide-by-zero error -- and it catches any other exceptions as well:

void catch_all_errors()
{
  try {
    do_something();
  }
  catch(IntDivideByZero) {
    cerr << "divide by zero\n";
  }
  catch(HardWareException& e) {
    cerr << "runtime error: " << e.what() << endl;
  }
  catch(Exception& e) {
    cerr << "other error " << e.what() << endl;
  }
}

The standard exceptions have a what() method, which gives more information about them. Order is important here. Exception includes HardwareException , so putting Exception first would catch just about everything. When an exception is thrown, the system picks the first catch block that would match that exception. The rule is to put the catch blocks in order of increasing generality.

Throwing Exceptions

You can throw your own exceptions, which can be of any type, including C++ strings. (In Chapter 8 , "Inheritance and Virtual Methods," you will see how you can create a hierarchy of errors, but for now, strings and integers will do fine.) It is a good idea to write an error-generating function fail() , which allows you to add extra error-tracking features later. The following example returns to the hair-washing algorithm and is even more paranoid about possible problems:

void fail(string msg)
{
  throw msg;
}

void wash_hair()
{
  try {
    if (! find_shampoo()) fail("no shampoo");
    if (! open_shampoo()) fail("can't open shampoo");
    if (! wet_hair())     fail("no water!");
    if (! apply_shampoo())fail("shampoo application error");
    for(int i = 0; i < 2; i++)  // repeat twice
      if (! lather() || ! rinse()) fail("no hands!");
    if (! dry_hair())     fail("no towel!");
  }
  catch(string err) {
    cerr << "Known Hair washing failure: " << err << endl;
  }
  catch(...) {
    cerr << "Catastropic failure\n";
  }
  // clean up after washing hair
  put_away_towel();
  put_away_shampoo();
}

In this example, the general logic is clear, and the cleanup code is always run, whatever disaster happens. This example includes a catch-all catch block at the end. It is a good idea to put one of these in your program's main() function so that it can deliver a more polite message than "illegal instruction." But because you will then have no information about what caused the problem, it's a good idea to cover a number of known cases first. Such a catch-all must be the last catch block; otherwise, it will mask more specific errors.
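
As a minimal sketch of that advice (the run_program() function here is a placeholder, not from the book), main() can funnel everything through one try block and keep the catch-all last:

#include <iostream>
#include <stdexcept>
using namespace std;

// Placeholder for the real work; throws so the error path can be seen.
void run_program()
{  throw runtime_error("nothing implemented yet");  }

int main()
{
  try {
    run_program();
  }
  catch (exception& e) {      // known errors derived from std::exception
    cerr << "error: " << e.what() << endl;
    return 1;
  }
  catch (...) {               // last-resort catch-all, kept last
    cerr << "unknown error, exiting\n";
    return 2;
  }
  return 0;
}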

It is also possible to use a trick that Perl programmers use: If the fail() function returns a bool , then the following expression is valid C++ and does exactly what you want:

dry_hair() || fail("no towel");
lather() && rinse() || fail("no hands!");

If dry_hair() returns true, the or expression must be true, and there's no need to evaluate the second term. Conversely, if dry_hair() returns false, the fail() function would be evaluated and the side effect would be to throw an exception. This short-circuiting of Boolean expressions applies also to && and is guaranteed by the C++ standard.
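
Here is a minimal, self-contained sketch of that variant; the dummy helper bodies are invented for illustration, and the only change to fail() is the bool return type, whose value never matters because the throw happens first:

#include <iostream>
#include <string>
using namespace std;

// Dummy stand-ins for the hair-washing helpers used above.
bool dry_hair() { return false; }   // pretend the towel is missing
bool lather()   { return true;  }
bool rinse()    { return true;  }

// Same idea as fail() above, but returning bool so it can sit in a || chain.
bool fail(string msg)
{
  throw msg;        // never actually returns...
  return false;     // ...so this value is irrelevant
}

int main()
{
  try {
    lather() && rinse() || fail("no hands!");
    dry_hair()          || fail("no towel");
  }
  catch(string err) {
    cerr << "Known hair washing failure: " << err << endl;
  }
  return 0;
}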

[Aug 26, 2019] The Eight Defensive Programmer Strategies

Notable quotes:
"... Never Trust Input. Never trust the data you're given and always validate it. ..."
"... Prevent Errors. If an error is possible, no matter how probable, try to prevent it. ..."
"... Document Assumptions Clearly state the pre-conditions, post-conditions, and invariants. ..."
"... Automate everything, especially testing. ..."
Aug 26, 2019 | www.amazon.com

Originally from: Learn C the Hard Way: Practical Exercises on the Computational Subjects You Keep Avoiding (Like C), by Zed Shaw

Once you've adopted this mind-set, you can then rewrite your prototype and follow a set of eight strategies to make your code as solid as possible.

While I work on the real version, I ruthlessly follow these strategies and try to remove as many errors as I can, thinking like someone who wants to break the software.

  1. Never Trust Input. Never trust the data you're given and always validate it.
  2. Prevent Errors. If an error is possible, no matter how probable, try to prevent it.
  3. Fail Early and Openly. Fail early, cleanly, and openly, stating what happened, where, and how to fix it.
  4. Document Assumptions. Clearly state the pre-conditions, post-conditions, and invariants.
  5. Prevention over Documentation. Don't do with documentation that which can be done with code or avoided completely.
  6. Automate Everything. Automate everything, especially testing.
  7. Simplify and Clarify. Always simplify the code to the smallest, cleanest form that works without sacrificing safety.
  8. Question Authority. Don't blindly follow or reject rules.

These aren't the only strategies, but they're the core things I feel programmers have to focus on when trying to make good, solid code. Notice that I don't really say exactly how to do these. I'll go into each of these in more detail, and some of the exercises will actually cover them extensively.
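
As a small illustration of strategies 1 and 3, here is a sketch in C++ (the book's own examples are in C; the function and its limits are invented): validate the input against what is actually allowed, and fail early with a message that says what went wrong.

#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

// Never trust input: reject anything outside the documented range,
// and fail early with a message that states what happened.
int parse_percentage(const string& text)
{
  size_t pos = 0;
  int value = stoi(text, &pos);               // throws on non-numeric input
  if (pos != text.size())
    throw invalid_argument("percentage '" + text + "' contains trailing junk");
  if (value < 0 || value > 100)
    throw out_of_range("percentage must be 0..100, got " + to_string(value));
  return value;
}

int main()
{
  try {
    cout << parse_percentage("42") << "\n";
    cout << parse_percentage("142") << "\n";  // rejected
  }
  catch (exception& e) {
    cerr << "bad input: " << e.what() << endl;
    return 1;
  }
  return 0;
}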

[Aug 26, 2019] Clean Code in Python General Traits of Good Code

Notable quotes:
"... Different responsibilities should go into different components, layers, or modules of the application. Each part of the program should only be responsible for a part of the functionality (what we call its concerns) and should know nothing about the rest. ..."
"... The goal of separating concerns in software is to enhance maintainability by minimizing ripple effects. A ripple effect means the propagation of a change in the software from a starting point. This could be the case of an error or exception triggering a chain of other exceptions, causing failures that will result in a defect on a remote part of the application. It can also be that we have to change a lot of code scattered through multiple parts of the code base, as a result of a simple change in a function definition. ..."
"... Rule of thumb: Well-defined software will achieve high cohesion and low coupling. ..."
Aug 26, 2019 | www.amazon.com

Separation of concerns

This is a design principle that is applied at multiple levels. It is not just about the low-level design (code), but it is also relevant at a higher level of abstraction, so it will come up later when we talk about architecture.

Different responsibilities should go into different components, layers, or modules of the application. Each part of the program should only be responsible for a part of the functionality (what we call its concerns) and should know nothing about the rest.

The goal of separating concerns in software is to enhance maintainability by minimizing ripple effects. A ripple effect means the propagation of a change in the software from a starting point. This could be the case of an error or exception triggering a chain of other exceptions, causing failures that will result in a defect on a remote part of the application. It can also be that we have to change a lot of code scattered through multiple parts of the code base, as a result of a simple change in a function definition.

Clearly, we do not want these scenarios to happen. The software has to be easy to change. If we have to modify or refactor some part of the code, that change should have a minimal impact on the rest of the application, and the way to achieve this is through proper encapsulation.

In a similar way, we want any potential errors to be contained so that they don't cause major damage.

This concept is related to the DbC principle in the sense that each concern can be enforced by a contract. When a contract is violated, and an exception is raised as a result of such a violation, we know what part of the program has the failure, and what responsibilities failed to be met.

Despite this similarity, separation of concerns goes further. We normally think of contracts between functions, methods, or classes, and while this also applies to responsibilities that have to be separated, the idea of separation of concerns also applies to Python modules, packages, and basically any software component.

Cohesion and coupling

These are important concepts for good software design.

On the one hand, cohesion means that objects should have a small and well-defined purpose, and they should do as little as possible. It follows a philosophy similar to that of Unix commands, which do only one thing and do it well. The more cohesive our objects are, the more useful and reusable they become, making our design better.

On the other hand, coupling refers to the idea of how two or more objects depend on each other. This dependency poses a limitation. If two parts of the code (objects or methods) are too dependent on each other, they bring with them some undesired consequences: changes in one part ripple into the other, and neither part can be reused independently.

Rule of thumb: Well-defined software will achieve high cohesion and low coupling.

[Aug 26, 2019] Software Development and Professional Practice by John Dooley

Notable quotes:
"... Did the read operation return anything? ..."
"... Did the write operation write anything? ..."
"... Check all values in function/method parameter lists. ..."
"... Are they all the correct type and size? ..."
"... You should always initialize variables and not depend on the system to do the initialization for you. ..."
"... taking the time to make your code readable and have the code layout match the logical structure of your design is essential to writing code that is understandable by humans and that works. Adhering to coding standards and conventions, keeping to a consistent style, and including good, accurate comments will help you immensely during debugging and testing. And it will help you six months from now when you come back and try to figure out what the heck you were thinking here. ..."
Jul 15, 2011 | www.amazon.com
Defensive Programming

By defensive programming we mean that your code should protect itself from bad data. The bad data can come from user input via the command line, a graphical text box or form, or a file. Bad data can also come from other routines in your program via input parameters like in the first example above.

How do you protect your program from bad data? Validate! As tedious as it sounds, you should always check the validity of data that you receive from outside your routine. This means you should check, for example, whether a read operation actually returned anything, whether a write operation actually wrote anything, and whether all the values in function or method parameter lists are of the correct type and size.

What else should you check for? Among other things, you should always initialize variables and never depend on the system to do the initialization for you.

As an example, here's a C program that takes in a list of house prices from a file and computes the average house price from the list. The file is provided to the program from the command line.

/*
 * program to compute the average selling price of a set of homes.
 * Input comes from a file that is passed via the command line.
 * Output is the Total and Average sale prices for
 * all the homes and the number of prices in the file.
 *
 * jfdooley
 */
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp;
    double totalPrice, avgPrice;
    double price;
    int numPrices;

    /* check that the user entered the correct number of args */
    if (argc < 2) {
        fprintf(stderr, "Usage: %s <filename>\n", argv[0]);
        exit(1);
    }

    /* try to open the input file */
    fp = fopen(argv[1], "r");
    if (fp == NULL) {
        fprintf(stderr, "File Not Found: %s\n", argv[1]);
        exit(1);
    }

    totalPrice = 0.0;
    numPrices = 0;

    while (!feof(fp)) {
        fscanf(fp, "%10lf\n", &price);
        totalPrice += price;
        numPrices++;
    }

    avgPrice = totalPrice / numPrices;
    printf("Number of houses is %d\n", numPrices);
    printf("Total Price of all houses is $%10.2f\n", totalPrice);
    printf("Average Price per house is $%10.2f\n", avgPrice);

    return 0;
}
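
The program above validates its command-line arguments and the result of fopen(), but a few holes remain: the feof()-controlled loop can process a bad or repeated last value, an empty file causes a division by zero, and the file is never closed. A possible hardening, sketched in the same style (this is not the book's code):

#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp;
    double totalPrice = 0.0, price;
    int numPrices = 0;

    if (argc < 2) {
        fprintf(stderr, "Usage: %s <filename>\n", argv[0]);
        exit(1);
    }

    fp = fopen(argv[1], "r");
    if (fp == NULL) {
        fprintf(stderr, "File Not Found: %s\n", argv[1]);
        exit(1);
    }

    /* fscanf returns the number of items converted; stop on EOF or bad data */
    while (fscanf(fp, "%lf", &price) == 1) {
        totalPrice += price;
        numPrices++;
    }
    fclose(fp);

    if (numPrices == 0) {
        fprintf(stderr, "No prices found in %s\n", argv[1]);
        exit(1);
    }

    printf("Number of houses is %d\n", numPrices);
    printf("Total Price of all houses is $%10.2f\n", totalPrice);
    printf("Average Price per house is $%10.2f\n", totalPrice / numPrices);

    return 0;
}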

Assertions Can Be Your Friend

Defensive programming means that using assertions is a great idea if your language supports them. Java, C99, and C++ all support assertions. Assertions test an expression that you give them, and if the expression is false, they throw an error and normally abort the program. You should use error-handling code for errors you think might happen – erroneous user input, for example – and use assertions for errors that should never happen – off-by-one errors in loops, for example. Assertions are great for testing your program, but because you should remove them before giving programs to customers (you don't want the program to abort on the user, right?), they aren't good to use for validating input data.
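
A minimal sketch of that distinction (the average() helper and its precondition are invented for illustration): the assertion guards a condition callers must never violate, while genuinely possible errors still get ordinary error-handling code.

#include <assert.h>
#include <stdio.h>

/* Callers must never pass a null pointer or an empty array: that would be a
   programming error, so it is guarded by an assertion, not error handling. */
double average(const double *values, int count)
{
    assert(values != NULL && count > 0);
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += values[i];
    return sum / count;
}

int main(void)
{
    double prices[] = { 100000.0, 250000.0 };
    printf("%.2f\n", average(prices, 2));

    /* Compiling with -DNDEBUG strips the assertions for release builds. */
    return 0;
}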

Exceptions and Error Handling

We've talked about using assertions to handle truly bad errors, ones that should never occur in production. But what about handling "normal" errors? Part of defensive programming is to handle errors in such a way that no damage is done to any data in the program or the files it uses, and so that the program stays running for as long as possible (making your program robust).

Let's look at exceptions first. You should take advantage of built-in exception handling in whatever programming language you're using. The exception handling mechanism will give you information about what bad thing has just happened. It's then up to you to decide what to do. Normally, in an exception handling mechanism, you have two choices: handle the exception yourself, or pass it along to whoever called you and let them handle it. What you do and how you do it depends on the language you're using and the capabilities it gives you. We'll talk about exception handling in Java later.
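
A tiny sketch of those two choices (the functions are invented, not from the book): the first handles the exception locally and recovers with a default, the second simply lets it propagate to the caller.

#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

// Hypothetical helper: throws if the text is not a valid port number.
int parse_port(const string& text)
{
  int value = stoi(text);                      // throws invalid_argument on junk
  if (value < 1 || value > 65535)
    throw out_of_range("port out of range: " + text);
  return value;
}

// Choice 1: handle the problem here and recover with a default.
int port_or_default(const string& text)
{
  try {
    return parse_port(text);
  }
  catch (exception&) {
    return 8080;
  }
}

// Choice 2: do nothing special; the exception propagates to whoever called us.
int port_or_propagate(const string& text)
{
  return parse_port(text);
}

int main()
{
  cout << port_or_default("oops") << endl;     // prints 8080
  try {
    cout << port_or_propagate("oops") << endl;
  }
  catch (exception& e) {
    cout << "caller had to deal with it: " << e.what() << endl;
  }
  return 0;
}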

Error Handling

Just like with validation, you're most likely to encounter errors in input data, whether it's command line input, file handling, or input from a graphical user interface form. Here we're talking about errors that occur at run time. Compile time and testing errors are covered in the next chapter on debugging and testing. Other types of errors include data that your program computes incorrectly, errors in other programs that interact with yours (the operating system, for instance), race conditions, and interaction errors where your program is communicating with another program and yours is at fault.

The main purpose of error handling is to have your program survive and run correctly for as long as possible. When it gets to a point where your program cannot continue, it needs to report what is wrong as best as it can and then exit gracefully. Exiting is the last resort for error handling. So what should you do? Well, once again we come to the "it depends" answer. What you should do depends on what your program's context is when the error occurs and what its purpose is. You won't handle an error in a video game the same way you handle one in a cardiac pacemaker. In every case, your first goal should be – try to recover.

Trying to recover from an error will have different meanings in different programs. Recovery means that your program needs to try to either ignore the bad data, fix it, or substitute something else that is valid for the bad data. See McConnell 8 for a further discussion of error handling; one such recovery approach is sketched after the footnote below.

__________

8 McConnell, 2004.
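
Returning to the recovery options above (ignore the bad data, fix it, or substitute something valid), here is a small sketch of the "substitute something valid" approach; the sensor-reading scenario and its limits are invented for illustration:

#include <stdio.h>

/* If a reading is outside its physically possible range, substitute the last
   known good value instead of aborting the whole run. */
double sanitize_reading(double reading, double last_good)
{
    if (reading < -50.0 || reading > 150.0) {
        fprintf(stderr, "bad reading %.1f replaced by %.1f\n", reading, last_good);
        return last_good;
    }
    return reading;
}

int main(void)
{
    double last_good = 20.0;
    double samples[] = { 21.5, 999.0, 22.0 };   /* 999.0 is clearly bogus */
    for (int i = 0; i < 3; i++) {
        last_good = sanitize_reading(samples[i], last_good);
        printf("using %.1f\n", last_good);
    }
    return 0;
}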

Exceptions in Java

Some programming languages have built-in error reporting systems that will tell you when an error occurs, and leave it up to you to handle it one way or another. These errors, which would normally cause your program to die a horrible death, are called exceptions. Exceptions get thrown by the code that encounters the error. Once something is thrown, it's usually a good idea if someone catches it, and this is the same with exceptions. So there are two sides to exceptions that you need to be aware of when you're writing code: the side that throws the exception, and the side that catches and handles it.

Java has three different types of exceptions – checked exceptions, errors, and unchecked exceptions. Checked exceptions are those that you should catch and handle yourself using an exception handler; they are exceptions that you should anticipate and handle as you design and write your code. For example, if your code asks a user for a file name, you should anticipate that they will type it wrong and be prepared to catch the resulting FileNotFoundException . Checked exceptions must be caught.

Errors on the other hand are exceptions that usually are related to things happening outside your program and are things you can't do anything about except fail gracefully. You might try to catch the error exception and provide some output for the user, but you will still usually have to exit.

The third type of exception is the runtime exception . Runtime exceptions all result from problems within your program that occur as it runs and almost always indicate errors in your code. For example, a NullPointerException nearly always indicates a bug in your code and shows up as a runtime exception. Errors and runtime exceptions are collectively called unchecked exceptions (that would be because you usually don't try to catch them, so they're unchecked). In the program below we deliberately cause a runtime exception:

public class TestNull {
    public static void main(String[] args) {
        String str = null;
        int len = str.length();
    }
}

This program will compile just fine, but when you run it you'll get this as output:


Exception in thread "main" java.lang.NullPointerException

at TestNull.main(TestNull.java:4)


This is a classic runtime exception. There's no need to catch this exception because the only thing we can do is exit. If we do catch it, the program might look like:

public class TestNullCatch {
    public static void main(String[] args) {
        String str = null;

        try {
            int len = str.length();
        } catch (NullPointerException e) {
            System.out.println("Oops: " + e.getMessage());
            System.exit(1);
        }
    }
}

which gives us the output


Oops: null

Note that the getMessage() method will return a String containing whatever error message Java deems appropriate – if there is one. Otherwise it returns a null . This is somewhat less helpful than the default stack trace above.

Let's rewrite the short C program above in Java and illustrate how to catch a checked exception .

import java.io.*;
import java.util.*;

public class FileTest {
    public static void main(String[] args)
    {
        File fd = new File("NotAFile.txt");
        System.out.println("File exists " + fd.exists());

        try {
            FileReader fr = new FileReader(fd);
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}

and the output we get when we execute FileTest is


File exists false

NotAFile.txt (No such file or directory)


By the way, if we don't use the try-catch block in the above program , then it won't compile. We get the compiler error message


FileTestWrong.java:11: unreported exception java.io.FileNotFoundException; must be caught or declared to be thrown
        FileReader fr = new FileReader(fd);
                        ^
1 error

Remember, checked exceptions must be caught. This type of error doesn't show up for unchecked exceptions. This is far from everything you should know about exceptions and exception handling in Java; start digging through the Java tutorials and the Java API!

The Last Word on Coding

Coding is the heart of software development. Code is what you produce. But coding is hard; translating even a good, detailed design into code takes a lot of thought, experience, and knowledge, even for small programs . Depending on the programming language you are using and the target system, programming can be a very time-consuming and difficult task.

That's why taking the time to make your code readable and have the code layout match the logical structure of your design is essential to writing code that is understandable by humans and that works. Adhering to coding standards and conventions, keeping to a consistent style, and including good, accurate comments will help you immensely during debugging and testing. And it will help you six months from now when you come back and try to figure out what the heck you were thinking here.

And finally,

I am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand.

-- Douglas Adams, "Last Chance to See"


[Aug 26, 2019] Defensive programming the good, the bad and the ugly - Enterprise Craftsmanship

Notable quotes:
"... In any case, it's important not to allow those statements to spread across your code base. They contain domain knowledge about what makes data or an operation valid, and thus, should be kept in a single place in order to adhere to the DRY principle . ..."
"... Nulls is another source of bugs in many OO languages due to inability to distinguish nullable and non-nullable reference types. Because of that, many programmers code defensively against them. So much that in many projects almost each public method and constructor is populated by this sort of checks: ..."
"... While defensive programming is a useful technique, make sure you use it properly ..."
"... If you see duplicated pre-conditions, consider extracting them into a separate type. ..."
Aug 26, 2019 | enterprisecraftsmanship.com

Defensive programming: the good, the bad and the ugly

In this post, I want to take a closer look at the practice of defensive programming.

Defensive programming: pre-conditions

Defensive programming stands for the use of guard statements and assertions in your code base (actually, the definition of defensive programming is inconsistent across different sources, but I'll stick to this one). This technique is designed to ensure code correctness and reduce the number of bugs.

Pre-conditions are one of the most widely spread forms of defensive programming. They guarantee that a method can be executed only when some requirements are met. Here's a typical example:

public void CreateAppointment(DateTime dateTime)
{
    if (dateTime.Date < DateTime.Now.AddDays(1).Date)
        throw new ArgumentException("Date is too early");

    if (dateTime.Date > DateTime.Now.AddMonths(1).Date)
        throw new ArgumentException("Date is too late");

    /* Create an appointment */
}

Writing code like this is a good practice as it allows you to quickly react to any unexpected situations, therefore adhering to the fail fast principle .

When implementing guard statements, it's important to make sure you don't repeat them. If you find yourself constantly writing repetitive code to perform some validation, it's a strong sign that you have fallen into the trap of primitive obsession. The repeated guard clause can be as simple as checking that some integer falls into the expected range:

public void DoSomething(int count)
{
    if (count < 1 || count > 100)
        throw new ArgumentException("Invalid count");

    /* Do something */
}

public void DoSomethingElse(int count)
{
    if (count < 1 || count > 100)
        throw new ArgumentException("Invalid count");

    /* Do something else */
}

Or it can relate to some complex business rule which you might not even be able to verbalize yet.

In any case, it's important not to allow those statements to spread across your code base. They contain domain knowledge about what makes data or an operation valid, and thus, should be kept in a single place in order to adhere to the DRY principle .

The best way to do that is to introduce new abstractions for each piece of such knowledge you see repeated in your code base. In the sample above, you can convert the input parameter from integer into a custom type, like this:

public void DoSomething(Count count)
{
    /* Do something */
}

public void DoSomethingElse(Count count)
{
    /* Do something else */
}

public class Count
{
    public int Value { get; private set; }

    public Count(int value)
    {
        if (value < 1 || value > 100)
            throw new ArgumentException("Invalid count");

        Value = value;
    }
}

With properly defined domain concepts, there's no need to duplicate pre-conditions.

Defensive programming: nulls

Nulls are another source of bugs in many OO languages due to the inability to distinguish nullable and non-nullable reference types. Because of that, many programmers code defensively against them. So much so that in many projects almost every public method and constructor is populated by this sort of check:

public class Controller
{
    public Controller(ILogger logger, IEmailGateway gateway)
    {
        if (logger == null)
            throw new ArgumentNullException();
        if (gateway == null)
            throw new ArgumentNullException();

        /* */
    }

    public void Process(User user, Order order)
    {
        if (user == null)
            throw new ArgumentNullException();

        /* */
    }
}

It's true that null checks are essential. If allowed to slip through, nulls can lead to obscure errors down the road. But you still can significantly reduce the number of such validations.

To do that, you need two things. First, define a special Maybe struct that allows you to distinguish nullable and non-nullable reference types. Second, use the Fody.NullGuard library to introduce automatic checks for all input parameters that weren't marked with the Maybe struct.

After that, the code above can be turned into the following one:

public class Controller
{
    public Controller(ILogger logger, IEmailGateway gateway)
    {
        /* */
    }

    public void Process(User user, Maybe<Order> order)
    {
        /* */
    }
}

Note the absence of null checks. The null guard does all the work needed for you.
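
For readers outside C#, the same "make absence explicit in the type" idea can be sketched in C++ with std::optional (this is only an analogy, not the article's code; the Order type and names are invented):

#include <iostream>
#include <optional>
#include <string>

// Stand-in for the article's domain type.
struct Order { std::string id; };

// The parameter type documents that the order may legitimately be absent;
// everything else is implicitly required.
void process(const std::string& user, const std::optional<Order>& order)
{
    if (order)
        std::cout << "processing order " << order->id << " for " << user << "\n";
    else
        std::cout << "no order supplied for " << user << "\n";
}

int main()
{
    process("alice", Order{"A-1"});
    process("bob", std::nullopt);
    return 0;
}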

Defensive programming: assertions

Assertions are another valuable concept. They stand for checking that your assumptions about the code's execution flow are correct by introducing assert statements that are validated at runtime. In practice, this often means validating the output of 3rd-party libraries that you use in your project. It's a good idea not to trust such libraries by default and to always check that the results they produce fall into some expected range.

An example here can be an official library that works with a social provider, such as Facebook SDK client:

public void Register(string facebookAccessToken)
{
    FacebookResponse response = _facebookSdkClient.GetUser(facebookAccessToken);

    if (string.IsNullOrEmpty(response.Email))
        throw new InvalidOperationException("Invalid response from Facebook");

    /* Register the user */
}

public void SignIn(string facebookAccessToken)
{
    FacebookResponse response = _facebookSdkClient.GetUser(facebookAccessToken);

    if (string.IsNullOrEmpty(response.Email))
        throw new InvalidOperationException("Invalid response from Facebook");

    /* Sign in the user */
}

public class FacebookResponse // Part of the SDK
{
    public string FirstName;
    public string LastName;
    public string Email;
}

This code sample assumes that Facebook should always return an email for any registered user and validates that assumption by employing an assertion.

Just as with duplicated pre-conditions, identical assertions should not be allowed. The guideline here is to always wrap official 3rd party libraries with your own gateways which would encapsulate all the work with those libraries, including assertions.

In our case, it would look like this:

public void Register(string facebookAccessToken)
{
    UserInfo user = _facebookGateway.GetUser(facebookAccessToken);

    /* Register the user */
}

public void SignIn(string facebookAccessToken)
{
    UserInfo user = _facebookGateway.GetUser(facebookAccessToken);

    /* Sign in the user */
}

public class FacebookGateway
{
    public UserInfo GetUser(string facebookAccessToken)
    {
        FacebookResponse response = _facebookSdkClient.GetUser(facebookAccessToken);

        if (string.IsNullOrEmpty(response.Email))
            throw new InvalidOperationException("Invalid response from Facebook");

        /* Convert FacebookResponse into UserInfo */
    }
}

public class UserInfo // Our own class
{
    public Maybe<string> FirstName;
    public Maybe<string> LastName;
    public string Email;
}

Note that along with the assertion, we also convert the object of type FacebookResponse, which is a built-in class from the official SDK, to our own UserInfo type. This way, we can be sure that the information about the user always resides in a valid state, because we validated and converted it ourselves.

Summary

While defensive programming is a useful technique, make sure you use it properly: if you see duplicated pre-conditions or assertions, consider extracting them into a separate type or into a gateway that wraps the 3rd-party library.

[Jul 23, 2019] Object-Oriented Programming -- The Trillion Dollar Disaster

While the OO critique is good (although most points are far from new) and to the point, the proposed solution is not. There is no universal opener for creating elegant, reliable programs.
Notable quotes:
"... Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming. ..."
"... The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard. ..."
"... C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any idiotic "object model" c&@p. -- Linus Torvalds, the creator of Linux ..."
"... Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good programming framework should provide mechanisms that prevent us from doing stupid things. ..."
"... Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate code (low signal-to-noise ratio). ..."
Jul 23, 2019 | medium.com

The ultimate goal of every software developer should be to write reliable code. Nothing else matters if the code is buggy and unreliable. And what is the best way to write code that is reliable? Simplicity . Simplicity is the opposite of complexity . Therefore our first and foremost responsibility as software developers should be to reduce code complexity.

Disclaimer

I'll be honest, I'm not a raving fan of object-orientation. Of course, this article is going to be biased. However, I have good reasons to dislike OOP.

I also understand that criticism of OOP is a very sensitive topic -- I will probably offend many readers. However, I'm doing what I think is right. My goal is not to offend, but to raise awareness of the issues that OOP introduces.

I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he designed it. I'm criticizing the modern Java/C# approach to OOP.

I will also admit that I'm angry. Very angry. I think that it is plain wrong that OOP is considered the de-facto standard for code organization by many people, including those in very senior technical positions. It is also wrong that many mainstream languages don't offer any other alternatives to code organization other than OOP.

Hell, I used to struggle a lot myself while working on OOP projects. And I had no single clue why I was struggling this much. Maybe I wasn't good enough? I had to learn a couple more design patterns (I thought)! Eventually, I got completely burned out.

This post sums up my first-hand decade-long journey from Object-Oriented to Functional programming. I've seen it all. Unfortunately, no matter how hard I try, I can no longer find use cases for OOP. I have personally seen OOP projects fail because they become too complex to maintain.


TLDR

Object oriented programs are offered as alternatives to correct ones -- Edsger W. Dijkstra , pioneer of computer science


Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming.

The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard.

Some might disagree with me, but the truth is that modern OOP has never been properly designed. It never came out of a proper research institution (in contrast with Haskell/FP). I do not consider Xerox or another enterprise to be a "proper research institution". OOP doesn't have decades of rigorous scientific research to back it up. Lambda calculus offers a complete theoretical foundation for Functional Programming. OOP has nothing to match that. OOP mainly "just happened".

Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But what are the long-term consequences of using OOP? OOP is a time bomb, set to explode sometime in the future when the codebase gets big enough.

Projects get delayed, deadlines get missed, developers get burned-out, adding in new features becomes next to impossible. The organization labels the codebase as the "legacy codebase" , and the development team plans a rewrite .

OOP is not natural for the human brain, our thought process is centered around "doing" things -- go for a walk, talk to a friend, eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects.

OOP code is non-deterministic -- unlike with functional programming, we're not guaranteed to get the same output given the same inputs. This makes reasoning about the program very hard. As an oversimplified example, the output of 2+2 or calculator.Add(2, 2) is usually four, but sometimes it might turn out to be three, five, or maybe even 1004. The dependencies of the Calculator object might change the result of the computation in subtle but profound ways.


The Need for a Resilient Framework

I know, this may sound weird, but as programmers, we shouldn't trust ourselves to write reliable code. Personally, I am unable to write good code without a strong framework to base my work on. Yes, there are frameworks that concern themselves with some very particular problems (e.g. Angular or ASP.Net).

I'm not talking about the software frameworks. I'm talking about the more abstract dictionary definition of a framework: "an essential supporting structure " -- frameworks that concern themselves with the more abstract things like code organization and tackling code complexity. Even though Object-Oriented and Functional Programming are both programming paradigms, they're also both very high-level frameworks.

Limiting our choices

C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any idiotic "object model" c&@p. -- Linus Torvalds, the creator of Linux

Linus Torvalds is widely known for his open criticism of C++ and OOP. One thing he was 100% right about is limiting programmers in the choices they can make. In fact, the fewer choices programmers have, the more resilient their code becomes. In the quote above, Linus Torvalds highly recommends having a good framework to base our code upon.


Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good programming framework should provide mechanisms that prevent us from doing stupid things.

A good programming framework helps us to write reliable code. First and foremost, it should help reduce complexity by providing the following things:

  1. Modularity and reusability
  2. Proper state isolation
  3. High signal-to-noise ratio

Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate code (low signal-to-noise ratio).

... ... ...

Messaging

Alan Kay coined the term "Object Oriented Programming" in the 1960s. He had a background in biology and was attempting to make computer programs communicate the same way living cells do.


Alan Kay's big idea was to have independent programs (cells) communicate by sending messages to each other. The state of the independent programs would never be shared with the outside world (encapsulation).

That's it. OOP was never intended to have things like inheritance, polymorphism, the "new" keyword, and the myriad of design patterns.

OOP in its purest form

Erlang is OOP in its purest form. Unlike more mainstream languages, it focuses on the core idea of OOP -- messaging. In Erlang, objects communicate by passing immutable messages between objects.

Is there proof that immutable messages are a superior approach compared to method calls?

Hell yes! Erlang is probably the most reliable language in the world. It powers most of the world's telecom (and hence the internet) infrastructure. Some of the systems written in Erlang have reliability of 99.9999999% (you read that right -- nine nines).

Code Complexity

With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder to modify and maintain.

-- Richard Mansfield

The most important aspect of software development is keeping the code complexity down. Period. None of the fancy features matter if the codebase becomes impossible to maintain. Even 100% test coverage is worth nothing if the codebase becomes too complex and unmaintainable .

What makes the codebase complex? There are many things to consider, but in my opinion, the top offenders are: shared mutable state, erroneous abstractions, and low signal-to-noise ratio (often caused by boilerplate code). All of them are prevalent in OOP.


The Problems of State


What is state? Simply put, state is any temporary data stored in memory. Think variables or fields/properties in OOP. Imperative programming (including OOP) describes computation in terms of the program state and changes to that state. Declarative (functional) programming describes the desired results instead, and doesn't specify changes to the state explicitly.

... ... ...

To make the code more efficient, objects are passed not by their value, but by their reference . This is where "dependency injection" falls flat.

Let me explain. Whenever we create an object in OOP, we pass references to its dependencies to the constructor . Those dependencies also have their own internal state. The newly created object happily stores references to those dependencies in its internal state and is then happy to modify them in any way it pleases. And it also passes those references down to anything else it might end up using.

This creates a complex graph of promiscuously shared objects that all end up changing each other's state. This, in turn, causes huge problems since it becomes almost impossible to see what caused the program state to change. Days might be wasted trying to debug such state changes. And you're lucky if you don't have to deal with concurrency (more on this later).

Methods/Properties

The methods or properties that provide access to particular fields are no better than changing the value of a field directly. It doesn't matter whether you mutate an object's state by using a fancy property or method -- the result is the same: mutated state.

Some people say that OOP tries to model the real world. This is simply not true -- OOP has nothing to relate to in the real world. Trying to model programs as objects probably is one of the biggest OOP mistakes.

The real world is not hierarchical

OOP attempts to model everything as a hierarchy of objects. Unfortunately, that is not how things work in the real world. Objects in the real world interact with each other using messages, but they mostly are independent of each other.

Inheritance in the real world

OOP inheritance is not modeled after the real world. The parent object in the real world is unable to change the behavior of child objects at run-time. Even though you inherit your DNA from your parents, they're unable to make changes to your DNA as they please. You do not inherit "behaviors" from your parents, you develop your own behaviors. And you're unable to "override" your parents' behaviors.

The real world has no methods

Does the piece of paper you're writing on have a "write" method ? No! You take an empty piece of paper, pick up a pen, and write some text. You, as a person, don't have a "write" method either -- you make the decision to write some text based on outside events or your internal thoughts.


The Kingdom of Nouns

Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds.

-- Joe Armstrong , creator of Erlang

Objects (or nouns) are at the very core of OOP. A fundamental limitation of OOP is that it forces everything into nouns. And not everything should be modeled as nouns. Operations (functions) should not be modeled as objects. Why are we forced to create a Multiplier class when all we need is a function that multiplies two numbers? Simply have a Multiply function, let data be data and let functions be functions!
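
A small sketch of that contrast (the Multiplier name follows the article's example; everything else is invented):

#include <iostream>

// The "kingdom of nouns" version: a class whose only job is to carry one method.
class Multiplier {
public:
    int Multiply(int a, int b) const { return a * b; }
};

// The plain-function version the author argues for: let functions be functions.
int multiply(int a, int b) { return a * b; }

int main() {
    Multiplier m;
    std::cout << m.Multiply(6, 7) << "\n";   // 42, with an object in the way
    std::cout << multiply(6, 7) << "\n";     // 42, no ceremony
    return 0;
}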

In non-OOP languages, doing trivial things like saving data to a file is straightforward -- very similar to how you would describe an action in plain English.

Real-world example, please!

Sure, going back to the painter example, the painter owns a PaintingFactory . He has hired a dedicated BrushManager , ColorManager , a CanvasManager and a MonaLisaProvider . His good friend zombie makes use of a BrainConsumingStrategy . Those objects, in turn, define the following methods: CreatePainting , FindBrush , PickColor , CallMonaLisa , and ConsumeBrainz .

Of course, this is plain stupidity, and could never have happened in the real world. How much unnecessary complexity has been created for the simple act of drawing a painting?

There's no need to invent strange concepts to hold your functions when they're allowed to exist separately from the objects.


Unit Testing

Automated testing is an important part of the development process and helps tremendously in preventing regressions (i.e. bugs being introduced into existing code). Unit Testing plays a huge role in the process of automated testing.

Some might disagree, but OOP code is notoriously difficult to unit test. Unit Testing assumes testing things in isolation, and to make a method unit-testable:

  1. Its dependencies have to be extracted into a separate class.
  2. Create an interface for the newly created class.
  3. Declare fields to hold the instance of the newly created class.
  4. Make use of a mocking framework to mock the dependencies.
  5. Make use of a dependency-injection framework to inject the dependencies.

How much more complexity has to be created just to make a piece of code testable? How much time was wasted just to make some code testable?

PS: we'd also have to instantiate the entire class in order to test a single method. This will also bring in the code from all of its parent classes.

With OOP, writing tests for legacy code is even harder -- almost impossible. Entire companies have been created ( TypeMock ) around the issue of testing legacy OOP code.

Boilerplate code

Boilerplate code is probably the biggest offender when it comes to the signal-to-noise ratio. Boilerplate code is "noise" that is required to get the program to compile. Boilerplate code takes time to write and makes the codebase less readable because of the added noise.

While "program to an interface, not to an implementation" is the recommended approach in OOP, not everything should become an interface. We'd have to resort to using interfaces in the entire codebase, for the sole purpose of testability. We'd also probably have to make use of dependency injection, which further introduced unnecessary complexity.

Testing private methods

Some people say that private methods shouldn't be tested. I tend to disagree: unit testing is called "unit" for a reason -- test small units of code in isolation. Yet testing of private methods in OOP is nearly impossible. We shouldn't be making private methods internal just for the sake of testability.

In order to achieve testability of private methods, they usually have to be extracted into a separate object. This, in turn, introduces unnecessary complexity and boilerplate code.


Refactoring

Refactoring is an important part of a developer's day-to-day job. Ironically, OOP code is notoriously hard to refactor. Refactoring is supposed to make the code less complex, and more maintainable. On the contrary, refactored OOP code becomes significantly more complex -- to make the code testable, we'd have to make use of dependency injection, and create an interface for the refactored class. Even then, refactoring OOP code is really hard without dedicated tools like Resharper.

[embedded code sample from the original article: extracting a single method the OOP way]

In the simple example above, the line count has more than doubled just to extract a single method. Why does refactoring create even more complexity, when the code is being refactored in order to decrease complexity in the first place?
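
The embedded sample did not survive here, but the shape of the complaint can be reconstructed roughly as follows (a sketch with invented names, not the article's code): extracting one private check "the testable OOP way" typically means adding an interface, an implementation class, a stored field, and constructor injection.

#include <string>

// Before: one private helper inside the class.
class OrderService {
    bool isValidInput(const std::string& s) const { return !s.empty(); }
public:
    void process(const std::string& s) { if (isValidInput(s)) { /* ... */ } }
};

// After the "testable" refactor: an interface, an implementation,
// a stored dependency and constructor injection, just to move one check out.
struct IInputValidator {
    virtual ~IInputValidator() = default;
    virtual bool isValid(const std::string& s) const = 0;
};

struct InputValidator : IInputValidator {
    bool isValid(const std::string& s) const override { return !s.empty(); }
};

class RefactoredOrderService {
    const IInputValidator& validator;
public:
    explicit RefactoredOrderService(const IInputValidator& v) : validator(v) {}
    void process(const std::string& s) { if (validator.isValid(s)) { /* ... */ } }
};

int main() {
    OrderService plain;
    plain.process("some input");

    InputValidator v;
    RefactoredOrderService refactored(v);
    refactored.process("some input");
    return 0;
}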

Contrast this to a similar refactor of non-OOP code in JavaScript:

[embedded code sample from the original article: the same extraction in plain JavaScript]

The code has literally stayed the same -- we simply moved the isValidInput function to a different file and added a single line to import that function. We've also added _isValidInput to the function signature for the sake of testability.

This is a simple example, but in practice the complexity grows exponentially as the codebase gets bigger.

And that's not all. Refactoring OOP code is extremely risky . Complex dependency graphs and state scattered all over OOP codebase, make it impossible for the human brain to consider all of the potential issues.


The Band-aids

What do we do when something is not working? It is simple, we only have two options -- throw it away or try fixing it. OOP is something that can't be thrown away easily, millions of developers are trained in OOP. And millions of organizations worldwide are using OOP.

You probably see now that OOP doesn't really work , it makes our code complex and unreliable. And you're not alone! People have been thinking hard for decades trying to address the issues prevalent in OOP code. They've come up with a myriad of design patterns.

Design patterns

OOP provides a set of guidelines that should theoretically allow developers to incrementally build larger and larger systems: SOLID principle, dependency injection, design patterns, and others.

Unfortunately, the design patterns are nothing other than band-aids. They exist solely to address the shortcomings of OOP. A myriad of books has even been written on the topic. They wouldn't have been so bad, had they not been responsible for the introduction of enormous complexity to our codebases.

The problem factory

In fact, it is impossible to write good and maintainable Object-Oriented code.

On one side of the spectrum we have an OOP codebase that is inconsistent and doesn't seem to adhere to any standards. On the other side of the spectrum, we have a tower of over-engineered code, a bunch of erroneous abstractions built one on top of one another. Design patterns are very helpful in building such towers of abstractions.

Soon, adding in new functionality, and even making sense of all the complexity, gets harder and harder. The codebase will be full of things like SimpleBeanFactoryAwareAspectInstanceFactory , AbstractInterceptorDrivenBeanDefinitionDecorator , TransactionAwarePersistenceManagerFactoryProxy or RequestProcessorFactoryFactory .

Precious brainpower has to be wasted trying to understand the tower of abstractions that the developers themselves have created. The absence of structure is in many cases better than having bad structure (if you ask me).

Image source: https://www.reddit.com/r/ProgrammerHumor/comments/418x95/theory_vs_reality/

Further reading: FizzBuzzEnterpriseEdition

[Jul 22, 2019] Is Object-Oriented Programming a Trillion Dollar Disaster - Slashdot

Jul 22, 2019 | developers.slashdot.org

Is Object-Oriented Programming a Trillion Dollar Disaster? (medium.com)
Posted by EditorDavid on Monday July 22, 2019 @01:04AM from the OOPs dept.

Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling object-oriented programming "a trillion dollar disaster." Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming... Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns. OOP makes common development practices, like refactoring and testing, needlessly hard...

Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But what are the long-term consequences of using OOP? OOP is a time bomb, set to explode sometime in the future when the codebase gets big enough. Projects get delayed, deadlines get missed, developers get burned-out, adding in new features becomes next to impossible . The organization labels the codebase as the " legacy codebase ", and the development team plans a rewrite .... OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises...

I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he designed it. I'm criticizing the modern Java/C# approach to OOP... I think that it is plain wrong that OOP is considered the de-facto standard for code organization by many people, including those in very senior technical positions. It is also wrong that many mainstream languages don't offer any other alternatives to code organization other than OOP.

The essay ultimately blames Java for the popularity of OOP, citing Alan Kay's comment that Java "is the most distressing thing to happen to computing since MS-DOS." It also quotes Linus Torvalds's observation that "limiting your project to C means that people don't screw things up with any idiotic 'object model'."

And it ultimately suggests Functional Programming as a superior alternative, making the following assertions about OOP:

"OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again... [E]ncapsulation, in fact, is glorified global state." "OOP typically requires a lot of boilerplate code (low signal-to-noise ratio)." "Some might disagree, but OOP code is notoriously difficult to unit test... [R]efactoring OOP code is really hard without dedicated tools like Resharper." "It is impossible to write good and maintainable Object-Oriented code."

segedunum ( 883035 ) , Monday July 22, 2019 @05:36AM ( #58964224 )

Re:Not Tiresome, Hilariously Hypocritical ( Score: 4 , Informative)
There's no objective and open evidence that OOP is better than plain procedural programming...

...which is followed by the author's subjective opinions about why procedural programming is better than OOP. There's no objective comparison of the pros and cons of OOP vs. procedural, just a rant about some of OOP's problems.

We start from the point-of-view that OOP has to prove itself. Has it? Has any project or programming exercise ever taken less time because it is object-oriented?

Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems...

...says the person who took the time to write a 6,000 word rant on "why I hate OOP".

Sadly, that was something you hallucinated. He doesn't say that anywhere.

mfnickster ( 182520 ) , Monday July 22, 2019 @10:54AM ( #58965660 )
Re:Tiresome ( Score: 5 , Interesting)

Inheritance, while not "inherently" bad, is often the wrong solution. See: Why extends is evil [javaworld.com]

Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny little anecdote in Cocoa Programming for Mac OS X [google.com]:

"Once upon a time, there was a company called Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the peak of its mindshare, I met one of its engineers at a trade show. I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello, World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler. Then he started generating code: dozens of lines to get the button and the text field onto the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever."

Darinbob ( 1142669 ) , Monday July 22, 2019 @03:00AM ( #58963760 )
Re:The issue ( Score: 5 , Insightful)

Almost every programming methodology can be abused by people who really don't know how to program well, or who don't want to. They'll happily create frameworks, implement new development processes, and chart tons of metrics, all while avoiding the work of getting the job done. In some cases the person who writes the most code is the same one who gets the least amount of useful work done.

So, OOP can be misused the same way. Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. For example, files in Unix are essentially an object-oriented system. It's just data encapsulation and separating work into manageable modules. That's how it was before anyone ever came up with the dumb name "full-stack developer".

cardpuncher ( 713057 ) , Monday July 22, 2019 @04:06AM ( #58963948 )
Re:The issue ( Score: 5 , Insightful)

As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the rise of OOP with some curiosity. I think there's a general consensus that abstraction and re-usability are good things - they're the reason subroutines exist - the issue is whether they are ends in themselves.

I struggle with the whole concept of "design patterns". There are clearly common themes in software, but there seems to be a great deal of pressure these days to make your implementation fit some pre-defined template rather than thinking about the application's specific needs for state and concurrency. I have seen some rather eccentric consequences of "patternism".

Correctly written, OOP code allows you to encapsulate just the logic you need for a specific task and to make that specific task available in a wide variety of contexts by judicious use of templating and virtual functions that obviate the need for "refactoring". Badly written, OOP code can have as many dangerous side effects and as much opacity as any other kind of code. However, I think the key factor is not the choice of programming paradigm, but the design process. You need to think first about what your code is intended to do and in what circumstances it might be reused. In the context of a larger project, it means identifying commonalities and deciding how best to implement them once. You need to document that design and review it with other interested parties. You need to document the code with clear information about its valid and invalid use. If you've done that, testing should not be a problem.

Some people seem to believe that OOP removes the need for some of that design and documentation. It doesn't and indeed code that you intend to be reused needs *more* design and documentation than the glue that binds it together in any one specific use case. I'm still a firm believer that coding begins with a pencil, not with a keyboard. That's particularly true if you intend to design abstract interfaces that will serve many purposes. In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the costs - and that usually means you not only know your code will be genuinely reusable but will also genuinely be reused.

ImdatS ( 958642 ) , Monday July 22, 2019 @04:43AM ( #58964070 ) Homepage
Re:The issue ( Score: 5 , Insightful)
[...] I'm still a firm believer that coding begins with a pencil, not with a keyboard. [...]

This!
In fact, even more: I'm a firm believer that coding begins with a pencil designing the data model that you want to implement.

Everything else is just code that operates on that data model. Though I agree with most of what you say, I believe the classical "MVC" design pattern is still valid. And, you know what, there is a reason why it is called "M-V-C": start with the Model, continue with the View, and finalize with the Controller. MVC not only stands for Model-View-Controller but also for the order in which each should be implemented.

And preferably, as you stated correctly, "... start with pencil & paper ..."
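A bare-bones Java sketch of that model-first ordering (class names invented, no framework assumed):

// 1. Model: the data and the rules that operate on it; written first.
class CounterModel {
    private int value;
    void increment() { value++; }
    int value() { return value; }
}

// 2. View: renders the model; knows nothing about input handling.
class CounterView {
    void render(CounterModel model) { System.out.println("count = " + model.value()); }
}

// 3. Controller: turns user actions into model updates, then asks the view to redraw.
class CounterController {
    private final CounterModel model = new CounterModel();
    private final CounterView view = new CounterView();
    void onClick() { model.increment(); view.render(model); }
}

public class MvcSketch {
    public static void main(String[] args) { new CounterController().onClick(); }
}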

Rockoon ( 1252108 ) , Monday July 22, 2019 @05:23AM ( #58964192 )
Re:The issue ( Score: 5 , Insightful)
I struggle with the whole concept of "design patterns".

Because design patterns are stupid.

A reasonable programmer can understand reasonable code so long as the data is documented, even when the code isn't documented, but will struggle immensely if it were the other way around. Bad programmers create objects for objects' sake, and because of that they have to follow so-called "design patterns", because no amount of code commenting makes the code easily understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't make the code easier to read, just easier to write.

Those OOP fanatics, if they do "document" their code, add comments like "// increment the index" which is useless shit.

The big win of OOP is only in the encapsulation of the data with the code, and great code treats objects like data structures with attached subroutines, not as "objects"; it documents the fuck out of the contained data while more or less letting the code document itself, and keeps OO elements to a minimum. As it turns out, OOP is just much more effort than procedural and it rarely pays off to invest that effort, at least for me.

Z00L00K ( 682162 ) , Monday July 22, 2019 @05:14AM ( #58964162 ) Homepage
Re:The issue ( Score: 4 , Insightful)

The problem isn't the object orientation paradigm itself, it's how it's applied.

The big problem in any project is that you have to understand how to break down the final solution into modules that can be developed largely independently of each other, and to identify the items that are shared. But even items that look identical now may not stay that way in the long run, so shared code can even be dangerous, because future developers don't know that by fixing problem A they create problems B, C, D and E.

Futurepower(R) ( 558542 ) writes: < MJennings.USA@NOT_any_of_THISgmail.com > on Monday July 22, 2019 @06:03AM ( #58964326 ) Homepage
Eternal September? ( Score: 4 , Informative)

Eternal September [wikipedia.org]

gweihir ( 88907 ) , Monday July 22, 2019 @07:48AM ( #58964672 )
Re:The issue ( Score: 3 )
Any time you make something easier, you lower the bar as well and now have a pack of idiots that never could have been hired if it weren't for a programming language that stripped out a lot of complexity for them.

Exactly. There are quite a few aspects of writing code that are difficult regardless of language and there the difference in skill and insight really matters.

Joce640k ( 829181 ) , Monday July 22, 2019 @04:14AM ( #58963972 ) Homepage
Re:The issue ( Score: 2 )

OO programming doesn't have any real advantages for small projects.

ImdatS ( 958642 ) , Monday July 22, 2019 @04:36AM ( #58964040 ) Homepage
Re:The issue ( Score: 5 , Insightful)

I have about 35+ years of software development experience, including with procedural, OOP and functional programming languages.

My experience is: The question "is procedural better than OOP or functional?" (or vice-versa) has a single answer: "it depends".

Like in your cases above, I would exactly do the same: use some procedural language that solves my problem quickly and easily.

In large-scale applications, I mostly used OOP (having learned OOP with Smalltalk & Objective-C). I don't like C++ or Java - but that's a matter of personal preference.

I use Python for large-scale scripts or machine learning/AI tasks.

I use Perl for short scripts that need to do a quick task.

Procedural is in fact easier to grasp for beginners as OOP and functional require a different way of thinking. If you start developing software, after a while (when the project gets complex enough) you will probably switch to OOP or functional.

Again, in my opinion neither is better than the other (procedural, OOP or functional). It just depends on the task at hand (and of course on the experience of the software developer).

spazmonkey ( 920425 ) , Monday July 22, 2019 @01:22AM ( #58963430 )
it's the way OOP is taught ( Score: 5 , Interesting)

There is nothing inherently wrong with some of the functionality it offers; it's the way OOP is abused as a substitute for basic good programming practices. I was helping interns - students from a local CC - deal with idiotic assignments like making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF? A room full of career programmers could not even figure out how you were supposed to do that, much less why. What was worse was a lack of understanding of basic programming skill, or even the use of variables, as the kids were being taught that EVERY program was to be assembled solely by sticking together bits of libraries. There was no coding, just hunting for snippets of preexisting code to glue together. Zero idea they could add their own, much less how to do it. OOP isn't the problem; it's the idea that it replaces basic programming skills and best practice.
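For illustration only (a hypothetical reconstruction of such an assignment, not the poster's actual code), the contrast looks roughly like this:

import java.util.Random;

// The "USING CLASSES" version: a whole type whose only job is to wrap one library call.
class RandomNumberGenerator {
    private final Random random = new Random();
    int nextBetween(int low, int high) { return low + random.nextInt(high - low + 1); }
}

public class RngAssignment {
    public static void main(String[] args) {
        // Procedural: one line does the job.
        System.out.println(new Random().nextInt(100));

        // Class-wrapped: the same thing behind extra ceremony.
        System.out.println(new RandomNumberGenerator().nextBetween(0, 99));
    }
}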

sjames ( 1099 ) , Monday July 22, 2019 @02:30AM ( #58963680 ) Homepage Journal
Re:it's the way OOP is taught ( Score: 5 , Interesting)

That, and the obsession with absofrackinglutely EVERYTHING having to be a formally declared object, including the whole program being an object with a run() method.

Some things actually cry out to be objects, some not so much. Generally, I find that my most readable and maintainable code turns out to be a procedural program that manipulates objects.

Even there, some things just naturally want to be a struct or just an array of values.
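A small sketch of that style (invented example, Java 16+ for the record syntax): a plain data record plus a procedural routine that walks an array of them.

// Plain data: just fields, no behavior to speak of (a "struct" in spirit).
record Point(double x, double y) {}

public class ProceduralOverData {
    // Procedural code does the work; the data objects stay dumb.
    static double pathLength(Point[] points) {
        double total = 0;
        for (int i = 1; i < points.length; i++) {
            double dx = points[i].x() - points[i - 1].x();
            double dy = points[i].y() - points[i - 1].y();
            total += Math.hypot(dx, dy);
        }
        return total;
    }

    public static void main(String[] args) {
        Point[] path = { new Point(0, 0), new Point(3, 4), new Point(3, 8) };
        System.out.println(pathLength(path));   // 5.0 + 4.0 = 9.0
    }
}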

The same is true of most ingenious ideas in programming. It's one thing if code is demonstrating a particular idea, but production code is supposed to be there to do work, not grind an academic ax.

For example, slavish adherence to "patterns". They're quite useful for thinking about code and talking about code, but they shouldn't be the end of the discussion. They work better as a starting point. Some programs seem to want patterns to be mixed and matched.

In reality those problems are just cargo cult programming one level higher.

I suspect a lot of that is because too many developers barely grasp programming and never learned to go beyond the patterns they were explicitly taught.

When all you have is a hammer, the whole world looks like a nail.

bradley13 ( 1118935 ) , Monday July 22, 2019 @02:15AM ( #58963622 ) Homepage
It depends... ( Score: 5 , Insightful)

There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They know OOP, so they think that every problem must be solved in an OOP way. In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modelling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm.

In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules. For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What "object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative, static "generate answer" method makes at least as much sense.
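A sketch of the two options described above, with invented names, just to show the contrast:

// Option 1: an "Answer" object that generates itself in its constructor.
class Answer {
    private final String text;
    Answer(String inquiry) { this.text = "You asked: " + inquiry; }
    String text() { return text; }
}

public class AnswerDemo {
    // Option 2: an imperative, static method that does the same work with less ceremony.
    static String generateAnswer(String inquiry) { return "You asked: " + inquiry; }

    public static void main(String[] args) {
        System.out.println(new Answer("Is it raining?").text());
        System.out.println(generateAnswer("Is it raining?"));
    }
}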

There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO is the only possible answer. The world is not a nail.

Beechmere ( 538241 ) , Monday July 22, 2019 @02:31AM ( #58963684 )
Class? Object? ( Score: 5 , Interesting)

I'm approaching 60, and I've been coding in COBOL, VB, FORTRAN, REXX, and SQL for almost 40 years. I remember seeing Object Oriented Programming being introduced in the 80s, and I went on a course once (paid for by work). I remember not understanding the concept of "Classes", and my impression was that the software we were buying was just inventing stupid new words for old familiar constructs (e.g. Files, Records, Rows, Tables, etc.). So I never transitioned away from my reliable mainframe programming platform. I thought the phrase OOP had died out long ago, along with "Client Server" (whatever that meant). I'm retiring in a few years, and the mainframe will outlive me. Everything else is buggy.

cb88 ( 1410145 ) , Monday July 22, 2019 @03:11AM ( #58963794 )
Going back to Torvald's quote.... ( Score: 5 , Funny)

"limiting your project to C means that people don't screw things up with any idiotic 'object model'."

GTK... hold my beer... but it is not a good argument against OOP languages. First, let's see how OOP came into being. OOP was designed to provide encapsulation, like components, and to support reuse and code sharing. It was the next step up from modules and units, which were better than libraries because functions and procedures had namespaces, which helped structure code. OOP is a great idea when writing UI toolkits or similar stuff, as you can ...

DrXym ( 126579 ) , Monday July 22, 2019 @04:57AM ( #58964116 )
No ( Score: 3 )

Like all things, OO is fine in moderation, but it's easy to go completely overboard: decomposing, normalizing, producing enormous inheritance trees. Yes, your enormous UML diagram looks impressive, and yes, it will be incomprehensible, fragile and horrible to maintain.

That said, it's completely fine in moderation. The same goes for functional programming. Most programmers can wrap their heads around things like functions, closures / lambdas, streams and so on. But if you mean true functional programming then forget it.

As for the kernel's choice to use C, that really boils down to the fact that a kernel needs to be lower level than a typical user-land application. It has to do its own memory allocation and other things that were beyond C++ at the time. Neither the STL, nor new / delete, nor exceptions and unwinding would have been usable, and at that point why even bother? That doesn't mean C is wonderful or doesn't inflict its own pain and bugs on development. But at the time, it was the only sane choice.

[Jul 22, 2019] Almost right

Jul 22, 2019 | developers.slashdot.org
Tough Love ( 215404 ), Monday July 22, 2019 @01:27AM ( #58963442 )

The entire software world is a multi-trillion dollar disaster.

Agile, Waterfall, Oop, fucking Javascript or worse its wannabe spawn of the devil Node. C, C++, Java wankers, did I say wankers? Yes wankers.

IT architects, pundit of the week, carpetbaggers, Aspies, total incompetents moving from job to job, you name it.

Disaster, complete and utter. Anybody who doesn't know this hasn't been paying attention.

About the only bright spot is a few open source projects like Linux Kernel, Postgres, Samba, Squid etc, totally outnumbered by wankers and posers.

[Jul 01, 2019] I worked twenty years in commercial software development, including in aviation for UA, and while Indian software developers are capable, their corporate culture is completely different, as it is based on feudal workplace relations between subordinates and management that result in extreme cronyism

Notable quotes:
"... Being powerless within calcifies totalitarian corporate culture ..."
"... ultimately promoted wide spread culture of obscurantism and opportunism what amounts to extreme office politics of covering their own butts often knowing that entire development strategy is flawed, as long as they are not personally blamed or if they in fact benefit by collapse of the project. ..."
"... As I worked side by side and later as project manager with Indian developers I can attest to that culture which while widely spread also among American developers reaches extremes among Indian corporations which infect often are engaged in fraud to be blamed on developers. ..."
Jul 01, 2019 | www.moonofalabama.org
dh-mtl , Jun 30, 2019 3:51:11 PM | 29

@Kalen , Jun 30, 2019 12:58:14 PM | 13

The programmers in India are well capable of writing good software. The difficulty lies in communicating the design requirements for the software. If they do not know in detail how air planes are engineered, they will implement the design to the letter but not to its intent.

I worked twenty years in commercial software development, including in aviation for UA, and while Indian software developers are capable, their corporate culture is completely different, as it is based on feudal workplace relations between subordinates and management that result in extreme cronyism, far exceeding that in the US. Such relations are based not only on extreme exploitation (few jobs, hundreds of qualified candidates) but also on personal, almost paternal relations that preclude the required independence of judgment and practically eliminate any major critical discussion about the efficacy of technological solutions and their risks.

Being powerless within this calcified, totalitarian corporate culture, and facing the alternative of hurting family-like relations with bosses and their feelings (bosses who had committed themselves, emotionally and financially, to certain often wrong solutions dictated more by margins than by technological imperatives), ultimately promoted a widespread culture of obscurantism and opportunism, which amounts to extreme office politics of covering their own butts, often knowing that the entire development strategy is flawed, as long as they are not personally blamed or if they in fact benefit from the collapse of the project.

As I worked side by side, and later as project manager, with Indian developers, I can attest to that culture, which, while widespread among American developers as well, reaches extremes in Indian corporations, which in fact are often engaged in fraud that ends up being blamed on developers.

In fact, it is a shocking contrast with German culture, where a project is almost always discussed, analyzed, understood, and fully supported in its entirety by every member of the team; otherwise developers often simply refuse to work on it, citing professional ethics. A high-quality social welfare state and handsome unemployment benefits definitely supported such an ethical stand back then.

While what I describe happened over twenty years ago, I believe it is still applicable.

[Jun 30, 2019] Design Genius Jony Ive Leaves Apple, Leaving Behind Crapified Products That Cannot Be Repaired naked capitalism

Notable quotes:
"... Honestly, since 2015 feels like Apple wants to abandon it's PC business but just doesn't know how so ..."
"... The new line seems like a valid refresh, but the prices are higher than ever, and remember young people are earning less than ever, so I still think they are looking for a way out of the PC trade, maybe this refresh is to just buy time for an other five years before they close up. ..."
"... I wonder how much those tooling engineers in the US make compared to their Chinese competitors? It seems like a neoliberal virtuous circle: loot/guts education, then find skilled labor from places that still support education, by moving abroad or importing workers, reducing wages and further undermining the local skill base. ..."
"... I sympathize with y'all. It's not uncommon for good products to become less useful and more trouble as the original designers, etc., get arrogant from their success and start to believe that every idea they have is a useful improvement. Not even close. Too much of fixing things that aren't broken and gilding lilies. ..."
Jun 30, 2019 | www.nakedcapitalism.com

As iFixit notes :

The iPod, the iPhone, the MacBook Air, the physical Apple Store, even the iconic packaging of Apple products -- these products changed how we view and use their categories, or created new categories, and will be with us a long time.

But the title of that iFixit post, Jony Ive's Fragmented Legacy: Unreliable, Unrepairable, Beautiful Gadgets , makes clear that those beautiful products carried with them considerable costs- above and beyond their high prices. They're unreliable, and difficult to repair.

Ironically, both Jobs and Ive were inspired by Dieter Rams, whom iFixit calls "the legendary industrial designer renowned for functional and simple consumer products." And unlike Apple, Rams believed that good design didn't have to come at the expense of either durability or the environment:

Rams loves durable products that are environmentally friendly. That's one of his 10 principles for good design : "Design makes an important contribution to the preservation of the environment." But Ive has never publicly discussed the dissonance between his inspiration and Apple's disposable, glued-together products. For years, Apple has openly combated green standards that would make products easier to repair and recycle, stating that they need "complete design flexibility" no matter the impact on the environment.

Complete Design Flexibility Spells Environmental Disaster

In fact, that complete design flexibility – at least as practiced by Ive – has resulted in crapified products that are an environmental disaster. Their lack of durability means they must be repaired to be functional, and the lack of repairability means many of these products end up being tossed prematurely – no doubt not a bug, but a feature. As Vice recounts :

But history will not be kind to Ive, to Apple, or to their design choices. While the company popularized the smartphone and minimalistic, sleek, gadget design, it also did things like create brand new screws designed to keep consumers from repairing their iPhones.

Under Ive, Apple began gluing down batteries inside laptops and smartphones (rather than screwing them down) to shave off a fraction of a millimeter at the expense of repairability and sustainability.

It redesigned MacBook Pro keyboards with mechanisms that are, again, a fraction of a millimeter thinner, but that are easily defeated by dust and crumbs (the computer I am typing on right now -- which is six months old -- has a busted spacebar and 'r' key). These keyboards are not easily repairable, even by Apple, and many MacBook Pros have to be completely replaced due to a single key breaking. The iPhone 6 Plus had a design flaw that led to its touch screen spontaneously breaking -- it then told consumers there was no problem for months before ultimately creating a repair program. Meanwhile, Apple's own internal tests showed those flaws. He designed AirPods, which feature an unreplaceable battery that must be physically destroyed in order to open.

Vice also notes that in addition to Apple's products becoming "less modular, less consumer friendly, less upgradable, less repairable, and, at times, less functional than earlier models", Apple's design decisions have not been confined to Apple. Instead, "Ive's influence is obvious in products released by Samsung, HTC, Huawei, and others, which have similarly traded modularity for sleekness."

Right to Repair

As I've written before, Apple is a leading opponent of giving consumers a right to repair. Nonetheless, there's been some global progress on this issue (see Global Gains on Right to Repair). And we've also seen a widening of support in the US for such a right. The issue has arisen in the current presidential campaign, with Elizabeth Warren throwing down the gauntlet by endorsing a right to repair for farm tractors. The New York Times has also taken up the cause more generally (see Right to Repair Initiatives Gain Support in US). More than twenty states are considering enacting right to repair statutes.


samhill , June 30, 2019 at 5:41 pm

I've been using Apple since 1990, I concur with the article about h/w and add that from Snow Leopard to Sierra the OSX was buggy as anything from the Windows world if not more so. Got better with High Sierra but still not up to the hype. I haven't lived with Mojave. I use Apple out of habit, haven't felt the love from them since Snow Leopard, exactly when they became a cell phone company. People think Apple is Mercedes and PCs are Fords, but for a long time now in practical use, leaving aside the snazzy aesthetics, under the hood it's GM vs Ford. I'm not rich enough to buy a $1500 non-upgradable, non-repairable product so the new T2 protected computers can't be for me.

The new Dell XPS's are tempting, they got the right idea, if you go to their service page you can dl complete service instructions, diagrams, and blow ups. They don't seem at all worried about my hurting myself.

In the last few years PCs offer what before I could only get from Apple; good screen, back lit keyboard, long battery life, trim size.

Honestly, since 2015 it feels like Apple wants to abandon its PC business but just doesn't know how, so it's trying to drive off all the old legacy power users, the creative people that actually work hard for their money, exchanging them for rich dilettantes, hedge fund managers, and status seekers – an easier crowd to finally close up shop on.

The new line seems like a valid refresh, but the prices are higher than ever, and remember, young people are earning less than ever, so I still think they are looking for a way out of the PC trade; maybe this refresh is just to buy time for another five years before they close up.

When you start thinking like this about a company you've been loyal to for 30 years something is definitely wrong.

TG , June 30, 2019 at 6:09 pm

The reason that Apple moved the last of its production to China is, quite simply, that China now has basically the entire industrial infrastructure that we used to have. We have been hollowed out, and are now essentially third-world when it comes to industry. The entire integrated supply chain that defines an industrial power, is now gone.

The part about China no longer being a low-wage country is correct. China's wages have been higher than Mexico's for some time. But the part about the skilled workers is a slap in the face.

How can US workers be skilled at manufacturing, when there are no longer any jobs here where they can learn or use those skills?

fdr-fan , June 30, 2019 at 6:10 pm

A thin rectangle isn't more beautiful than a thick rectangle. They're both just rectangles.

Skip Intro , June 30, 2019 at 2:14 pm

I wonder how much those tooling engineers in the US make compared to their Chinese competitors? It seems like a neoliberal virtuous circle: loot/guts education, then find skilled labor from places that still support education, by moving abroad or importing workers, reducing wages and further undermining the local skill base.

EMtz , June 30, 2019 at 4:08 pm

They lost me when they made the iMac so thin it couldn't play a CD – and had the nerve to charge $85 for an Apple player. Bought another brand for $25. I don't care that it's not as pretty. I do care that I had to buy it at all.

I need a new cellphone. You can bet it won't be an iPhone.

John Zelnicker , June 30, 2019 at 4:24 pm

Jerri-Lynn – Indeed, a great article.

Although I have never used an Apple product, I sympathize with y'all. It's not uncommon for good products to become less useful and more trouble as the original designers, etc., get arrogant from their success and start to believe that every idea they have is a useful improvement. Not even close. Too much of fixing things that aren't broken and gilding lilies.

Charles Leseau , June 30, 2019 at 5:13 pm

Worst computer I've ever owned: Apple Macbook Pro, c. 2011 or so.

Died within 2 years, and also more expensive than the desktops I've built since that absolutely crush it in every possible performance metric (and last longer).

Meanwhile, I also still use a $300 Best Buy Toshiba craptop that has now lasted for 8 straight years.

Never again.

Alfred , June 30, 2019 at 5:23 pm

"Beautiful objects" – aye, there's the rub. In point of fact, the goal of industrial design is not to create beautiful objects. It is the goal of the fine arts to create beautiful objects. The goal of design is to create useful things that are easy to use and are effective at their tasks. Some -- including me -- would add to those most basic goals, the additional goals of being safe to use, durable, and easy to repair; perhaps even easy to adapt or suitable for recycling, or conservative of precious materials. The principles of good product design are laid out admirably in the classic book by Donald A. Norman, The Design of Everyday Things (1988). So this book was available to Jony Ive (born 1967) during his entire career (which overlapped almost exactly the wonder years of Postmodernism – and therein lies a clue). It would indeed be astonishing to learn that Ive took no notice of it. Yet Norman's book can be used to show that Ive's Apple violated so many of the principles of good design, so habitually, as to raise the suspicion that the company was not engaged in "product design" at all. The output Apple in the Ive era, I'd say, belongs instead to the realm of so-called "commodity aesthetics," which aims to give manufactured items a sufficiently seductive appearance to induce their purchase – nothing more. Aethetics appears as Dieter Rams's principle 3, as just one (and the only purely commercial) function in his 10; so in a theoretical context that remains ensconced within a genuine, Modernist functionalism. But in the Apple dispensation that single (aesthetic) principle seems to have subsumed the entire design enterprise – precisely as one would expect from "the cultural logic of late capitalism" (hat tip to Mr Jameson). Ive and his staff of formalists were not designing industrial products, or what Norman calls "everyday things," let alone devices; they were aestheticizing products in ways that first, foremost, and almost only enhanced their performance as expressions of a brand. Their eyes turned away from the prosaic prize of functionality to focus instead on the more profitable prize of sales -- to repeat customers, aka the devotees of 'iconic' fetishism. Thus did they serve not the masses but Mammon, and they did so as minions of minimalism. Nor was theirs the minimalism of the Frankfurt kitchen, with its deep roots in ethics and ergonomics. It was only superficially Miesian. Bauhaus-inspired? Oh, please. Only the more careless readers of Tom Wolfe and Wikipedia could believe anything so preposterous. Surely Steve Jobs, he of the featureless black turtleneck by Issey Miyake, knew better. Anyone who has so much as walked by an Apple Store, ever, should know better. And I guess I should know how to write shorter

[Jun 29, 2019] Hiring aircraft computer engineers at $9/hr by Boeing is a great idea. Who could argue with smart cost saving?

Jun 29, 2019 | www.zerohedge.com

Anonymous IX , 3 minutes ago link

I love it. A company which fell in love so much with their extraordinary profits that they sabotaged their design and will now suffer enormous financial consequences. They're lucky to have all their defense/military contracts.

[Jun 29, 2019] Boeing Outsourced Its 737 MAX Software To $9-Per-Hour Engineers

Jun 29, 2019 | www.zerohedge.com

The software at the heart of the Boeing 737 MAX crisis was developed at a time when the company was laying off experienced engineers and replacing them with temporary workers making as little as $9 per hour, according to Bloomberg .

In an effort to cut costs, Boeing was relying on subcontractors making paltry wages to develop and test its software. Often times, these subcontractors would be from countries lacking a deep background in aerospace, like India.

Boeing had recent college graduates working for Indian software developer HCL Technologies Ltd. in a building across from Seattle's Boeing Field, in flight test groups supporting the MAX. The coders from HCL designed to specifications set by Boeing but, according to Mark Rabin, a former Boeing software engineer, "it was controversial because it was far less efficient than Boeing engineers just writing the code."

Rabin said: "...it took many rounds going back and forth because the code was not done correctly."

In addition to cutting costs, the hiring of Indian companies may have landed Boeing orders for the Indian military and commercial aircraft, like a $22 billion order received in January 2017 . That order included 100 737 MAX 8 jets and was Boeing's largest order ever from an Indian airline. India traditionally orders from Airbus.

HCL engineers helped develop and test the 737 MAX's flight display software while employees from another Indian company, Cyient Ltd, handled the software for flight test equipment. In 2011, Boeing named Cyient, then known as Infotech, to a list of its "suppliers of the year".

One HCL employee posted online: "Provided quick workaround to resolve production issue which resulted in not delaying flight test of 737-Max (delay in each flight test will cost very big amount for Boeing) ."

But Boeing says the company didn't rely on engineers from HCL for the Maneuvering Characteristics Augmentation System, which was linked to both last October's crash and March's crash. The company also says it didn't rely on Indian companies for the cockpit warning light issue that was disclosed after the crashes.

A Boeing spokesperson said: "Boeing has many decades of experience working with supplier/partners around the world. Our primary focus is on always ensuring that our products and services are safe, of the highest quality and comply with all applicable regulations."

HCL, on the other hand, said: "HCL has a strong and long-standing business relationship with The Boeing Company, and we take pride in the work we do for all our customers. However, HCL does not comment on specific work we do for our customers. HCL is not associated with any ongoing issues with 737 Max."

Recent simulator tests run by the FAA indicate that software issues on the 737 MAX run deeper than first thought. Engineers who worked on the plane, which Boeing started developing eight years ago, complained of pressure from managers to limit changes that might introduce extra time or cost.

Rick Ludtke, a former Boeing flight controls engineer laid off in 2017, said: "Boeing was doing all kinds of things, everything you can imagine, to reduce cost , including moving work from Puget Sound, because we'd become very expensive here. All that's very understandable if you think of it from a business perspective. Slowly over time it appears that's eroded the ability for Puget Sound designers to design."

Rabin even recalled an incident where senior software engineers were told they weren't needed because Boeing's productions were mature. Rabin said: "I was shocked that in a room full of a couple hundred mostly senior engineers we were being told that we weren't needed."

Any given jetliner is made up of millions of parts and millions of lines of code. Boeing has often turned over large portions of the work to suppliers and subcontractors that follow its blueprints. But beginning in 2004 with the 787 Dreamliner, Boeing sought to increase profits by providing high-level specs and then asking suppliers to design more parts themselves.

Boeing also promised to invest $1.7 billion in Indian companies as a result of an $11 billion order in 2005 from Air India. This investment helped HCL and other software developers.

For the 787, HCL offered a price to Boeing that they couldn't refuse, either: free. HCL "took no up-front payments on the 787 and only started collecting payments based on sales years later".

Rockwell Collins won the MAX contract for cockpit displays and relied in part on HCL engineers and contract engineers from Cyient to test flight test equipment.

Charles LoveJoy, a former flight-test instrumentation design engineer at the company, said: "We did have our challenges with the India team. They met the requirements, per se, but you could do it better."


Anonymous IX , 2 minutes ago link

I love it. A company which fell in love so much with their extraordinary profits that they sabotaged their design and will now suffer enormous financial consequences. They're lucky to have all their defense/military contracts.

scraping_by , 4 minutes ago link

Oftentimes, it's the cut-and-paste code that's the problem. If you don't have a good appreciation for what every line does, you're never going to know what the sub or entire program does.

vienna_proxy , 7 minutes ago link

hahahaha non-technical managers making design decisions are complete **** ups wherever they go and here it blew up in their faces rofl

Ignorance is bliss , 2 minutes ago link

I see this all the time, and a lot of the time these non-technical decision makers are women.

hispanicLoser , 13 minutes ago link

By 2002 I could not sit down with any developers without hearing at least one story about how they had been in a code review meeting and seen absolute garbage turned out by H-1B workers.

Lots of people have known about this problem for many years now.

brazilian , 11 minutes ago link

May the gods damn all financial managers! One of the two professions, along with bankers, which have absolutely no social value whatsoever. There should be open hunting season on both!

scraping_by , 15 minutes ago link

Shifting to high-level specs puts more power in the hands of management/accounting types, since it doesn't require engineering knowledge to track a deadline. Indeed, this whole story is the wet dream of business school, the idea of being able to accomplish technical tasks purely by demand. A lot of public schools teach kids that science is magic, so when they grow up, they think they can just give directions and technology appears.

pops , 20 minutes ago link

In this country, one must have a license from the FAA to work on commercial aircraft. That means training and certification that usually results in higher pay for those qualified to perform the repairs to the aircraft your family will fly on.

In case you're not aware, much of the heavy stuff like D checks (overhaul) have been outsourced by the airlines to foreign countries where the FAA has nothing to say about it. Those contractors can hire whoever they wish for whatever they'll accept. I have worked with some of those "mechanics" who cannot even read.

Keep that in mind next time the TSA perv is fondling your junk. That might be your last sexual encounter.

Klassenfeind , 22 minutes ago link

Boeing Outsourced Its 737 MAX Software To $9-Per-Hour Engineers

Long live the free market, right Tylers?

You ZH guys always rally against minimum wage here, well there you go: $9/hr aircraft 'engineers!' Happy now?

asteroids , 25 minutes ago link

You gotta be kidding. You let kids straight out of school write mission critical code? How ******* stupid are you BA?

reader2010 , 20 minutes ago link

Go to India. There are many outsourcing companies that only hire new college graduates for work and they are paid less than $2 an hour for the job.

For the DoD contractors, they have to bring them to the US to work. There are tons of H1B guys from India working for defense contractors.

[Jun 29, 2019] If you have to be told that H-1B code in critical aircraft software might not be reliable, you are too stupid to live

Jun 29, 2019 | www.zerohedge.com

hispanicLoser , 25 minutes ago link

If you have to be told that H-1B code in aircraft software is not reliable, you are too stupid to live.

zob2020 , 16 minutes ago link

Or take this online shop designed back in 1997. It was supposed to take over all internet shopping, which didn't really exist back then yet. And they used Indian doctors to code. Well, sure, they ended up with a site... but one so heavy with pictures it took 30 minutes to open one page, and another 20 minutes to even click on a product to read its text, etc. This with a good university internet connection.

Unsurprisingly, I don't think they ever managed to sell anything. But they gave out free movie tickets to every registered customer... so a friend and I each registered some 80 accounts and went to free movies for a good bit over a year.

The mailman must have had fun delivering 160 letters to random names in the same student apartment :D

[May 17, 2019] Shareholder Capitalism, the Military, and the Beginning of the End for Boeing

Highly recommended!
Notable quotes:
"... Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve). ..."
"... The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally. ..."
"... "Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. ..."
"... If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can. ..."
"... It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism ..."
"... When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. ..."
May 17, 2019 | www.nakedcapitalism.com

The fall of the Berlin Wall and the corresponding end of the Soviet Empire gave the fullest impetus imaginable to the forces of globalized capitalism, and correspondingly unfettered access to the world's cheapest labor. What was not to like about that? It afforded multinational corporations vastly expanded opportunities to fatten their profit margins and increase the bottom line with seemingly no risk posed to their business model.

Or so it appeared. In 2000, aerospace engineer L.J. Hart-Smith's remarkable paper, sardonically titled "Out-Sourced Profits – The Cornerstone of Successful Subcontracting," laid out the case against several business practices of Hart-Smith's previous employer, McDonnell Douglas, which had incautiously ridden the wave of outsourcing when it merged with the author's new employer, Boeing. Hart-Smith's intention in telling his story was a cautionary one for the newly combined Boeing, lest it follow its then recent acquisition down the same disastrous path.

Of the manifold points and issues identified by Hart-Smith, there is one that stands out as the most compelling in terms of understanding the current crisis enveloping Boeing: The embrace of the metric "Return on Net Assets" (RONA). When combined with the relentless pursuit of cost reduction (via offshoring), RONA taken to the extreme can undermine overall safety standards.

Related to this problem is the intentional and unnecessary use of complexity as an instrument of propaganda. Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve).

All of these pernicious concepts are branches of the same poisoned tree: " shareholder capitalism ":

[A] notion best epitomized by Milton Friedman that the only social responsibility of a corporation is to increase its profits, laying the groundwork for the idea that shareholders, being the owners and the main risk-bearing participants, ought therefore to receive the biggest rewards. Profits therefore should be generated first and foremost with a view toward maximizing the interests of shareholders, not the executives or managers who (according to the theory) were spending too much of their time, and the shareholders' money, worrying about employees, customers, and the community at large. The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally.

"Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. In essence, it means maximizing the returns of those dollars deployed in the operation of the business. Applied to a corporation, it comes down to this: If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can.

It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism.

Engineering reality, however, is far more complicated than what is outlined in university MBA textbooks. For corporations like McDonnell Douglas, for example, RONA was used not as a way to prioritize new investment in the corporation but rather to justify disinvestment in the corporation. This disinvestment ultimately degraded the company's underlying profitability and the quality of its planes (which is one of the reasons the Pentagon helped to broker the merger with Boeing; in another perverse echo of the 2008 financial disaster, it was a politically engineered bailout).

RONA in Practice

When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. Productivity is diminished, even as labor-saving technologies are introduced. Precision machinery is sold off and replaced by inferior, but cheaper, machines. Engineering quality deteriorates. And the upshot is that a reliable plane like Boeing's 737, which had been a tried and true money-spinner with an impressive safety record since 1967, becomes a high-tech death trap.

The drive toward efficiency is translated into a drive to do more with less. Get more out of workers while paying them less. Make more parts with fewer machines. Outsourcing is viewed as a way to release capital by transferring investment from skilled domestic human capital to offshore entities not imbued with the same talents, corporate culture and dedication to quality. The benefits to the bottom line are temporary; the long-term pathologies become embedded as the company's market share begins to shrink, as the airlines search for less shoddy alternatives.

You must do one more thing if you are a Boeing director: you must erect barriers to bad news, because there is nothing that bursts a magic bubble faster than reality, particularly if it's bad reality.

The illusion that Boeing sought to perpetuate was that it continued to produce the same thing it had produced for decades: namely, a safe, reliable, quality airplane. But it was doing so with a production apparatus that was stripped, for cost reasons, of many of the means necessary to make good aircraft. So while the wine still came in a bottle signifying Premier Cru quality, and still carried the same price, someone had poured out the contents and replaced them with cheap plonk.

And that has become remarkably easy to do in aviation. Because Boeing is no longer subject to proper independent regulatory scrutiny. This is what happens when you're allowed to " self-certify" your own airplane , as the Washington Post described: "One Boeing engineer would conduct a test of a particular system on the Max 8, while another Boeing engineer would act as the FAA's representative, signing on behalf of the U.S. government that the technology complied with federal safety regulations."

This is a recipe for disaster. Boeing relentlessly cut costs, it outsourced across the globe to workforces that knew nothing about aviation or aviation's safety culture. It sent things everywhere on one criteria and one criteria only: lower the denominator. Make it the same, but cheaper. And then self-certify the plane, so that nobody, including the FAA, was ever the wiser.

Boeing also greased the wheels in Washington to ensure the continuation of this convenient state of regulatory affairs for the company. According to OpenSecrets.org , Boeing and its affiliates spent $15,120,000 in lobbying expenses in 2018, after spending, $16,740,000 in 2017 (along with a further $4,551,078 in 2018 political contributions, which placed the company 82nd out of a total of 19,087 contributors). Looking back at these figures over the past four elections (congressional and presidential) since 2012, these numbers represent fairly typical spending sums for the company.

But clever financial engineering, extensive political lobbying and self-certification can't perpetually hold back the effects of shoddy engineering. One of the sad byproducts of the FAA's acquiescence to "self-certification" is how many things fall through the cracks so easily.

[Feb 11, 2019] 6 most prevalent problems in the software development world

Dec 01, 2018 | www.catswhocode.com


[Dec 27, 2018] The Yoda of Silicon Valley by Siobhan Roberts

Highly recommended!
Although he is certainly a giant, Knuth will never be able to complete this monograph: the technology developed too quickly. The first three volumes came out between 1968 and 1973, and then there was a lull. On January 10 he will be 81. At this age it is difficult to work in the field of mathematics and systems programming, so we will probably never see the complete fourth volume.
This inability to finish the work to which he devoted a large part of his life is definitely a tragedy. The key problem is that it is now simply impossible for one person to cover the whole area of systems programming and related algorithms. But the first three volumes certainly played a tremendously positive role.
He was also distracted for several years by the creation of TeX. He would have needed to create a non-profit and complete that work by attracting the best minds from outside. But he is by nature a loner, as many great scientists are, and prefers to work that way.
His other mistake was that MIX, his hypothetical machine, was too far from the IBM S/360, which became the de-facto standard in the mid-60s. He later realized that this was a blunder and replaced MIX with the more modern MMIX, but it was "too little, too late," and it took time and effort. So the first three volumes and fragments of the fourth are all that we have now, and probably forever.
Not all volumes fared equally well with time. The third volume suffered most, IMHO, and as of 2019 is partially obsolete. It was also written in some haste, and some parts of it are far from clearly written (it was based on earlier lectures by Floyd, so it was oriented toward single-CPU computers only). Now that multiprocessor machines, huge amounts of RAM, and SSDs are the norm, the situation is very different from the late 60s, and different sorting algorithms are called for (the importance of mergesort has increased, that of quicksort decreased). He also got too carried away with sorting random numbers and establishing upper bounds and average run times. Real data is almost never random and typically contains sorted fragments. For example, he overestimated the importance of quicksort and thus pushed the discipline in the wrong direction.
Notable quotes:
"... These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update. ..."
"... AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. ..."
"... One good teacher makes all the difference in life. More than one is a rare blessing. ..."
Dec 17, 2018 | www.nytimes.com

With more than one million copies in print, "The Art of Computer Programming " is the Bible of its field. "Like an actual bible, it is long and comprehensive; no other book is as comprehensive," said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: "You should definitely send me a résumé if you can read the whole thing."

The volume opens with an excerpt from " McCall's Cookbook ":

Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.

Inside are algorithms, the recipes that feed the digital age -- although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field's most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text -- for instance, when you hit Command+F to search for a keyword in a document.

... ... ...

During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, "optimization" is truly an art, and this is articulated in another Knuthian proverb: "Premature optimization is the root of all evil."

Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the "analysis of algorithms." A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers -- a book about algorithms.

... ... ...

When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installation, "Volume 4, Fascicle 5," covering, among other things, "backtracking" and "dancing links," was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present.

In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor's defining characteristic even in the early 1980s.

Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth's greatest contribution to the world, and the greatest contribution to typography since Gutenberg.

This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, "When I told my girlfriend that we can't do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, 'This is something that is so stupid it must be true.'"

... ... ...

Lucky, then, that Dr. Knuth keeps at it. He figures it will take another 25 years to finish "The Art of Computer Programming," although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? "Definitely not," said Dr. Knuth.

"I am worried that algorithms are getting too prominent in the world," he added. "It started out that computer scientists were worried nobody was listening to us. Now I'm worried that too many people are listening."


Scott Kim Burlingame, CA Dec. 18

Thanks Siobhan for your vivid portrait of my friend and mentor. When I came to Stanford as an undergrad in 1973 I asked who in the math dept was interested in puzzles. They pointed me to the computer science dept, where I met Knuth and we hit it off immediately. Not only a great thinker and writer, but as you so well described, always present and warm in person. He was also one of the best teachers I've ever had -- clear, funny, and interested in every student (his elegant policy was each student can only speak twice in class during a period, to give everyone a chance to participate, and he made a point of remembering everyone's names). Some thoughts from Knuth I carry with me: finding the right name for a project is half the work (not literally true, but he labored hard on finding the right names for TeX, Metafont, etc.), always do your best work, half of why the field of computer science exists is because it is a way for mathematically minded people who like to build things can meet each other, and the observation that when the computer science dept began at Stanford one of the standard interview questions was "what instrument do you play" -- there was a deep connection between music and computer science, and indeed the dept had multiple string quartets. But in recent decades that has changed entirely. If you do a book on Knuth (he deserves it), please be in touch.

IMiss America US Dec. 18

I remember when programming was art. I remember when programming was programming. These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update.

AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. We should be in a golden age of computing. Instead, we are cutting all corners to get something out as fast as possible. The technology exists to do far more. It is the human element that fails us.

Ronald Aaronson Armonk, NY Dec. 18

My particular field of interest has always been compiler writing and I have long been awaiting Knuth's volume on that subject. I would just like to point out that among Knuth's many accomplishments is the invention of LR parsers, which are widely used for writing programming language compilers.

Edward Snowden Russia Dec. 18

Yes, \TeX, and its derivative, \LaTeX{} contributed greatly to being able to create elegant documents. It is also available for the web in the form MathJax, and it's about time the New York Times supported MathJax. Many times I want one of my New York Times comments to include math, but there's no way to do so! It comes up equivalent to: $e^{i\pi}+1$.

henry pick new york Dec. 18

I read it at the time, because what I really wanted to read was volume 7, Compilers. As I understood it at the time, Professor Knuth wrote it in order to make enough money to build an organ. That apparently happened by 3:Knuth, Searching and Sorting. The most impressive part is the mathematics in Semi-numerical (2:Knuth). A lot of those problems are research projects over the literature of the last 400 years of mathematics.

Steve Singer Chicago Dec. 18

I own the three volume "Art of Computer Programming", the hardbound boxed set. Luxurious. I don't look at it very often thanks to time constraints, given my workload. But your article motivated me to at least pick it up and carry it from my reserve library to a spot closer to my main desk so I can at least grab Volume 1 and try to read some of it when the mood strikes. I had forgotten just how heavy it is, intellectual content aside. It must weigh more than 25 pounds.

Terry Hayes Los Altos, CA Dec. 18

I too used my copies of The Art of Computer Programming to guide me in several projects in my career, across a variety of topic areas. Now that I'm living in Silicon Valley, I enjoy seeing Knuth at events at the Computer History Museum (where he was a 1998 Fellow Award winner), and at Stanford. Another facet of his teaching is the annual Christmas Lecture, in which he presents something of recent (or not-so-recent) interest. The 2018 lecture is available online - https://www.youtube.com/watch?v=_cR9zDlvP88

Chris Tong Kelseyville, California Dec. 17

One of the most special treats for first year Ph.D. students in the Stanford University Computer Science Department was to take the Computer Problem-Solving class with Don Knuth. It was small and intimate, and we sat around a table for our meetings. Knuth started the semester by giving us an extremely challenging, previously unsolved problem. We then formed teams of 2 or 3. Each week, each team would report progress (or lack thereof), and Knuth, in the most supportive way, would assess our problem-solving approach and make suggestions for how to improve it. To have a master thinker giving one feedback on how to think better was a rare and extraordinary experience, from which I am still benefiting! Knuth ended the semester (after we had all solved the problem) by having us over to his house for food, drink, and tales from his life. . . And for those like me with a musical interest, he let us play the magnificent pipe organ that was at the center of his music room. Thank you Professor Knuth, for giving me one of the most profound educational experiences I've ever had, with such encouragement and humor!

Been there Boulder, Colorado Dec. 17

I learned about Dr. Knuth as a graduate student in the early 70s from one of my professors and made the financial sacrifice (graduate student assistantships were not lucrative) to buy the first and then the second volume of the Art of Computer Programming. Later, at Bell Labs, when I was a bit richer, I bought the third volume. I have those books still and have used them for reference for years. Thank you Dr, Knuth. Art, indeed!

Gianni New York Dec. 18

@Trerra In the good old days, before Computer Science, anyone could take the Programming Aptitude Test. Pass it and companies would train you. Although there were many mathematicians and scientists, some of the best programmers turned out to be music majors. English, Social Sciences, and History majors were represented as well as scientists and mathematicians. It was a wonderful atmosphere to work in . When I started to look for a job as a programmer, I took Prudential Life Insurance's version of the Aptitude Test. After the test, the interviewer was all bent out of shape because my verbal score was higher than my math score; I was a physics major. Luckily they didn't hire me and I got a job with IBM.

M Martínez Miami Dec. 17

In summary, "May the force be with you" means: Did you read Donald Knuth's "The Art of Computer Programming"? Excellent, we loved this article. We will share it with many young developers we know.

mds USA Dec. 17

Dr. Knuth is a great Computer Scientist. Around 25 years ago, I met Dr. Knuth in a small gathering a day before he was awarded a honorary Doctorate in a university. This is my approximate recollection of a conversation. I said-- " Dr. Knuth, you have dedicated your book to a computer (one with which he had spent a lot of time, perhaps a predecessor to PDP-11). Isn't it unusual?". He said-- "Well, I love my wife as much as anyone." He then turned to his wife and said --"Don't you think so?". It would be nice if scientists with the gift of such great minds tried to address some problems of ordinary people, e.g. a model of economy where everyone can get a job and health insurance, say, like Dr. Paul Krugman.

Nadine NYC Dec. 17

I was in a training program for women in computer systems at CUNY graduate center, and they used his obtuse book. It was one of the reasons I dropped out. He used a fantasy language to describe his algorithms in his book that one could not test on computers. I already had work experience as a programmer with algorithms and I know how valuable real languages are. I might as well have read Animal Farm. It might have been different if he was the instructor.

Doug McKenna Boulder Colorado Dec. 17

Don Knuth's work has been a curious thread weaving in and out of my life. I was first introduced to Knuth and his The Art of Computer Programming back in 1973, when I was tasked with understanding a section of the then-only-two-volume Book well enough to give a lecture explaining it to my college algorithms class. But when I first met him in 1981 at Stanford, he was all-in on thinking about typography and this new-fangled system of his called TeX. Skip a quarter century. One day in 2009, I foolishly decided kind of on a whim to rewrite TeX from scratch (in my copious spare time), as a simple C library, so that its typesetting algorithms could be put to use in other software such as electronic eBook's with high-quality math typesetting and interactive pictures. I asked Knuth for advice. He warned me, prepare yourself, it's going to consume five years of your life. I didn't believe him, so I set off and tried anyway. As usual, he was right.

Baddy Khan San Francisco Dec. 17

I have signed copies of "Fundamental Algorithms" in my library, which I treasure. Knuth was a fine teacher, and is truly a brilliant and inspiring individual. He taught during the same period as Vint Cerf, another wonderful teacher with a great sense of humor who is truly a "father of the internet". One good teacher makes all the difference in life. More than one is a rare blessing.

Indisk Fringe Dec. 17

I am a biologist, specifically a geneticist. I became interested in LaTeX typesetting early in my career and have been either called pompous or vilified by people at all levels for wanting to use it. One of my PhD advisors famously told me to forget LaTeX because it was a thing of the past. I have now forgotten him completely. I still use LaTeX almost every day in my work even though I don't generally typeset with equations or algorithms. My students always get trained in using proper typesetting. Unfortunately, the publishing industry has largely given up on TeX. Very few journals in my field accept TeX manuscripts, and most of them convert to Word before feeding text to their publishing software. Whatever people might argue against TeX, the beauty and elegance of a properly typeset document is unparalleled. Long live LaTeX.

PaulSFO San Francisco Dec. 17

A few years ago Severo Ornstein (who, incidentally, did the hardware design for the first router, in 1969), and his wife Laura, hosted a concert in their home in the hills above Palo Alto. During a break a friend and I were chatting when a man came over and *asked* if he could chat with us (a high honor, indeed). His name was Don. After a few minutes I grew suspicious and asked "What's your last name?" Friendly, modest, brilliant; a nice addition to our little chat.

Tim Black Wilmington, NC Dec. 17

When I was a physics undergraduate (at Trinity in Hartford), I was hired to re-write professor's papers into TeX. Seeing the beauty of TeX, I wrote a program that re-wrote my lab reports (including graphs!) into TeX. My lab instructors were amazed! How did I do it? I never told them. But I just recognized that Knuth was a genius and rode his coat-tails, as I have continued to do for the last 30 years!

Jack512 Alexandria VA Dec. 17

A famous quote from Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it." Anyone who has ever programmed a computer will feel the truth of this in their bones.

[Dec 11, 2018] Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up.

Dec 11, 2018 | www.ianwelsh.net

S Brennan permalink April 24, 2016

My grandfather, in the early 60's could board a 707 in New York and arrive in LA in far less time than I can today. And no, I am not counting 4 hour layovers with the long waits to be "screened", the jets were 50-70 knots faster, back then your time was worth more, today less.

Not counting longer hours AT WORK, we spend far more time commuting making for much longer work days, back then your time was worth more, today less!

Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up. Think about the almost perfect Google Maps driver interface being redesigned by people who take private buses to work. Way back in the '90's your time was worth more than today!

Life is all the "time" YOU will ever have and if we let the elite do so, they will suck every bit of it out of you.

[Nov 05, 2018] Revisiting the Unix philosophy in 2018 Opensource.com by Michael Hausenblas

Nov 05, 2018 | opensource.com

The old strategy of building small, focused applications is new again in the modern microservices environment.

In 1984, Rob Pike and Brian Kernighan published "Program Design in the Unix Environment" in the AT&T Bell Laboratories Technical Journal, in which they argued for the Unix philosophy, using the example of BSD's cat -v implementation. In a nutshell that philosophy is: Build small, focused programs -- in whatever language -- that do only one thing but do this thing well, communicate via stdin / stdout , and are connected through pipes.

Sound familiar?

Yeah, I thought so. That's pretty much the definition of microservices offered by James Lewis and Martin Fowler:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

While one *nix program or one microservice may be very limited or not even very interesting on its own, it's the combination of such independently working units that reveals their true benefit and, therefore, their power.

*nix vs. microservices

The following table compares programs (such as cat or lsof ) in a *nix environment against programs in a microservices environment.

                                   *nix                                       Microservices
Unit of execution                  program using stdin/stdout                 service with HTTP or gRPC API
Data flow                          pipes                                      ?
Configuration & parameterization   command-line arguments, environment        JSON/YAML docs
                                   variables, config files
Discovery                          package manager, man, make                 DNS, environment variables, OpenAPI

Let's explore each line in slightly greater detail.

Unit of execution


The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from stdin and writes output to stdout . A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.

Data flow

Traditionally, *nix programs communicate via pipes. In other words, thanks to Doug McIlroy, you don't need to create temporary files to pass data around, and each stage can process virtually endless streams of data. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little Apache Kafka-based experiment from 2017.
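As a minimal illustration of that composition model (the input file name access.log is an assumption; the commands themselves are standard coreutils), a handful of small, single-purpose programs can be chained into a word-frequency counter without a single temporary file:

# Ten most frequent words in a file, built entirely from small tools and pipes.
tr -cs '[:alpha:]' '\n' < access.log \
  | tr '[:upper:]' '[:lower:]' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -n 10

Each stage neither knows nor cares what produced its input or what will consume its output, which is exactly the property the table above marks as having no standardized counterpart on the microservices side.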

Configuration and parameterization

How do you configure a program or service -- either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions , Nomad job specifications , or Docker Compose files. These may or may not be parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed -i commands.
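As a rough sketch of those three *nix options side by side (the tool name mytool, its flag, and the config file path are hypothetical placeholders, not a real utility):

# 1. Command-line argument
mytool --listen-port 8080

# 2. Environment variable
LISTEN_PORT=8080 mytool

# 3. Config file
echo "listen_port = 8080" > /etc/mytool.conf
mytool --config /etc/mytool.conf

A microservice would typically receive the same value as a field in a YAML or JSON document handed to the orchestrator or service runtime rather than to the program directly.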

Discovery

How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there's a bit more automation in finding a service. In addition to bespoke approaches like Airbnb's SmartStack or Netflix's Eureka , there usually are environment variable-based or DNS-based approaches that allow you to discover services dynamically. Equally important, OpenAPI provides a de-facto standard for HTTP API documentation and design, and gRPC does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good Makefiles and ending with writing your docs with (or in?) style .
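A hedged sketch of that contrast (the package name, the service name payments, and the Kubernetes-style DNS name and environment variables are assumptions for illustration):

# *nix: what is available, and how do I use it?
apropos sort            # search man page summaries
man sort                # read the reference
dpkg -L coreutils       # list what a package installed (Debian-family systems)

# Microservices: where is the service I depend on?
getent hosts payments.default.svc.cluster.local        # DNS-based discovery
echo "$PAYMENTS_SERVICE_HOST:$PAYMENTS_SERVICE_PORT"   # environment-variable-based discovery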

Pros and cons

Both *nix and microservices offer a number of challenges and opportunities.

Composability

It's hard to design something that has a clear, sharp focus and can also play well with others. It's even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts -- maybe it's a better option to outsource these features into a service mesh? It's hard, but if you get it right, its reusability can be enormous.

Observability

In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather straightforward to find the culprit when things go south. But, in a

yes | tr \\n x | head -c 450m | grep n

or a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily we have standards, notably OpenCensus and OpenTracing . Observability still might be the biggest single blocker if you are looking to move to microservices.

Global state

While it may not be such a big issue for *nix programs, in microservices, global state remains something of a discussion. Namely, how to make sure the local (persistent) state is managed effectively and how to make the global state consistent with as little effort as possible.

Wrapping up

In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, it might be that a monolith is the best option for your organization or workload. Regardless, I hope this article helps you see the many, strong parallels between the Unix philosophy and microservices -- maybe we can learn something from the former to benefit the latter.

Michael Hausenblas is a Developer Advocate for Kubernetes and OpenShift at Red Hat where he helps appops to build and operate apps. His background is in large-scale data processing and container orchestration and he's experienced in advocacy and standardization at W3C and IETF. Before Red Hat, Michael worked at Mesosphere, MapR and in two research institutions in Ireland and Austria. He contributes to open source software incl. Kubernetes, speaks at conferences and user groups, and shares good practices...

[Nov 05, 2018] The Linux Philosophy for SysAdmins And Everyone Who Wants To Be One eBook by David Both

Nov 05, 2018 | www.amazon.com

Elegance is one of those things that can be difficult to define. I know it when I see it, but putting what I see into a terse definition is a challenge. Using the Linux dict command, WordNet provides one definition of elegance as, "a quality of neatness and ingenious simplicity in the solution of a problem (especially in science or mathematics); 'the simplicity and elegance of his invention.'"

In the context of this book, I think that elegance is a state of beauty and simplicity in the design and working of both hardware and software. When a design is elegant, software and hardware work better and are more efficient. The user is aided by simple, efficient, and understandable tools.

Creating elegance in a technological environment is hard. It is also necessary. Elegant solutions produce elegant results and are easy to maintain and fix. Elegance does not happen by accident; you must work for it.

The quality of simplicity is a large part of technical elegance. So large, in fact that it deserves a chapter of its own, Chapter 18, "Find the Simplicity," but we do not ignore it here. This chapter discusses what it means for hardware and software to be elegant.

Hardware Elegance

Yes, hardware can be elegant -- even beautiful, pleasing to the eye. Hardware that is well designed is more reliable as well. Elegant hardware solutions improve reliability.

[Oct 27, 2018] One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip.

Oct 27, 2018 | www.moonofalabama.org

Piotr Berman , Oct 26, 2018 2:55:29 PM | 5 ">link

"Even Microsoft, the biggest software company in the world, recently screwed up..."

Isn't it rather logical that the larger a company is, the more screw-ups it can make? After all, Microsoft has armies of programmers to make those bugs.

Once I created a joke that the best way to disable missile defense would be to have a rocket that can stop in mid-air, thus provoking the software to divide by zero and crash. One day I told that joke to a military officer who told me that something like that actually happened, but it was in the Navy and it involved a test with a torpedo. Not only did the program for "torpedo defense" go down, but the system crashed too, and the ship's engine stopped working as well. I also recall explanations that a new complex software system typically has all major bugs removed after being used for a year. And the occasion was the Internal Revenue Service changing hardware and software, leading to widely reported problems.

One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip. Of course, they do not make money on bugs per se, but on new features that in time make it impossible to use older versions of the software and hardware.

[Sep 21, 2018] 'It Just Seems That Nobody is Interested in Building Quality, Fast, Efficient, Lasting, Foundational Stuff Anymore'

Sep 21, 2018 | tech.slashdot.org

Nikita Prokopov, a software programmer and author of Fira Code, a popular programming font, AnyBar, a universal status indicator, and some open-source Clojure libraries, writes :

Remember times when an OS, apps and all your data fit on a floppy? Your desktop todo app is probably written in Electron and thus has userland driver for Xbox 360 controller in it, can render 3d graphics and play audio and take photos with your web camera. A simple text chat is notorious for its load speed and memory consumption. Yes, you really have to count Slack in as a resource-heavy application. I mean, chatroom and barebones text editor, those are supposed to be two of the less demanding apps in the whole world. Welcome to 2018.

At least it works, you might say. Well, bigger doesn't imply better. Bigger means someone has lost control. Bigger means we don't know what's going on. Bigger means complexity tax, performance tax, reliability tax. This is not the norm and should not become the norm . Overweight apps should mean a red flag. They should mean run away scared. 16Gb Android phone was perfectly fine 3 years ago. Today with Android 8.1 it's barely usable because each app has become at least twice as big for no apparent reason. There are no additional functions. They are not faster or more optimized. They don't look different. They just...grow?

iPhone 4s was released with iOS 5, but can barely run iOS 9. And it's not because iOS 9 is that much superior -- it's basically the same. But their new hardware is faster, so they made software slower. Don't worry -- you got exciting new capabilities like...running the same apps with the same speed! I dunno. [...] Nobody understands anything at this point. Neither they want to. We just throw barely baked shit out there, hope for the best and call it "startup wisdom." Web pages ask you to refresh if anything goes wrong. Who has time to figure out what happened? Any web app produces a constant stream of "random" JS errors in the wild, even on compatible browsers.

[...] It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs. Build systems are inherently unreliable and periodically require full clean, even though all info for invalidation is there. Nothing stops us from making build process reliable, predictable and 100% reproducible. Just nobody thinks its important. NPM has stayed in "sometimes works" state for years.


K. S. Kyosuke ( 729550 ) , Friday September 21, 2018 @11:32AM ( #57354556 )

Re:Why should they? ( Score: 4 , Insightful)

Less resource use to accomplish the required tasks? Both in manufacturing (more chips from the same amount of manufacturing input) and in operation (less power used)?

K. S. Kyosuke ( 729550 ) writes: on Friday September 21, 2018 @11:58AM ( #57354754 )
Re:Why should they? ( Score: 2 )

Ehm...so for example using smaller cars with better mileage to commute isn't more environmentally friendly either, according to you?

DontBeAMoran ( 4843879 ) writes: on Friday September 21, 2018 @12:04PM ( #57354826 )
Re:Why should they? ( Score: 2 )

iPhone 4S used to be the best and could run all the applications.

Today, the same power is not sufficient because of software bloat. So you could say that all the iPhones since the iPhone 4S are devices that were created and then dumped for no reason.

It doesn't matter since we can't change the past and it doesn't matter much since improvements are slowing down so people are changing their phones less often.

Mark of the North ( 19760 ) , Friday September 21, 2018 @01:02PM ( #57355296 )
Re:Why should they? ( Score: 5 , Interesting)

Can you really not see the connection between inefficient software and environmental harm? All those computers running code that uses four times as much data, and four times the number crunching, as is reasonable? That excess RAM and storage has to be built as well as powered along with the CPU. Those material and electrical resources have to come from somewhere.

But the calculus changes completely when the software manufacturer hosts the software (or pays for the hosting) for their customers. Our projected AWS bill motivated our management to let me write the sort of efficient code I've been trained to write. After two years of maintaining some pretty horrible legacy code, it is a welcome change.

The big players care a great deal about efficiency when they can't outsource inefficiency to the user's computing resources.

eth1 ( 94901 ) , Friday September 21, 2018 @11:45AM ( #57354656 )
Re:Why should they? ( Score: 5 , Informative)
We've been trained to be a consuming society of disposable goods. The latest and greatest feature will always be more important than something that is reliable and durable for the long haul.

It's not just consumer stuff.

The network team I'm a part of has been dealing with more and more frequent outages, 90% of which are due to bugs in software running our devices. These aren't fly-by-night vendors either, they're the "no one ever got fired for buying X" ones like Cisco, F5, Palo Alto, EMC, etc.

10 years ago, outages were 10% bugs, and 90% human error, now it seems to be the other way around. Everyone's chasing features, because that's what sells, so there's no time for efficiency/stability/security any more.

LucasBC ( 1138637 ) , Friday September 21, 2018 @12:05PM ( #57354836 )
Re:Why should they? ( Score: 3 , Interesting)

Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software. This, in turn, leads to people having to replace computers that are otherwise working well, solely for the reason to keep up with software that requires more and more system resources for no tangible benefit. In a nutshell -- sloppy, lazy programming leads to more technology waste. That impacts the environment. I have a unique perspective in this topic. I do web development for a company that does electronics recycling. I have suffered the continued bloat in software in the tools I use (most egregiously, Adobe), and I see the impact of technological waste in the increasing amount of electronics recycling that is occurring. Ironically, I'm working at home today because my computer at the office kept stalling every time I had Photoshop and Illustrator open at the same time. A few years ago that wasn't a problem.

arglebargle_xiv ( 2212710 ) writes:
Re: ( Score: 3 )

There is one place where people still produce stuff like the OP wants, and that's embedded. Not IoT wank, but real embedded, running on CPUs clocked at tens of MHz with RAM in two-digit kilobyte (not megabyte or gigabyte) quantities. And a lot of that stuff is written to very exacting standards, particularly where something like realtime control and/or safety is involved.

The one problem in this area is the endless battle with standards morons who begin each standard with an implicit "assume an infinitely

commodore64_love ( 1445365 ) , Friday September 21, 2018 @03:58PM ( #57356680 ) Journal
Re:Why should they? ( Score: 3 )

> Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software.

Not just computers.

You can add Smart TVs, settop internet boxes, Kindles, tablets, et cetera that must be thrown-away when they become too old (say 5 years) to run the latest bloatware. Software non-engineering is causing a lot of working hardware to be landfilled, and for no good reason.

[Sep 21, 2018] Fast, cheap (efficient) and reliable (robust, long lasting): pick 2

Sep 21, 2018 | tech.slashdot.org

JoeDuncan ( 874519 ) , Friday September 21, 2018 @12:58PM ( #57355276 )

Obligatory ( Score: 2 )

Fast, cheap (efficient) and reliable (robust, long lasting): pick 2.

roc97007 ( 608802 ) , Friday September 21, 2018 @12:16PM ( #57354946 ) Journal
Re:Bloat = growth ( Score: 2 )

There's probably some truth to that. And it's a sad commentary on the industry.

[Sep 21, 2018] Since Moore's law appears to have stalled since at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.

Sep 21, 2018 | tech.slashdot.org

Anonymous Coward , Friday September 21, 2018 @11:26AM ( #57354512 )

Moore's law ( Score: 5 , Interesting)

When the speed of your processor doubles every two year along with a concurrent doubling of RAM and disk space, then you can get away with bloatware.

Since Moore's law appears to have stalled since at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.

[Nov 29, 2017] Take This GUI and Shove It

Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
Notable quotes:
"... Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI. ..."
"... What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers. ..."
"... AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT. ..."
"... Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI. ..."
"... Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead. ..."
"... Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location. ..."
"... I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script. ..."
Slashdot

Deep End's Paul Venezia speaks out against the overemphasis on GUIs in today's admin tools, saying that GUIs are fine and necessary in many cases, but only after a complete CLI is in place, and that they cannot interfere with the use of the CLI, only complement it. Otherwise, the GUI simply makes easy things easy and hard things much harder. He writes, 'If you have to make significant, identical changes to a bunch of Linux servers, is it easier to log into them one-by-one and run through a GUI or text-menu tool, or write a quick shell script that hits each box and either makes the changes or simply pulls down a few new config files and restarts some services? And it's not just about conservation of effort - it's also about accuracy. If you write a script, you're certain that the changes made will be identical on each box. If you're doing them all by hand, you aren't.'"
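A minimal sketch of the scripted approach the article argues for (the host list, config file, and service name are placeholders; real scripts would add error checking and logging):

#!/bin/bash
# Push one config file to every server listed in hosts.txt and restart the affected service.
for host in $(cat hosts.txt); do
    scp ntp.conf "root@$host:/etc/ntp.conf"
    ssh "root@$host" 'service ntpd restart'
done

The value is not the five lines themselves but the fact that they do exactly the same thing on the third box and the three-hundredth, which is the accuracy argument made above.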

alain94040 (785132)

Here is a Link to the print version of the article [infoworld.com] (that conveniently fits on 1 page instead of 3).

Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.

A bad GUI with no CLI is the worst of both worlds, the author of the article got that right. The 80/20 rule applies: 80% of the work is common to everyone, and should be offered with a GUI. And the 20% that is custom to each sysadmin, well use the CLI.

maxwell demon:

What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers.
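A toy sketch of what such a generated script might look like (the dialog, DNS server address, and commands are invented; no real tool is implied):

#!/bin/sh
# Hypothetical output of a network-settings dialog's "export as script" button:
# it records exactly what the GUI is about to do, so the change can be reviewed and replayed.
echo "nameserver 192.0.2.53" > /etc/resolv.conf
service network restart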

0123456 (636235) writes:

What would be nice is if the GUI could automatically create a shell script doing the change.

While it's not quite the same thing, our GUI-based home router has an option to download the config as a text file so you can automatically reconfigure it from that file if it has to be reset to defaults. You could presumably use sed to change IP addresses, etc, and copy it to a different router. Of course it runs Linux.
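A rough sketch of that workflow (the download URL, file names, and address ranges are made up; real routers expose different endpoints):

# Grab the config dump, rewrite the subnet, and keep a copy for the second router.
curl -s -o router1.cfg http://192.168.1.1/backup.cfg
sed 's/192\.168\.1\./192.168.2./g' router1.cfg > router2.cfg
# router2.cfg can now be inspected and uploaded through the second router's restore page.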

Alain Williams:

AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT.

Ephemeriis:

What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers.

Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI.

We've just started working with Aruba hardware. Installed a mobility controller last week. They've got a GUI that does something similar. It's all a pretty web-based front-end, but it again generates CLI commands and a human-readable configuration. I'm still very new to the platform, but I'm already learning about their CLI through the GUI. And getting work done that I wouldn't be able to if I had to look up the CLI commands for everything.

Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead.

Anpheus:

Just about every Microsoft tool newer than 2007 does this. Virtual machine manager, SQL Server has done it for ages, I think almost all the system center tools do, etc.

It's a huge improvement.

PoV:

All good admins document their work (don't they? DON'T THEY?). With a CLI or a script that's easy: it comes down to "log in as user X, change to directory Y, run script Z with arguments A B and C - the output should look like D". Try that when all you have is a GLUI (like a GUI, but you get stuck): open this window, select that option, drag a slider, check these boxes, click Yes, three times. The output might look a little like this blurry screen shot and the only record of a successful execution is a window that disappears as soon as the application ends.

I suppose the Linux community should be grateful that windows made the fundemental systems design error of making everything graphic. Without that basic failure, Linux might never have even got the toe-hold it has now.

skids:

I think this is a stronger point than the OP: GUIs do not lead to good documentation. In fact, GUIs pretty much are limited to procedural documentation like the example you gave.

The best they can do as far as actual documentation, where the precise effect of all the widgets is explained, is a screenshot with little quote bubbles pointing to each doodad. That's a ridiculous way to document.

This is as opposed to a command reference which can organize, usually in a pretty sensible fashion, exact descriptions of what each command does.

Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location.

Not that even good command references are mandatory by today's pathetic standards. Even the big boys like Cisco have shown major degradation in the quality of their documentation during the last decade.

pedantic bore:

I think the author might not fully understand who most admins are. They're people who couldn't write a shell script if their lives depended on it, because they've never had to. GUI-dependent users become GUI-dependent admins.

As a percentage of computer users, people who can actually navigate a CLI are an ever-diminishing group.

arth1: /etc/resolv.conf

/etc/init.d/NetworkManager stop          # stop the NetworkManager daemon
chkconfig NetworkManager off             # keep it from starting at boot
chkconfig network on                     # fall back to the classic network scripts
vi /etc/sysconfig/network                # set hostname and gateway by hand
vi /etc/sysconfig/network-scripts/eth0   # per-interface settings

At least they named it NetworkManager, so experienced admins could recognize it as a culprit. Anything named in CamelCase is almost invariably written by new school programmers who don't grok the Unix toolbox concept and write applications instead of tools, and the bloated drivel is usually best avoided.

Darkness404 (1287218) writes: on Monday October 04, @07:21PM (#33789446)

There are more and more small businesses (5, 10 or so employees) realizing that they can get things done easier if they had a server. Because the business can't really afford to hire a sysadmin or a full-time tech person, it's generally the employee who "knows computers" (you know, the person who has to help the boss check his e-mail every day, etc.), and since they don't have the knowledge of a skilled *Nix admin, a GUI makes their administration a lot easier.

So with the increasing use of servers among non-admins, it only makes sense for a growth in GUI-based solutions.

Svartalf (2997) writes: Ah... But the thing is... You don't NEED the GUI with recent Linux systems- you do with Windows.

oatworm (969674) writes: on Monday October 04, @07:38PM (#33789624) Homepage

Bingo. Realistically, if you're a company with less than a 100 employees (read: most companies), you're only going to have a handful of servers in house and they're each going to be dedicated to particular roles. You're not going to have 100 clustered fileservers - instead, you're going to have one or maybe two. You're not going to have a dozen e-mail servers - instead, you're going to have one or two. Consequently, the office admin's focus isn't going to be scalability; it just won't matter to the admin if they can script, say, creating a mailbox for 100 new users instead of just one. Instead, said office admin is going to be more focused on finding ways to do semi-unusual things (e.g. "create a VPN between this office and our new branch office", "promote this new server as a domain controller", "install SQL", etc.) that they might do, oh, once a year.

The trouble with Linux, and I'm speaking as someone who's used YaST in precisely this context, is that you have to make a choice - do you let the GUI manage it or do you CLI it? If you try to do both, there will be inconsistencies because the grammar of the config files is too ambiguous; consequently, the GUI config file parser will probably just overwrite whatever manual changes it thinks is "invalid", whether it really is or not. If you let the GUI manage it, you better hope the GUI has the flexibility necessary to meet your needs. If, for example, YaST doesn't understand named Apache virtual hosts, well, good luck figuring out where it's hiding all of the various config files that it was sensibly spreading out in multiple locations for you, and don't you dare use YaST to manage Apache again or it'll delete your Apache-legal but YaST-"invalid" directive.

The only solution I really see is for manual config file support with optional XML (or some other machine-friendly but still human-readable format) linkages. For example, if you want to hand-edit your resolv.conf, that's fine, but if the GUI is going to take over, it'll toss a directive on line 1 that says "#import resolv.conf.xml" and immediately overrides (but does not overwrite) everything following that. Then, if you still want to use the GUI but need to hand-edit something, you can edit the XML file using the appropriate syntax and know that your change will be reflected on the GUI.

That's my take. Your mileage, of course, may vary.

icebraining (1313345) writes: on Monday October 04, @07:24PM (#33789494) Homepage

I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script.
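A sketch of what such a two-line script could look like (the URL, form fields, and credentials are placeholders, since every router's web interface differs):

#!/bin/bash
# Toggle a setting by POSTing the same form the browser would submit.
curl -s --fail -u admin:password -d "wan_enable=1&apply=Apply" http://192.168.1.1/apply.cgi   # --fail makes curl exit non-zero on HTTP errors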

devent (1627873) writes:

Why Windows servers have a GUI is beyond me anyway. The servers are running 99.99% of the time without a monitor and normally you just log in per ssh to a console if you need to administer them. But they are consuming the extra RAM, the extra CPU cycles and the extra security threats. I don't know, but can you de-install the GUI from a Windows server? Or better, do you have an option for a no-GUI installation? Just saw the minimum hardware requirements: 512 MB RAM and 32 GB or greater disk space. My server runs

sirsnork (530512) writes: on Monday October 04, @07:43PM (#33789672)

it's called a "core" install in Server 2008 and up, and if you do that, there is no going back, you can't ever add the GUI back.

What this means is you can run a small subset of MS services that don't need GUI interaction. With R2 that subset grew somewhat as they added the ability to install .Net too, which meant you could run IIS in a useful manner (arguably the strongest reason to want to do this in the first place).

Still it's a one way trip and you better be damn sure what services need to run on that box for the lifetime of that box or you're looking at a reinstall. Most windows admins will still tell you the risk isn't worth it.

Simple things like network configuration without a GUI in Windows are tedious, and, at least the last time I looked, you lost the ability to trunk network ports because the NIC manufacturers all assumed you had a GUI to configure your NICs.

prichardson (603676) writes: on Monday October 04, @07:27PM (#33789520) Journal

This is also a problem with Max OS X Server. Apple builds their services from open source products and adds a GUI for configuration to make it all clickable and easy to set up. However, many options that can be set on the command line can't be set in the GUI. Even worse, making CLI changes to services can break the GUI entirely.

The hardware and software are both super stable and run really smoothly, so once everything gets set up, it's awesome. Still, it's hard for a guy who would rather make changes on the CLI to get used to.

MrEricSir (398214) writes:

Just because you're used to a CLI doesn't make it better. Why would I want to read a bunch of documentation, mess with command line options, then read whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages just to do a simple task. In essence, the question here is whether it's okay for the user to be lazy and use a GUI, or whether the programmer should be too lazy to develop a GUI.

ak_hepcat (468765) writes: on Monday October 04, @07:38PM (#33789626) Homepage Journal

Probably because it's also about the ease of troubleshooting issues.

How do you troubleshoot something with a GUI after you've misconfigured? How do you troubleshoot a programming error (bug) in the GUI -> device communication? How do you scale to tens, hundreds, or thousands of devices with a GUI?

CLI makes all this easier and more manageable.

arth1 (260657) writes:

Why would I want to read a bunch of documentation, mess with command line options, then read whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages just to do a simple task. Because then you'll be stuck at doing simple tasks, and will never be able to do more advanced tasks. Without hiring a team to write an app for you instead of doing it yourself in two minutes, that is. The time you spend reading man

fandingo (1541045) writes: on Monday October 04, @07:54PM (#33789778)

I don't think you really understand systems administration. 'Users,' or in this case admins, don't typically do stuff once. Furthermore, they need to know what he did and how to do it again (i.e. new server or whatever) or just remember what he did. One-off stuff isn't common and is a sign of poor administration (i.e. tracking changes and following processes).

What I'm trying to get at is that admins shouldn't do anything without reading the manual. As a Windows/Linux admin, I tend to find Linux easier to properly administer because I either already know how to perform an operation or I have to read the manual (manpage) and learn a decent amount about the operation (i.e. more than click here/use this flag).

Don't get me wrong, GUIs can make unknown operations significantly easier, but they often lead to poor process management. To document processes, screenshots are typically needed. They can be done well, but I find that GUI documentation (created by admins, not vendor docs) tend to be of very low quality. They are also vulnerable to 'upgrades' where vendors change the interface design. CLI programs typically have more stable interfaces, but maybe that's just because they have been around longer...

maotx (765127) writes: on Monday October 04, @07:42PM (#33789666)

That's one thing Microsoft did right with Exchange 2007. They built it entirely around their new PowerShell CLI and then built a GUI for it. The GUI is limited compared to what you can do with the CLI, but you can get most things done. The CLI becomes extremely handy for batch jobs and exporting statistics to csv files. I'd say it's really up there with BASH in terms of scripting, data manipulation, and integration (not just Exchange but WMI, SQL, etc.)

They tried to do similar with Windows 2008 and their Core [petri.co.il] feature, but they still have to load a GUI to present a prompt...

Charles Dodgeson (248492) writes: on Monday October 04, @08:51PM (#33790206) Homepage Journal

Probably Debian would have been OK, but I was finding admin of most Linux distros a pain for exactly these reasons. I couldn't find a layer where I could do everything that I needed to do without worrying about one thing stepping on another. No doubt there are ways that I could manage a Linux system without running into different layers of management tools stepping on each other, but it was a struggle.

There were other reasons as well (although there is a lot that I miss about Linux), but I think that this was one of the leading reasons.

(NB: I realize that this is flamebait (I've got karma to burn), but that isn't my intention here.)

[Nov 28, 2017] Rees Re OO

Notable quotes:
"... The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO. ..."
"... Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser. ..."
"... In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept. acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language restrictions, "lint" program, etc.) shows up at the door when the project starts. ..."
Nov 04, 2017 | www.paulgraham.com

(Jonathan Rees had a really interesting response to Why Arc isn't Especially Object-Oriented , which he has allowed me to reproduce here.)

Here is an a la carte menu of features or properties that are related to these terms; I have heard OO defined to be many different subsets of this list.

  1. Encapsulation - the ability to syntactically hide the implementation of a type. E.g. in C or Pascal you always know whether something is a struct or an array, but in CLU and Java you can hide the difference.
  2. Protection - the inability of the client of a type to detect its implementation. This guarantees that a behavior-preserving change to an implementation will not break its clients, and also makes sure that things like passwords don't leak out.
  3. Ad hoc polymorphism - functions and data structures with parameters that can take on values of many different types.
  4. Parametric polymorphism - functions and data structures that parameterize over arbitrary values (e.g. list of anything). ML and Lisp both have this. Java doesn't quite because of its non-Object types.
  5. Everything is an object - all values are objects. True in Smalltalk (?) but not in Java (because of int and friends).
  6. All you can do is send a message (AYCDISAM) = Actors model - there is no direct manipulation of objects, only communication with (or invocation of) them. The presence of fields in Java violates this.
  7. Specification inheritance = subtyping - there are distinct types known to the language with the property that a value of one type is as good as a value of another for the purposes of type correctness. (E.g. Java interface inheritance.)
  8. Implementation inheritance/reuse - having written one pile of code, a similar pile (e.g. a superset) can be generated in a controlled manner, i.e. the code doesn't have to be copied and edited. A limited and peculiar kind of abstraction. (E.g. Java class inheritance.)
  9. Sum-of-product-of-function pattern - objects are (in effect) restricted to be functions that take as first argument a distinguished method key argument that is drawn from a finite set of simple names.
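To make item 9 concrete (together with the "all you can do is send a message" style of item 6), here is a minimal illustrative sketch in Python; the example and its names (make_account, deposit, and so on) are mine, not Rees's, and Python merely stands in for the Lisp/Scheme style the pattern comes from. An "object" is just a closure whose first argument is a method key drawn from a small, fixed set of names.

    # Illustrative sketch only (not from Rees's text): an "object" as a dispatch
    # function whose first argument is a method key -- the sum-of-product-of-function
    # pattern of item 9, in the "all you can do is send a message" style of item 6.

    def make_account(balance):
        """Return a closure that dispatches on a small, fixed set of message names."""
        def dispatch(message, *args):
            nonlocal balance
            if message == "deposit":
                balance += args[0]
                return balance
            if message == "withdraw":
                if args[0] > balance:
                    raise ValueError("insufficient funds")
                balance -= args[0]
                return balance
            if message == "balance":
                return balance
            raise ValueError(f"unknown message: {message!r}")
        return dispatch

    # Usage: the only operation available on the "object" is sending it a message.
    acct = make_account(100)
    acct("deposit", 50)       # -> 150
    acct("withdraw", 30)      # -> 120
    print(acct("balance"))    # prints 120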

So OO is not a well defined concept. Some people (eg. Abelson and Sussman?) say Lisp is OO, by which they mean {3,4,5,7} (with the proviso that all types are in the programmers' heads). Java is supposed to be OO because of {1,2,3,7,8,9}. E is supposed to be more OO than Java because it has {1,2,3,4,5,7,9} and almost has 6; 8 (subclassing) is seen as antagonistic to E's goals and not necessary for OO.

The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO.

Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser.

Perhaps part of the confusion - and you say this in a different way in your little memo - is that the C/C++ folks see OO as a liberation from a world that has nothing resembling first-class functions, while Lisp folks see OO as a prison since it limits their use of functions/objects to the style of (9.). In that case, the only way OO can be defended is in the same manner as any other game or discipline -- by arguing that by giving something up (e.g. the freedom to throw eggs at your neighbor's house) you gain something that you want (assurance that your neighbor won't put you in jail).

This is related to Lisp being oriented to the solitary hacker and discipline-imposing languages being oriented to social packs, another point you mention. In a pack you want to restrict everyone else's freedom as much as possible to reduce their ability to interfere with and take advantage of you, and the only way to do that is by either becoming chief (dangerous and unlikely) or by submitting to the same rules that they do. If you submit to rules, you then want the rules to be liberal so that you have a chance of doing most of what you want to do, but not so liberal that others nail you.

In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept. acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language restrictions, "lint" program, etc.) shows up at the door when the project starts.

I recently contributed to a discussion of anti-OO on the e-lang list. My main anti-OO message (actually it only attacks points 5/6) was http://www.eros-os.org/pipermail/e-lang/2001-October/005852.html . The followups are interesting but I don't think they're all threaded properly.

(Here are the pet definitions of terms used above:


[Nov 28, 2017] Sometimes the Old Ways Are Best by Brian Kernighan

Nov 01, 2008 | IEEE Software, pp.18-19

As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.

... ... ...

There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back in time.

But the tools I use today are mostly the same old ones-grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur stuck in the past.

On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well.

[Nov 27, 2017] Stop Writing Classes

Notable quotes:
"... If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart" and they reflect that in their code. ..."
Nov 27, 2017 | www.youtube.com

Tom coAdjoint , 1 year ago

My god I wish the engineers at my work understood this

kobac , 2 years ago

If there's something I've noticed in my career, it's that there are always some guys who desperately want to look "smart" and they reflect that in their code.

If there's something else that I've noticed in my career, it's that their code is the hardest to maintain, and for some reason they want the rest of the team to depend on them, since they are the only ones "smart enough" to understand that code and change it. Needless to say, these guys are not part of my team. Your code should be direct, simple and readable. End of story.

[Oct 26, 2017] Amazon.com customer reviews: Extreme Programming Explained: Embrace Change

Oct 26, 2017 | www.amazon.com
2.0 out of 5 stars

By Mohammad B. Abdulfatah on February 10, 2003

Programming Malpractice Explained: Justifying Chaos

To fairly review this book, one must distinguish between the methodology it presents and the actual presentation. As to the presentation, the author attempts to win the reader over with emotional persuasion and pep talk rather than with facts and hard evidence. Stories of childhood and comradeship don't classify as convincing facts to me. A single case study-the C3 project-is often referred to, but with no specific information (do note that the project was cancelled by the client after staying in development for far too long).
As to the method itself, it basically boils down to four core practices:
1. Always have a customer available on site.
2. Unit test before you code.
3. Program in pairs.
4. Forfeit detailed design in favor of incremental, daily releases and refactoring.
If you do the above, and you have excellent staff on your hands, then the book promises that you'll reap the benefits of faster development, less overtime, and happier customers. Of course, the book fails to point out that if your staff is all highly qualified people, then the project is likely to succeed no matter what methodology you use. I'm sure that anyone who has worked in the software industry for some time has noticed the sad state that most computer professionals are in nowadays.
However, assuming that you have all the topnotch developers that you desire, the outlined methodology is almost impossible to apply in real world scenarios. Having a customer always available on site would mean that the customer in question is probably a small, expendable fish in his organization and is unlikely to have any useful knowledge of its business practices. Unit testing code before it is written means that one would have to have a mental picture of what one is going to write before writing it, which is difficult without upfront design. And maintaining such tests as the code changes would be a nightmare. Programming in pairs all the time would assume that your topnotch developers are also sociable creatures, which is rarely the case, and even if they were, no one would be able to justify the practice in terms of productivity. I won't discuss why I think that abandoning upfront design is a bad practice; the whole idea is too ridiculous to debate.
Both book and methodology will attract fledgling developers with their promise of hacking as an acceptable software practice and a development universe revolving around the programmer. It's a cult, not a methodology, where the followers shall find salvation and 40-hour working weeks. Experience is a great teacher, but only a fool would learn from it alone. Listen to what the opponents have to say before embracing change, and don't forget to take the proverbial grain of salt.
Two stars out of five for the presentation for being courageous and attempting to defy the standard practices of the industry. Two stars for the methodology itself, because it underlines several common sense practices that are very useful once practiced without the extremity.
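For readers unfamiliar with practice 2 above ("unit test before you code"), here is a minimal illustrative sketch of what test-first development looks like in miniature; the function and test names are invented for this example and are not taken from the book. The test is written first and fails; the smallest implementation that makes it pass is then added.

    # Illustrative sketch of the test-first practice; names are invented for this
    # example, not taken from the book.
    import unittest

    def parse_price(text):
        """Smallest implementation that satisfies the tests below:
        turn a string such as '$1,299.00' into a float."""
        return float(text.replace("$", "").replace(",", ""))

    class TestParsePrice(unittest.TestCase):
        # These tests are written before parse_price exists; they fail until the
        # implementation above is added, then pass.
        def test_strips_currency_symbol_and_commas(self):
            self.assertEqual(parse_price("$1,299.00"), 1299.00)

        def test_plain_number_passes_through(self):
            self.assertEqual(parse_price("42"), 42.0)

    if __name__ == "__main__":
        unittest.main()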

By wiredweird on May 24, 2004
eXtreme buzzwording

Maybe it's an interesting idea, but it's just not ready for prime time.
Parts of Kent's recommended practice - including aggressive testing and short integration cycle - make a lot of sense. I've shared the same beliefs for years, but it was good to see them clarified and codified. I really have changed some of my practice after reading this and books like this.
I have two broad kinds of problem with this dogma, though. First is the near-abolition of documentation. I can't defend 2000 page specs for typical kinds of development. On the other hand, declaring that the test suite is the spec doesn't do it for me either. The test suite is code, written for machine interpretation. Much too often, it is not written for human interpretation. Based on the way I see most code written, it would be a nightmare to reverse engineer the human meaning out of any non-trivial test code. Some systematic way of ensuring human intelligibility in the code, traceable to specific "stories" (because "requirements" are part of the bad old way), would give me a lot more confidence in the approach.
The second is the dictatorial social engineering that eXtremity mandates. I've actually tried the pair programming - what a disaster. The less said the better, except that my experience did not actually destroy any professional relationships. I've also worked with people who felt that their slightest whim was adequate reason to interfere with my work. That's what Beck institutionalizes by saying that any request made of me by anyone on the team must be granted. It puts me completely at the mercy of anyone walking by. The requisite bullpen physical environment doesn't work for me either. I find that the visual and auditory distraction make intense concentration impossible.
I find the revival tent spirit of the eXtremists very off-putting. If something works, it works for reasons, not as a matter of faith. I find much too much eXhortation to believe, to go ahead and leap in, so that I will eXperience the wonderfulness for myself. Isn't that what the evangelist on the subway platform keeps saying? Beck does acknowledge unbelievers like me, but requires their exile in order to maintain the group-think of the X-cult.
Beck's last chapters note a number of exceptions and special cases where eXtremism may not work - actually, most of the projects I've ever encountered.
There certainly is good in the eXtreme practice. I look to future authors to tease that good out from the positively destructive threads that I see interwoven.

By A customer on May 2, 2004
A work of fiction

The book presents extreme programming. It is divided into three parts:
(1) The problem
(2) The solution
(3) Implementing XP.
The problem, as presented by the author, is that requirements change but current methodologies are not agile enough to cope with this. This results in the customer being unhappy. The solution is to embrace change and to allow the requirements to be changed. This is done by choosing the simplest solution, releasing frequently, and refactoring with the security of unit tests.
The basic assumption underlying the approach is that the cost of change is not exponential but reaches a flat asymptote. If this is not the case, allowing change late in the project would be disastrous. The author does not provide data to back his point of view. On the other hand there is a lot of data against a constant cost of change (see for example the discussion of cost in Code Complete). The lack of reasonable argumentation is an irremediable flaw in the book. Without some supportive data it is impossible to believe the basic assumption, or the rest of the book. This is all the more important since the only project that the author refers to was cancelled before full completion.
Many other parts of the book are unconvincing. The author presents several XP practices. Some of them are very useful. For example unit tests are a good practice. They are however better treated elsewhere (e.g., Code Complete chapter on unit test). On the other hand some practices seem overkill. Pair programming is one of them. I have tried it and found it useful to generate ideas while prototyping. For writing production code, I find that a quiet environment is by far the best (see Peopleware for supportive data). Again the author does not provide any data to support his point.
This book suggests an approach aiming at changing software engineering practices. However the lack of supportive data makes it a work of fiction.
I would suggest reading Code Complete for code level advice or Rapid Development for management level advice.

By A customer on November 14, 2002
Not Software Engineering.

Any engineering discipline is based on solid reasoning and logic, not on blind faith. Unfortunately, most of this book attempts to convince you that Extreme Programming is better based on the author's experiences. A lot of the principles are counter-intuitive and the author exhorts you to just try it out and get enlightened. I'm sorry, but these kinds of things belong in infomercials, not in s/w engineering.
The part about "code is the documentation" is the scariest part. It's true that keeping the documentation up to date is tough on any software project, but to do away with documentation is the most ridiculous thing I have heard. It's like telling people to cut off their noses to avoid colds.
Yes we are always in search of a better software process. Let me tell you that this book won't lead you there.

By Philip K. Ronzone on November 24, 2000
The "gossip magazine diet plans" style of programming.

This book reminds me of the "gossip magazine diet plans", you know, the vinegar and honey diet, or the fat-burner 2000 pill diet etc. Occasionally, people actually lose weight on those diets, but, only because they've managed to eat less or exercise more. The diet plans themselves are worthless. XP is the same - it may sometimes help people program better, but only because they are (unintentionally) doing something different. People look at things like XP because, like dieters, they see a need for change. Overall, the book is a decently written "fad diet", with ideas that are just as worthless.

By A customer on August 11, 2003
Hackers! Salvation is nigh!!

It's interesting to see the phenomenon of Extreme Programming happening in the dawn of the 21st century. I suppose historians can explain such a reaction as a truly conservative movement. Of course, serious software engineering practice is hard. Heck, documentation is a pain in the neck. And what programmer wouldn't love to have divine inspiration just before starting to write the latest web application and so enlightened by the Almighty, write the whole thing in one go, as if by magic? No design, no documentation, you and me as a pair, and the customer too. Sounds like a hacker's dream with "Imagine" as the soundtrack (sorry, John).
The Software Engineering struggle is over 50 years old and it's only logical to expect some resistance, from time to time. In the XP case, the resistance comes in one of its worst forms: evangelism. A fundamentalist cult, with very little substance, no proof of any kind, but then again if you don't have faith you won't be granted the gift of the mystic revelation. It's Gnosticism for Geeks.
Take it with a pinch of salt... well, maybe a sack of salt. If you can see through the B.S. that sells millions of dollars in books, consultancy fees, lectures, etc., you will recognise some common-sense ideas that are better explained, explored and detailed elsewhere.

By Ian K. on February 27, 2015
Long have I hated this book

Kent is an excellent writer. He does an excellent job of presenting an approach to software development that is misguided for anything but user interface code. The argument that user interface code must be gotten into the hands of users to get feedback is used to suggest that complex system code should not be "designed up front". This is simply wrong. For example, if you are going to deploy an application in the Amazon Cloud that you want to scale, you had better have some idea of how this is going to happen. Simply waiting until your application falls over and fails is not an acceptable approach.

One of the things I despise the most about the software development culture is the mindless adoption of fads. Extreme programming has been adopted by some organizations like a religious dogma.

Engineering large software systems is one of the most difficult things that humans do. There are no silver bullets and there are no dogmatic solutions that will make the difficult simple.

By Anil Philip on March 24, 2005
not found - the silver bullet

Maybe I'm too cynical because I never got to work for the successful, whiz-kid companies; Maybe this book wasn't written for me!

This book reminds me of Jacobsen's "Use Cases" book of the 1990s. 'Use Cases' was all the rage, but after several years we slowly learned the truth: Use Cases do not deal with the architecture - a necessary and good foundation for any piece of software.

Similarly, this book seems to be spotlighting Testing and taking it to extremes.

'the test plan is the design doc'

Not true. The design doc encapsulates wisdom and insight.

A picture that accurately describes the interactions of the lower-level software components is worth a thousand lines of code-reading.

Also present is an evangelistic fervor that reminds me of the rah-rah eighties' bestseller, "In Search Of Excellence" by Peters and Waterman. (Many people have since noted that most of the spotlighted companies of that book are bankrupt twenty five years later).

- in a room full of people with a bully supervisor (as I experienced in my last job at a major telco) innovation or good work is largely absent.

- deploy daily - are you kidding?

To run through the hundreds of test cases in a large application takes several hours, if not days. Not all testing can be automated.

- I have found the principle of "baby steps", one of the principles in the book, most useful in my career - it is the basis for prototyping iteratively. However, I heard it described in 1997 in a pep talk at MCI that the VP of our department gave to us. So I don't know who stole it from whom!

Lastly, I noted that the term 'XP' was used throughout the book, and the back cover has a blurb from an M$ architect. Was it simply coincidence that Windows shares the same name for its XP release? I wondered if M$ had sponsored part of the book as good advertising for Windows XP! :)

[Oct 08, 2017] Disbelieving the 'many eyes' myth

Notable quotes:
"... This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission. ..."
Oct 08, 2017 | opensource.com

Review by many eyes does not always prevent buggy code

There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth.

06 Oct 2017 | Mike Bursell (Red Hat)

Writing code is hard. Writing secure code is harder -- much harder. And before you get there, you need to think about design and architecture. When you're writing code to implement security functionality, it's often based on architectures and designs that have been pored over and examined in detail. They may even reflect standards that have gone through worldwide review processes and are generally considered perfect and unbreakable. *

However good those designs and architectures are, though, there's something about putting things into actual software that's, well, special. With the exception of software proven to be mathematically correct, ** being able to write software that accurately implements the functionality you're trying to realize is somewhere between a science and an art. This is no surprise to anyone who's actually written any software, tried to debug software, or divine software's correctness by stepping through it; however, it's not the key point of this article.

Nobody *** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible. This is why code review is a core principle of software development. And luckily -- in my view, at least -- much of the code that we use in our day-to-day lives is open source, which means that anybody can look at it, and it's available for tens or hundreds of thousands of eyes to review.

And herein lies the problem: There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth. A dangerous myth. The problems with this view are at least twofold. The first is the "if you build it, they will come" fallacy. I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it. **** In the same way, the number of open source projects was (maybe) once so small that there was a good chance that people might look at and review your code. Those days are past -- long past. Second, for many areas of security functionality -- crypto primitives implementation is a good example -- the number of suitably qualified eyes is low.

Don't think that I am in any way suggesting that the problem is any less in proprietary code: quite the opposite. Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased. "Proprietary code is more secure" is less myth, more fake news. I completely understand why companies like to keep their security software secret, and I'm afraid that the "it's to protect our intellectual property" line is too often a platitude they tell themselves when really, it's just unsafe to release it. So for me, it's open source all the way when we're looking at security software.

So, what can we do? Well, companies and other organizations that care about security functionality can -- and, I believe, have a responsibility to -- expend resources on checking and reviewing the code that implements that functionality. Alongside that, the open source community can -- and is -- finding ways to support critical projects and improve the amount of review that goes into that code. ***** And we should encourage academic organizations to train students in the black art of security software writing and review, not to mention highlighting the importance of open source software.

We can do better -- and we are doing better. Because what we need to realize is that the reason the "many eyes hypothesis" is a myth is not that many eyes won't improve code -- they will -- but that we don't have enough expert eyes looking. Yet.


* Yeah, really: "perfect and unbreakable." Let's just pretend that's true for the purposes of this discussion.

** and that still relies on the design and architecture to actually do what you want -- or think you want -- of course, so good luck.

*** Nobody who's actually written more than about five lines of code (or more than six characters of Perl).

**** I added one. They came. It was like some sort of magic.

***** See, for instance, the Linux Foundation's Core Infrastructure Initiative.

This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission.

[Oct 02, 2017] Tech's push to teach coding isn't about kids' success – it's about cutting wages, by Ben Tarnoff

Highly recommended!
IT is probably one of the most "neoliberalized" industries (even in comparison with finance). So atomization of labor and a "plantation economy" are the norm in IT. It occurs at a rather high level of wages, but with the influx of foreign programmers and IT specialists (in the past) and mass outsourcing (now) this is changing. Competition for good job positions is fierce. Dog-eat-dog competition, the dream of neoliberals. Entry-level jobs are already paying $15 an hour, if not less.
Programming is a relatively rare talent, much like the ability to play the violin. Even the amateur level is challenging. At the high level (developing large complex programs in a team while still preserving your individuality and productivity) it is extremely rare. Most "commercial" programmers are able to produce only mediocre code (which might be adequate). Only a few programmers can excel in complex software projects, sometimes even performing solo. There is also a pathological breed of "programmer junkie" (graphomania happens in programming too) who are sometimes able to single-handedly destroy large projects. That often happens with open source projects after the main developer has lost interest and abandoned the project.
It's good to allow children the chance to try their hand at coding when they otherwise may not have had that opportunity, but in no way does that mean that all of them can become professional programmers. No way. Again, the top level of programming requires a unique talent, much like the talent of a top musical performer.
Also, to get a decent entry position you either need to be extremely talented or to graduate from an Ivy League university. When applicants are abundant, resumes from less prestigious universities are not even considered; it is just easier for HR to filter applications this way.
Also, under neoliberalism cheap labor via H1-B visas floods the market and depresses wages. Many Silicon Valley companies were, so to speak, "Russian-speaking" in the late 1990s after the collapse of the USSR. Now offshoring is the dominant way to offload development to cheaper labor.
Notable quotes:
"... As software mediates more of our lives, and the power of Silicon Valley grows, it's tempting to imagine that demand for developers is soaring. The media contributes to this impression by spotlighting the genuinely inspiring stories of those who have ascended the class ladder through code. You may have heard of Bit Source, a company in eastern Kentucky that retrains coalminers as coders. They've been featured by Wired , Forbes , FastCompany , The Guardian , NPR and NBC News , among others. ..."
"... A former coalminer who becomes a successful developer deserves our respect and admiration. But the data suggests that relatively few will be able to follow their example. Our educational system has long been producing more programmers than the labor market can absorb. ..."
"... More tellingly, wage levels in the tech industry have remained flat since the late 1990s. Adjusting for inflation, the average programmer earns about as much today as in 1998. If demand were soaring, you'd expect wages to rise sharply in response. Instead, salaries have stagnated. ..."
"... Tech executives have pursued this goal in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement . Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status. ..."
"... Guest workers and wage-fixing are useful tools for restraining labor costs. But nothing would make programming cheaper than making millions more programmers. ..."
"... Silicon Valley has been unusually successful in persuading our political class and much of the general public that its interests coincide with the interests of humanity as a whole. But tech is an industry like any other. It prioritizes its bottom line, and invests heavily in making public policy serve it. The five largest tech firms now spend twice as much as Wall Street on lobbying Washington – nearly $50m in 2016. The biggest spender, Google, also goes to considerable lengths to cultivate policy wonks favorable to its interests – and to discipline the ones who aren't. ..."
"... Silicon Valley is not a uniquely benevolent force, nor a uniquely malevolent one. Rather, it's something more ordinary: a collection of capitalist firms committed to the pursuit of profit. And as every capitalist knows, markets are figments of politics. They are not naturally occurring phenomena, but elaborately crafted contraptions, sustained and structured by the state – which is why shaping public policy is so important. If tech works tirelessly to tilt markets in its favor, it's hardly alone. What distinguishes it is the amount of money it has at its disposal to do so. ..."
"... The problem isn't training. The problem is there aren't enough good jobs to be trained for ..."
"... Everyone should have the opportunity to learn how to code. Coding can be a rewarding, even pleasurable, experience, and it's useful for performing all sorts of tasks. More broadly, an understanding of how code works is critical for basic digital literacy – something that is swiftly becoming a requirement for informed citizenship in an increasingly technologized world. ..."
"... But coding is not magic. It is a technical skill, akin to carpentry. Learning to build software does not make you any more immune to the forces of American capitalism than learning to build a house. Whether a coder or a carpenter, capital will do what it can to lower your wages, and enlist public institutions towards that end. ..."
"... Exposing large portions of the school population to coding is not going to magically turn them into coders. It may increase their basic understanding but that is a long way from being a software engineer. ..."
"... All schools teach drama and most kids don't end up becoming actors. You need to give all kids access to coding in order for some can go on to make a career out of it. ..."
"... it's ridiculous because even out of a pool of computer science B.Sc. or M.Sc. grads - companies are only interested in the top 10%. Even the most mundane company with crappy IT jobs swears that they only hire "the best and the brightest." ..."
"... It's basically a con-job by the big Silicon Valley companies offshoring as many US jobs as they can, or "inshoring" via exploitation of the H1B visa ..."
"... Masters is the new Bachelors. ..."
"... I taught CS. Out of around 100 graduates I'd say maybe 5 were reasonable software engineers. The rest would be fine in tech support or other associated trades, but not writing software. Its not just a set of trainable skills, its a set of attitudes and ways of perceiving and understanding that just aren't that common. ..."
"... Yup, rings true. I've been in hi tech for over 40 years and seen the changes. I was in Silicon Valley for 10 years on a startup. India is taking over, my current US company now has a majority Indian executive and is moving work to India. US politicians push coding to drive down wages to Indian levels. ..."
Oct 02, 2017 | www.theguardian.com

This month, millions of children returned to school. This year, an unprecedented number of them will learn to code.

Computer science courses for children have proliferated rapidly in the past few years. A 2016 Gallup report found that 40% of American schools now offer coding classes – up from only 25% a few years ago. New York, with the largest public school system in the country, has pledged to offer computer science to all 1.1 million students by 2025. Los Angeles, with the second largest, plans to do the same by 2020. And Chicago, the fourth largest, has gone further, promising to make computer science a high school graduation requirement by 2018.

The rationale for this rapid curricular renovation is economic. Teaching kids how to code will help them land good jobs, the argument goes. In an era of flat and falling incomes, programming provides a new path to the middle class – a skill so widely demanded that anyone who acquires it can command a livable, even lucrative, wage.

This narrative pervades policymaking at every level, from school boards to the government. Yet it rests on a fundamentally flawed premise. Contrary to public perception, the economy doesn't actually need that many more programmers. As a result, teaching millions of kids to code won't make them all middle-class. Rather, it will proletarianize the profession by flooding the market and forcing wages down – and that's precisely the point.

At its root, the campaign for code education isn't about giving the next generation a shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

As software mediates more of our lives, and the power of Silicon Valley grows, it's tempting to imagine that demand for developers is soaring. The media contributes to this impression by spotlighting the genuinely inspiring stories of those who have ascended the class ladder through code. You may have heard of Bit Source, a company in eastern Kentucky that retrains coalminers as coders. They've been featured by Wired , Forbes , FastCompany , The Guardian , NPR and NBC News , among others.

A former coalminer who becomes a successful developer deserves our respect and admiration. But the data suggests that relatively few will be able to follow their example. Our educational system has long been producing more programmers than the labor market can absorb. A study by the Economic Policy Institute found that the supply of American college graduates with computer science degrees is 50% greater than the number hired into the tech industry each year. For all the talk of a tech worker shortage, many qualified graduates simply can't find jobs.

More tellingly, wage levels in the tech industry have remained flat since the late 1990s. Adjusting for inflation, the average programmer earns about as much today as in 1998. If demand were soaring, you'd expect wages to rise sharply in response. Instead, salaries have stagnated.

Still, those salaries are stagnating at a fairly high level. The Department of Labor estimates that the median annual wage for computer and information technology occupations is $82,860 – more than twice the national average. And from the perspective of the people who own the tech industry, this presents a problem. High wages threaten profits. To maximize profitability, one must always be finding ways to pay workers less.

Tech executives have pursued this goal in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement . Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status.

Guest workers and wage-fixing are useful tools for restraining labor costs. But nothing would make programming cheaper than making millions more programmers. And where better to develop this workforce than America's schools? It's no coincidence, then, that the campaign for code education is being orchestrated by the tech industry itself. Its primary instrument is Code.org, a nonprofit funded by Facebook, Microsoft, Google and others . In 2016, the organization spent nearly $20m on training teachers, developing curricula, and lobbying policymakers.

Silicon Valley has been unusually successful in persuading our political class and much of the general public that its interests coincide with the interests of humanity as a whole. But tech is an industry like any other. It prioritizes its bottom line, and invests heavily in making public policy serve it. The five largest tech firms now spend twice as much as Wall Street on lobbying Washington – nearly $50m in 2016. The biggest spender, Google, also goes to considerable lengths to cultivate policy wonks favorable to its interests – and to discipline the ones who aren't.

Silicon Valley is not a uniquely benevolent force, nor a uniquely malevolent one. Rather, it's something more ordinary: a collection of capitalist firms committed to the pursuit of profit. And as every capitalist knows, markets are figments of politics. They are not naturally occurring phenomena, but elaborately crafted contraptions, sustained and structured by the state – which is why shaping public policy is so important. If tech works tirelessly to tilt markets in its favor, it's hardly alone. What distinguishes it is the amount of money it has at its disposal to do so.

Money isn't Silicon Valley's only advantage in its crusade to remake American education, however. It also enjoys a favorable ideological climate. Its basic message – that schools alone can fix big social problems – is one that politicians of both parties have been repeating for years. The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric. That if we teach students the right skills, we can solve poverty, inequality and stagnation. The school becomes an engine of economic transformation, catapulting young people from challenging circumstances into dignified, comfortable lives.

This argument is immensely pleasing to the technocratic mind. It suggests that our core economic malfunction is technical – a simple asymmetry. You have workers on one side and good jobs on the other, and all it takes is training to match them up. Indeed, every president since Bill Clinton has talked about training American workers to fill the "skills gap". But gradually, one mainstream economist after another has come to realize what most workers have known for years: the gap doesn't exist. Even Larry Summers has concluded it's a myth.

The problem isn't training. The problem is there aren't enough good jobs to be trained for . The solution is to make bad jobs better, by raising the minimum wage and making it easier for workers to form a union, and to create more good jobs by investing for growth. This involves forcing business to put money into things that actually grow the productive economy rather than shoveling profits out to shareholders. It also means increasing public investment, so that people can make a decent living doing socially necessary work like decarbonizing our energy system and restoring our decaying infrastructure.

Everyone should have the opportunity to learn how to code. Coding can be a rewarding, even pleasurable, experience, and it's useful for performing all sorts of tasks. More broadly, an understanding of how code works is critical for basic digital literacy – something that is swiftly becoming a requirement for informed citizenship in an increasingly technologized world.

But coding is not magic. It is a technical skill, akin to carpentry. Learning to build software does not make you any more immune to the forces of American capitalism than learning to build a house. Whether a coder or a carpenter, capital will do what it can to lower your wages, and enlist public institutions towards that end.

Silicon Valley has been extraordinarily adept at converting previously uncommodified portions of our common life into sources of profit. Our schools may prove an easy conquest by comparison.

See also:

willyjack, 21 Sep 2017 16:56

"Everyone should have the opportunity to learn how to code. " OK, and that's what's being done. And that's what the article is bemoaning. What would be better: teach them how to change tires or groom pets? Or pick fruit? Amazingly condescending article.

MrFumoFumo , 21 Sep 2017 14:54
However, training lots of people to be coders won't automatically result in lots of people who can actually write good code. Nor will it give managers/recruiters the necessary skills to recognize which programmers are any good.

congenialAnimal -> alfredooo , 24 Sep 2017 09:57

A valid rebuttal but could I offer another observation? Exposing large portions of the school population to coding is not going to magically turn them into coders. It may increase their basic understanding but that is a long way from being a software engineer.

Just as children join art, drama or biology classes so they do not automatically become artists, actors or doctors. I would agree entirely that just being able to code is not going to guarantee the sort of income that might be aspired to. As with all things, it takes commitment, perseverance and dogged determination. I suppose ultimately it becomes the Gattaca argument.

alfredooo -> racole , 24 Sep 2017 06:51
Fair enough, but his central argument, that an overabundance of coders will drive wages in that sector down, is generally true, so in the future if you want your kids to go into a profession that will earn them 80k+ then being a "coder" is not the route to take. When coding is - like reading, writing, and arithmetic - just a basic skill, there's no guarantee having it will automatically translate into getting a "good" job.
Wiretrip , 21 Sep 2017 14:14
This article lumps everyone in computing into the 'coder' bin, without actually defining what 'coding' is. Yes there is a glut of people who can knock together a bit of HTML and JavaScript, but that is not really programming as such.

There are huge shortages of skilled developers however; people who can apply computer science and engineering in terms of analysis and design of software. These are the real skills for which relatively few people have a true aptitude.

The lack of really good skills is starting to show in some terrible software implementation decisions, such as Slack for example; written as a web app running in Electron (so that JavaScript code monkeys could knock it out quickly), but resulting in awful performance. We will see more of this in the coming years...

Taylor Dotson -> youngsteveo , 21 Sep 2017 13:53
My brother is a programmer, and in his experience these coding exams don't test anything but whether or not you took (and remember) a very narrow range of problems introduced in the first years of a computer science degree. The entire hiring process seems premised on a range of ill-founded ideas about what skills are necessary for the job and how to assess them in people. They haven't yet grasped that those kinds of exams mostly test test-taking ability, rather than intelligence, creativity, diligence, communication ability, or anything else that a job requires besides coughing up the right answer in a stressful, timed environment without outside resources.

The_Raven , 23 Sep 2017 15:45

I'm an embedded software/firmware engineer. Every similar engineer I've ever met has had the same background - starting in electronics and drifting into embedded software writing in C and assembler. It's virtually impossible to do such software without an understanding of electronics. When it goes wrong you may need to get the test equipment out to scope the hardware to see if it's a hardware or software problem. Coming from a pure computing background just isn't going to get you a job in this type of work.
waltdangerfield , 23 Sep 2017 14:42
All schools teach drama and most kids don't end up becoming actors. You need to give all kids access to coding so that some can go on to make a career out of it.
TwoSugarsPlease , 23 Sep 2017 06:13
Coding salaries will inevitably fall over time, but such skills give workers the option, once they discover that their income is no longer sustainable in the UK, of moving somewhere more affordable and working remotely.
DiGiT81 -> nixnixnix , 23 Sep 2017 03:29
Completely agree. Coding is a necessary life skill for the 21st century, but there are levels to every skill, from the basic needs of an office job to advanced and specialised work.
nixnixnix , 23 Sep 2017 00:46
Lots of people can code but very few of us ever get to the point of creating something new that has a loyal and enthusiastic user-base. Everyone should be able to code because it is or will be the basis of being able to create almost anything in the future. If you want to make a game in Unity, knowing how to code is really useful. If you want to work with large data-sets, you can't rely on Excel and so you need to be able to code (in R?). The use of code is becoming so pervasive that it is going to be like reading and writing.

All the science and engineering graduates I know can code, but none of them have ever sold stand-alone software. The argument made above is like saying that teaching everyone to write will drive down the wages of writers. Writing is useful for anyone and everyone, but only a tiny fraction of people who can write actually write novels or even newspaper columns.
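As a purely illustrative footnote to the point above about data sets outgrowing Excel: the sketch below (Python rather than R, just for illustration; the file and column names are hypothetical) streams a large CSV one row at a time and aggregates it, which is exactly the kind of job a spreadsheet struggles with once the row count runs into the millions.

    # Purely illustrative: stream a CSV too large to open comfortably in a
    # spreadsheet and total one (hypothetical) column per category.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("sales.csv", newline="") as f:                 # hypothetical file name
        for row in csv.DictReader(f):
            totals[row["region"]] += float(row["amount"])    # hypothetical columns

    for region, amount in sorted(totals.items()):
        print(f"{region}: {amount:,.2f}")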

DolyGarcia -> Carl Christensen , 22 Sep 2017 19:24
Immigrants always have a big advantage over locals, for any company, including tech companies: the government makes sure that they will stay in their place and never complain about low salaries or bad working conditions because, you know what? If the company sacks them, immigrants may be forced to leave the country where they live because their visa expires, which is never going to happen with a local. Companies always have more leverage over immigrants. Given a choice between more and less exploitable workers, companies will choose the most exploitable ones.

Which is something that Marx figured out more than a century ago, and why he insisted that socialism had to be international, which led to the founding of the First International. If workers' fights didn't go across country boundaries, companies would just play people from one country against the other. Unfortunately, at some point in time socialists forgot this very important fact.

xxxFred -> Tomix Da Vomix , 22 Sep 2017 18:52
So what's wrong with having lots of people able to code? The only argument you seem to have is that it'll lower wages. So do you think that we should stop teaching writing skills so that journalists can be paid more? And no one is going to "force" kids into high-level abstract coding practices in kindergarten, fgs. But there is ample empirical proof that young children can learn basic principles. In fact, the younger that children are exposed to anything, the better they can enhance their skills and knowledge of it later in life, and computing concepts are no different.
Tomix Da Vomix -> xxxFred , 22 Sep 2017 18:40
You're completely missing the point. Kids are forced into the programming field (or STEM as a more general term) before they develop their abstract reasoning. For that matter, you're not producing highly skilled people, but functional imbeciles and a decent labor force that will eventually lower the wages.
Conspiracy theory? So Google, FB and others paying hundreds of millions of dollars for forming a cartel to lower wages is not true? It sounds to me that you're sounding more like a 1969 denier than the Guardian is. Tech companies are not financing those initiatives because they have a good soul. Their primary drive has always been money, otherwise they wouldn't sell your personal data to earn money.

But hey, you can always sleep peacefully when your kid becomes a coder. When he is 50, who will want a Cobol or Ada programmer with 25 years of experience when you can get a 16-year-old kid straight out of high school for 1/10 of the price? Go back to sleep...

Carl Christensen -> xxxFred , 22 Sep 2017 16:49
it's ridiculous because even out of a pool of computer science B.Sc. or M.Sc. grads - companies are only interested in the top 10%. Even the most mundane company with crappy IT jobs swears that they only hire "the best and the brightest."
Carl Christensen , 22 Sep 2017 16:47
It's basically a con-job by the big Silicon Valley companies offshoring as many US jobs as they can, or "inshoring" via exploitation of the H1B visa - so they can say "see, we don't have 'qualified' people in the US - maybe when these kids learn to program in a generation." As if American students haven't been coding for decades -- and saw their salaries plummet as the H1B visa and Indian offshore firms exploded......
Declawed -> KDHughes , 22 Sep 2017 16:40
Dude, stow the attitude. I've tested code from various entities, and seen every kind of crap peddled as gold.

But I've also seen a little 5-foot giggly lady with two kids, grumble a bit and save a $100,000 product by rewriting another coder's man-month of work in a few days, without any flaws or cracks. Almost nobody will ever know she did that. She's so far beyond my level it hurts.

And yes, the author knows nothing. He's genuinely crying wolf while knee-deep in amused wolves. The last time I was in San Jose, years ago, the room was already full of people with Indian surnames. If the problem was REALLY serious, a programmer from POLAND was called in.

If you think fighting for a violinist spot is hard, try fighting for it with every spare violinist in the world. I am training my Indian replacement to do my job right now. At least the public can appreciate a good violin. Can you appreciate Duff's device?

So by all means, don't teach local kids how to think in a straight line, just in case they make a dent in the price of wages IN INDIA.... *sheesh*

Declawed -> IanMcLzzz , 22 Sep 2017 15:35
That's the best possible summarisation of this extremely dumb article. Bravo.

For those who don't know how to think of coding, like the article author, here's a few analogies :

A computer is a box that replays frozen thoughts, quickly. That is all.

Coding is just the art of explaining. Anyone who can explain something patiently and clearly, can code. Anyone who can't, can't.

Making hardware is very much like growing produce while blind. Making software is very much like cooking that produce while blind.

Imagine looking after a room full of young eager obedient children who only do exactly, *exactly*, what you told them to do, but move around at the speed of light. Imagine having to try to keep them from smashing into each other or decapitating themselves on the corners of tables, tripping over toys and crashing into walls, etc, while you get them all to play games together.

The difference between a good coder and a bad coder is almost life and death. Imagine a broth prepared with ingredients from a dozen co-ordinating geniuses and one idiot, that you'll mass produce. The soup is always far worse for the idiot's additions. The more cooks you involve, the more chance your mass produced broth will taste bad.

People who hire coders, typically can't tell a good coder from a bad coder.

Zach Dyer -> Mystik Al , 22 Sep 2017 15:18
Tech jobs will probably always be available long after you're gone - or until another mass extinction.
edmundberk -> AmyInNH , 22 Sep 2017 14:59
No, you do it in your own time. If you're not prepared to put in long days, IT is not for you in any case. It was ever thus, but more so now due to offshoring - rather than the rather obscure forces you seem to believe are important.
WithoutPurpose -> freeandfair , 22 Sep 2017 13:21
Bit more than that.
peter nelson -> offworldguy , 22 Sep 2017 12:44
Sorry, offworldguy, but you're losing this one really badly. I'm a professional software engineer in my 60's and I know lots of non-professionals in my age range who write little programs, scripts and apps for fun. I know this because they often contact me for help or advice.

So you've now been told by several people in this thread that ordinary people do code for fun or recreation. The fact that you don't know any probably says more about your network of friends and acquaintances than about the general population.

xxxFred , 22 Sep 2017 12:18
This is one of the daftest articles I've come across in a long while.
If it's possible that so many kids can be taught to code well enough so that wages come down, then that proves that the only reason we've been paying so much for development costs is the scarcity of people able to do it, not that it's intrinsically so hard that only a select few could anyway. In which case, there is no ethical argument for keeping the pools of skilled workers to some select group. Anyone able to do it should have an equal opportunity to do it.
What is the argument for not teaching coding (other than to artificially keep wages high)? Why not stop teaching the three R's, in order to boost white-collar wages in general?
Computing is an ever-increasingly intrinsic part of life, and people need to understand it at all levels. It is not just unfair, but tantamount to neglect, to fail to teach children all the skills they may require to cope as adults.
Having said that, I suspect that in another generation or two a good many lower-level coding jobs will be redundant anyway, with such code being automatically generated, and "coders" at this level will be little more than technicians setting various parameters. Even so, understanding the basics behind computing is a part of understanding the world they live in, and every child needs that.
Suggesting that teaching coding is some kind of conspiracy to force wages down is... well, it makes the moon-landing conspiracy look sensible by comparison.
timrichardson -> offworldguy , 22 Sep 2017 12:16
I think it is important to demystify advanced technology; I think that has importance in its own right. Plus, schools should expose kids to things which may spark their interest. Not everyone who does a science project goes on years later to get a PhD, but you'd think that it makes it more likely. Same as giving a kid some music lessons. There is a big difference between serious coding and the basic steps needed to automate a customer service team or a marketing program, but the people who have some mastery over automation will have an advantage in many jobs. Advanced machines are clearly going to be a huge part of our future. What should we do about it, if not teach kids how to understand these tools?
rogerfederere -> William Payne , 22 Sep 2017 12:13
tl;dr.
Mystik Al , 22 Sep 2017 12:08
As automation is about to put 40% of the workforce permanently out of work, getting into tech seems like a good idea!
timrichardson , 22 Sep 2017 12:04
This is like arguing that teaching kids to write is nothing more than a plot to flood the market for journalists. Teaching first aid and CPR does not make everyone a doctor.
Coding is an essential skill for many jobs already: 50 years ago, who would have thought you needed coders to make movies? Being a software engineer, a serious coder, is hard. In fact, it takes more than technical coding to be a software engineer: you can learn to code in a week. Software engineering is a four-year degree, and even then you've just started a career. But depriving kids of some basic insights may mean they won't have the basic skills needed in the future, even for controlling their car and house. By all means, send your kids to a school that doesn't teach coding. I won't.
James Jones -> vimyvixen , 22 Sep 2017 11:41
Did you learn SNOBOL, or is Snowball a language I'm not familiar with? (Entirely possible; as an American I never would have known Extended Mercury Autocode existed were it not for a random book acquisition at my home town library when I was a kid.)
William Payne , 22 Sep 2017 11:17
The tide that is transforming technology jobs from "white collar professional" into "blue collar industrial" is part of a larger global economic cycle.

Successful "growth" assets inevitably transmogrify into "value" and "income" assets as they progress through the economic cycle. The nature of their work transforms also. No longer focused on innovation; on disrupting old markets or forging new ones; their fundamental nature changes as they mature into optimising, cost reducing, process oriented and most importantly of all -- dividend paying -- organisations.

First, the market invests. And then, .... it squeezes.

Immature companies must invest in their team; must inspire them to be innovative so that they can take the creative risks required to create new things. This translates into high skills, high wages and "white collar" social status.

Mature, optimising companies on the other hand must necessarily avoid risks and seek variance-minimising predictability. They seek to control their human resources; to eliminate creativity; to make the work procedural, impersonal and soulless. This translates into low skills, low wages and "blue collar" social status.

This is a fundamental part of the economic cycle; but it has been playing out on the global stage, which has had the effect of hiding some of its effects.

Over the past decades, technology knowledge and skills have flooded away from "high cost" countries and towards "best cost" countries at a historically significant rate. Possibly at the maximum rate that global infrastructure and regional skills pools can support. Much of this necessarily inhumane and brutal cost cutting and deskilling has therefore been hidden by the tide of outsourcing and offshoring. It is hard to see the nature of the jobs change when the jobs themselves are changing hands at the same time.

The ever tighter ratchet of dehumanising industrialisation; productivity and efficiency continues apace, however, and as our global system matures and evens out, we see the seeds of what we have sown sail home from over the sea.

Technology jobs in developed nations have been skewed towards "growth" activities since for the past several decades most "value" and "income" activities have been carried out in developing nations. Now, we may be seeing the early preparations for the diffusion of that skewed, uneven and unsustainable imbalance.

The good news is that "Growth" activities are not going to disappear from the world. They just may not be so geographically concentrated as they are today. Also, there is a significant and attention-worthy argument that the re-balancing of skills will result in a more flexible and performant global economy as organisations will better be able to shift a wider variety of work around the world to regions where local conditions (regulation, subsidy, union activity etc...) are supportive.

For the individuals concerned it isn't going to be pretty. And of course it is just another example of the race to the bottom that pits states and public sector purse-holders against one another to win the grace and favour of globally mobile employers.

As a power play move it has a sort of inhumanly psychotic inevitability to it which is quite awesome to observe.

I also find it ironic that the only way to tame the leviathan that is the global free-market industrial system might actually be effective global governance and international cooperation within a rules-based system.

Both "globalist" but not even slightly both the same thing.

Vereto -> Wiretrip , 22 Sep 2017 11:17
Not just coders; it puts even IT Ops guys into this bin. Basically the good old "so you are working with computers" line I used to hear a lot 10-15 years ago.
Sangmin , 22 Sep 2017 11:15
You can teach everyone how to code but it doesn't necessarily mean everyone will be able to work as one. We all learn math but that doesn't mean we're all mathematicians. We all know how to write but we're not all professional writers.

I have a graduate degree in CS and have been to a coding bootcamp. Not everyone's brain is wired to become a successful coder. There is a particular way coders think. The quality of a product will stand out based on these differences.

Vereto -> Jared Hall , 22 Sep 2017 11:12
It is very hyperbolic to assume that the profit in those companies is made by decreasing wages. In my company the profit is driven by the ability to deliver products to the market. And that is limited by the number of top people (not just any coder) you can have.
KDHughes -> kcrane , 22 Sep 2017 11:06
You realise that the arts are massively oversupplied and that most artists earn very little, if anything? Which is sort of like the situation the author is warning about. But hey, he knows nothing. Congratulations, though, on writing one of the most pretentious posts I've ever read on CIF.
offworldguy -> Melissa Boone , 22 Sep 2017 10:21
So you know kids, college age people and software developers who enjoy doing it in their leisure time? Do you know any middle aged mothers, fathers, grandparents who enjoy it and are not software developers?

Sorry, I don't see coding as a leisure pursuit that is going to take off beyond a very narrow demographic and if it becomes apparent (as I believe it will) that there is not going to be a huge increase in coding job opportunities then it will likely wither in schools too, perhaps replaced by music lessons.

Bread Eater , 22 Sep 2017 10:02
From their perspective yes. But there are a lot of opportunities in tech so it does benefit students looking for jobs.
Melissa Boone -> jamesbro , 22 Sep 2017 10:00
No, because software developers probably fail more often than they succeed. Building anything worthwhile is an iterative process. And it's not just the compiler but the other devs, your designer, your PM, all looking at your work.
Melissa Boone -> peterainbow , 22 Sep 2017 09:57
It's not shallow or lazy. I also work at a tech company and it's pretty common to do that across job fields. Even in HR marketing jobs, we hire students who can't point to an internship or other kind of experience in college, not simply grades.
Vereto -> savingUK , 22 Sep 2017 09:50
It will take ages, the issue of Indian programmers is in the education system and in "Yes boss" culture.

But on the other hand most Americans are just as bad as Indians

Melissa Boone -> offworldguy , 22 Sep 2017 09:50
A lot of people do find it fun. I know many kids - high school and young college age - who code in their leisure time because they find it pleasurable to make small apps and video games. I myself enjoy it too. Your argument is like saying since you don't like to read books in your leisure time, nobody else must.

The point is your analogy isn't a good one - people who learn to code can not only enjoy it in their spare time just like music, but they can also use it to accomplish all kinds of basic things. I have a friend who's a software developer who has used code to program his Roomba to vacuum in a specific pattern and to play Candy Land with his daughter when they lost the spinner.

Owlyrics -> CapTec , 22 Sep 2017 09:44
Creativity could be added to your list. Anyone can push a button but only a few can invent a new one.
One company in the US (after it was taken over by a new owner) decided it was more profitable to import button pushers from off-shore, they lost 7 million customers (gamers) and had to employ more of the original American developers to maintain their high standard and profits.
Owlyrics -> Maclon , 22 Sep 2017 09:40
Masters is the new Bachelors.
Maclon , 22 Sep 2017 09:22
So similar to 500k people a year going to university (UK) now when it used to be 60k people a year (1980). There were never enough graduate jobs in 1980, so I can't see where the sudden increase in need for graduates has come from.
PaulDavisTheFirst -> Ethan Hawkins , 22 Sep 2017 09:17

They aren't really crucial pieces of technology except for their popularity

It's early in the day for me, but this is the most ridiculous thing I've read so far, and I suspect it will be high up on the list by the end of the day.

There's no technology that is "crucial" unless it's involved in food, shelter or warmth. The rest has its "crucialness" decided by how widespread its use is, and in the case of those 3 languages, the answer is "very".

You (or I) might not like that very much, but that's how it is.

Julian Williams -> peter nelson , 22 Sep 2017 09:12
My benchmark would be if the average new graduate in the discipline earns more or less than one of the "professions", Law, medicine, Economics etc. The short answer is that they don't. Indeed, in my experience of professions, many good senior SW developers, say in finance, are paid markedly less than the marketing manager, CTO etc. who are often non-technical.

My benchmark is not "has a car, house etc." but what 10, 15, 20 years of experience in the area generates as a relative income to another profession, like being a GP or a corporate solicitor or a civil servant (which is usually the benchmark academics use for pay scaling). It is not to denigrate, just to say that markets don't always clear to a point where the most skilled are the highest paid.

I was also suggesting that even if you are not intending to work in the SW area, being able to translate your imagination into a program that reflects your ideas is a nice life skill.

AmyInNH -> freeandfair , 22 Sep 2017 09:05
Your assumption has no basis in reality. In my experience, as soon as Clinton ramped up H1Bs, my employer would invite 6 candidates from the same college/degree/curriculum in for interviews, 5 citizens and 1 foreign student, and default the offer to the foreign student without asking interviewers a single question about the interview. Eventually, they skipped the farce of interviewing citizens altogether. That was in 1997, and it's only gotten worse. Wall St's been pretty blunt lately. It openly admits replacing US workers with imported labor, as it's the "easiest" way to "grow" the economy, even though they know they are ousting citizens from their jobs to do so.
AmyInNH -> peter nelson , 22 Sep 2017 08:59
"People who get Masters and PhD's in computer science" Feed western universities money, for degree programs that would otherwise not exist, due to lack of market demand. "someone has a Bachelor's in CS" As citizens, having the same college/same curriculum/same grades, as foreign grad. But as citizens, they have job market mobility, and therefore are shunned. "you can make something real and significant on your own" If someone else is paying your rent, food and student loans while you do so.
Ethan Hawkins -> farabundovive , 22 Sep 2017 07:40
While true, it's not the coders' fault. The managers and execs above them have intentionally created an environment where these things are secondary. What's primary is getting the stupid piece of garbage out the door for the quarterly profit outlook. Ship it and patch it.
offworldguy -> millartant , 22 Sep 2017 07:38
Do most people find it fun? I can code. I don't find it 'fun'. Thirty years ago as a young graduate I might have found it slightly fun but the 'fun' wears off pretty quick.
Ethan Hawkins -> anticapitalist , 22 Sep 2017 07:35
In my estimation PHP is an utter abomination. Python is just a little better but still very bad. Ruby is a little better but still not at all good.

Languages like PHP, Python and JS are popular for banging out prototypes and disposable junk, but you greatly overestimate their importance. They aren't really crucial pieces of technology except for their popularity and while they won't disappear they won't age well at all. Basically they are big long-lived fads. Java is now over 20 years old and while Java 8 is not crucial, the JVM itself actually is crucial. It might last another 20 years or more. Look for more projects like Ceylon, Scala and Kotlin. We haven't found the next step forward yet, but it's getting more interesting, especially around type systems.

A strong developer will be able to code well in a half dozen languages and have fairly decent knowledge of a dozen others. For me it's been many years of: Z80, x86, C, C++, Java. Also know some Perl, LISP, ANTLR, Scala, JS, SQL, Pascal, others...

millartant -> Islingtonista , 22 Sep 2017 07:26
You need a decent IDE
millartant -> offworldguy , 22 Sep 2017 07:24

One is hardly likely to 'do a bit of coding' in one's leisure time

Why not? The right problem is a fun and rewarding puzzle to solve. I spend a lot of my leisure time "doing a bit of coding"

Ethan Hawkins -> Wiretrip , 22 Sep 2017 07:12
The worst of all are the academics (on average).
Ethan Hawkins -> KatieL , 22 Sep 2017 07:09
This makes people like me with 35 years of experience shipping products on deadlines up and down every stack (from device drivers and operating systems to programming languages, platforms and frameworks to web, distributed computing, clusters, big data and ML) so much more valuable. Been there, done that.
Ethan Hawkins -> Taylor Dotson , 22 Sep 2017 07:01
It's just not true. In SV there's this giant vacuum created by Apple, Google, FB, etc. Other good companies struggle to fill positions. I know from being on the hiring side at times.
TheBananaBender -> peter nelson , 22 Sep 2017 07:00
You don't work for a major outsourcer then, like Serco, Atos, or Agilisys.
offworldguy -> LabMonkey , 22 Sep 2017 06:59
Plenty of people? I don't know of a single person outside of my work, which is teeming with programmers. Not a single friend, not my neighbours, not my wife or her extended family, not my parents. Plenty of people might do it but most people don't.
Ethan Hawkins -> finalcentury , 22 Sep 2017 06:56
Your ignorance of coding is showing. Coding IS creative.
Ricardo111 -> peter nelson , 22 Sep 2017 06:56
Agreed: by gifted I did not mean innate. It's more of a mix of having the interest, the persistence, the time, the opportunity and actually enjoying that kind of challenge.

While some of those things are to a large extent innate personality traits, others are not and you don't need max of all of them, you just need enough to drive you to explore that domain.

That said, somebody that goes into coding purely for the money and does it for the money alone is extremely unlikely to become an exceptional coder.

Ricardo111 -> eirsatz , 22 Sep 2017 06:50
I'm as senior as they get and have interviewed quite a lot of programmers for several positions, including for Technical Lead (in fact, to replace me) and so far my experience leads me to believe that people who don't have a knack for coding are much less likely to expose themselves to many different languages and techniques, and also are less experimentalist, thus being far less likely to have those moments of transcending merely being aware of the visible and obvious to discover the concerns and concepts behind what one does. Without those moments that open the door to the next Universe of concerns and implications, one cannot do state transitions such as Coder to Technical Designer or Technical Designer to Technical Architect.

Sure, you can get the title and do the things from the books, but you will not get WHY are those things supposed to work (and when they will not work) and thus cannot adjust to new conditions effectively and will be like a sailor that can't sail away from sight of the coast since he can't navigate.

All this gets reflected in many things that enhance productivity, from the early ability to quickly piece together solutions for a new problem out of past solutions for different problems to, later, conceiving software architecture designs fitted to the typical usage pattern in the industry for which the software is going to be made.

LabMonkey , 22 Sep 2017 06:50
From the way our IT department is going, needing millions of coders is not the future. It'll be a minority of developers at the top, and an army of low wage monkeys at the bottom who can troubleshoot from a script - until AI comes along that can code faster and more accurately.
LabMonkey -> offworldguy , 22 Sep 2017 06:46

One is hardly likely to 'do a bit of coding' in one's leisure time

Really? I've programmed a few simple videogames in my spare time. Plenty of people do.

CapTec , 22 Sep 2017 06:29
Interesting piece that's fundamentally flawed. I'm a software engineer myself. There is a reason a University education of a minimum of three years is the base line for a junior developer or 'coder'.

Software engineering isn't just writing code. I would say 80% of my time is spent designing and structuring software before I even touch the code.

Explaining software engineering as a discipline at a high level to people who don't understand it is simple.

Most of us who learn to drive learn a few basics about the mechanics of a car. We know that brake pads need to be replaced, we know that fuel is pumped into an engine when we press the gas pedal. Most of us know how to change a bulb if it blows.

The vast majority of us wouldn't be able to replace a head gasket or clutch though. Just knowing the basics isn't enough to make you a mechanic.

Studying in school isn't enough to produce software engineers. Software engineering isn't just writing code, it's cross-discipline. We also need to understand the science behind the computer, we need to understand logic, data structures, timings, how to manage memory, security, how databases work etc.

A few years of learning at school isn't nearly enough, a degree isn't enough on its own due to the dynamic and ever evolving nature of software engineering. Schools teach technology that is out of date and typically don't explain the science very well.

This is why most companies don't want new developers, they want people with experience and multiple skills.

Programming is becoming cool and people think that because of that it's easy to become a skilled developer. It isn't. It takes time and effort and most kids give up.

French was on the national curriculum when I was at school. Most people including me can't hold a conversation in French though.

Ultimately there is a SKILL shortage. And that's because skill takes a long time, successes and failures to acquire. Most people just give up.

This article is akin to saying 'schools are teaching basic health to reduce the wages of Doctors'. It didn't happen.

offworldguy -> thecurio , 22 Sep 2017 06:19
There is a difference. When you teach people music you teach a skill that can be used for a lifetime's enjoyment. One might sit at a piano in later years and play. One is hardly likely to 'do a bit of coding' in one's leisure time.

The other thing is how good are people going to get at coding and how long will they retain the skill if not used? I tend to think maths is similar to coding and most adults have pretty terrible maths skills not venturing far beyond arithmetic. Not many remember how to solve a quadratic equation or even how to rearrange some algebra.

One more thing is we know that if we teach people music they will find a use for it, if only in their leisure time. We don't know that coding will be in any way useful because we don't know if there will be coding jobs in the future. AI might take over coding but we know that AI won't take over playing piano for pleasure.

If we want to teach logical thinking then I think maths has always done this and we should make sure people are better at maths.

Alex Mackaness , 22 Sep 2017 06:08
Am I missing something here? Being able to code is a skill that is a useful addition to the skill armoury of a youngster entering the work place. Much like reading, writing, maths... Not only is it directly applicable and pervasive in our modern world, it is built upon logic.

The important point is that American schools are not ONLY teaching youngsters to code, and producing one dimensional robots... instead coding makes up one part of their overall skill set. Those who wish to develop their coding skills further certainly can choose to do so. Those who specialise elsewhere are more than likely to have found the skills they learnt whilst coding useful anyway.

I struggle to see how there is a hidden capitalist agenda here. I would argue learning the basics of coding is simply becoming seen as an integral part of the school curriculum.

thecurio , 22 Sep 2017 05:56
The word "coding" is shorthand for "computer programming" or "software development" and it masks the depth and range of skills that might be required, depending on the application.

This subtlety is lost, I think, on politicians and perhaps the general public. Asserting that teaching lots of people to code is a sneaky way to commoditise an industry might have some truth to it, but remember that commoditisation (or "sharing and re-use" as developers might call it) is nothing new. The creation of freely available and re-usable software components and APIs has driven innovation, and has put much power in the hands of developers who would not otherwise have the skill or time to tackle such projects.

There's nothing to fear from teaching more people to "code", just as there's nothing to fear from teaching more people to "play music". These skills simply represent points on a continuum.

There's room for everyone, from the kid on a kazoo all the way to Coltrane at the Village Vanguard.

sbw7 -> ragingbull , 22 Sep 2017 05:44
I taught CS. Out of around 100 graduates I'd say maybe 5 were reasonable software engineers. The rest would be fine in tech support or other associated trades, but not writing software. It's not just a set of trainable skills, it's a set of attitudes and ways of perceiving and understanding that just aren't that common.
offworldguy , 22 Sep 2017 05:02
I can't understand the rush to teach coding in schools. First of all I don't think we are going to be a country of millions of coders and secondly if most people have the skills then coding is hardly going to be a well paid job. Thirdly you can learn coding from scratch after school like people of my generation did. You could argue that it is part of a well rounded education but then it is as important for your career as learning Shakespeare, knowing what an oxbow lake is or being able to do calculus: most jobs just won't need you to know.
savingUK -> yannick95 , 22 Sep 2017 04:35
While you roll on the floor laughing, these countries will slowly but surely get their act together. That is how they work. There are top quality coders over there and they will soon be promoted into a position to organise the others.

You are probably too young to remember when people laughed at electronic products when they were made in Japan and then Taiwan. History will repeat itself.

zii000 -> JohnFreidburg , 22 Sep 2017 04:04
Yes it's ironic and no different here in the UK. Traditionally Labour was the party focused on dividing the economic pie more fairly, Tories on growing it for the benefit of all. It's now completely upside down with Tories paying lip service to the idea of pay rises but in reality supporting this deflationary race to the bottom, hammering down salaries and so shrinking discretionary spending power which forces price reductions to match and so more pressure on employers to cut costs ... ad infinitum.
Labour now favour policies which would cause an expansion across the entire economy through pay rises and dramatically increased investment with perhaps more tolerance of inflation to achieve it.
ID0193985 -> jamesbro , 22 Sep 2017 03:46
Not surprising if they're working for a company that is cold-calling people - which should be banned in my opinion. Call centres providing customer support are probably less abuse-heavy since the customer is trying to get something done.
vimyvixen , 22 Sep 2017 02:04
I taught myself to code in 1974. Fortran, COBOL were first. Over the years as an aerospace engineer I coded in numerous languages ranging from PLM, Snowball, Basic, and more assembly languages than I can recall, not to mention deep down in machine code on more architectures than most know even existed. Bottom line is that coding is easy. It doesn't take a genius to code, just another way of thinking. Consider all the bugs in the software available now. These "coders", not sufficiently trained, need adult supervision by engineers who know what they are doing for computer systems that are important such as the electrical grid, nuclear weapons, and safety critical systems. If you want to program toy apps then code away; if you want to do something important, learn engineering AND coding.
Dwight Spencer , 22 Sep 2017 01:44
Laughable. It takes only an above-average IQ to code. Today's coders are akin to the auto mechanics of the 1950s where practically every high school had auto shop instruction . . . nothing but a source of cheap labor for doing routine implementations of software systems using powerful code libraries built by REAL software engineers.
sieteocho -> Islingtonista , 22 Sep 2017 01:19
That's a bit like saying that calculus is more valuable than arithmetic, so why teach children arithmetic at all?

Because without the arithmetic, you're not going to get up to the calculus.

JohnFreidburg -> Tommyward , 22 Sep 2017 01:15
I disagree. Technology firms are just like other firms. Why then the collusion not to pay more to workers coming from other companies? To believe that they are anything else is naive. The author is correct. We need policies that actually grow the economy and not leaders who cave to what the CEOs want like Bill Clinton did. He brought NAFTA at the behest of CEOs and all it ended up doing was ripping apart the rust belt and ushering in Trump.
Tommyward , 22 Sep 2017 00:53
So the media always needs some bad guys to write about, and this month they seem to have it in for the tech industry. The article is BS. I interview a lot of people to join a large tech company, and I can guarantee you that we aren't trying to find cheaper labor, we're looking for the best talent.

I know that lots of different jobs have been outsourced to low cost areas, but these days the top companies are instead looking for the top talent globally.

I see this article as a hit piece against Silicon Valley, and it doesn't fly in the face of the evidence.

finalcentury , 22 Sep 2017 00:46
This has got to be the most cynical and idiotic social interest piece I have ever read in the Guardian. Once upon a time it was very helpful to learn carpentry and machining, but now, even if you are learning those, you will get a big and indispensable headstart if you have some logic and programming skills. The fact is, almost no matter what you do, you can apply logic and programming skills to give you an edge. Even journalists.
hoplites99 , 22 Sep 2017 00:02
Yup, rings true. I've been in hi tech for over 40 years and seen the changes. I was in Silicon Valley for 10 years on a startup. India is taking over, my current US company now has a majority Indian executive and is moving work to India. US politicians push coding to drive down wages to Indian levels.

On the bright side I am old enough and established enough to quit tomorrow, its someone else's problem, but I still despise those who have sold us out, like the Clintons, the Bushes, the Googoids, the Zuckerboids.

liberalquilt -> yannick95 , 21 Sep 2017 23:45
Sure markets existed before governments, but capitalism didn't, can't in fact. It needs the organs of state, the banking system, an education system, and an infrastructure.
thegarlicfarmer -> canprof , 21 Sep 2017 23:36
Then teach them other things but not coding! Here in Australia every child of school age has to learn coding. Now tell me that everyone of them will need it? Look beyond computers as coding will soon be automated just like every other job.
Islingtonista , 21 Sep 2017 22:25
If you have never coded then you will not appreciate how labour intensive it is. Coders effectively use line editors to type in, line by line, the instructions. And syntax is critical; add a comma when you meant a semicolon and the code doesn't work properly. Yeah, we use frameworks and libraries of already written subroutines, but, in the end, it is all about manually typing in the code.

Which is an expensive way of doing things (hence the attractions of 'off-shoring' the coding task to low cost economies in Asia).

And this is why teaching kids to code is a waste of time.

Already, AI based systems are addressing the task of interpreting high level design models and simply generating the required application.

One of the first uses templates and a smart chatbot to enable non-tech business people to build their websites. By describing in non-coding terms what they want, the chatbot is able to assemble the necessary components and make the requisite template amendments to build a working website.

Much cheaper than hiring expensive coders to type it all in manually.

It's early days yet, but coding may well be one of the big losers to AI automation along with all those back office clerical jobs.

Teaching kids how to think about design rather than how to code would be much more valuable.

jamesbro -> peter nelson , 21 Sep 2017 21:31
Thick-skinned? Just because you might get a few error messages from the compiler? Call centre workers have to put up with people telling them to fuck off eight hours a day.
Joshua Ian Lee , 21 Sep 2017 21:03
Spot on. Society will never need more than 1% of its people to code. We will need far more garbage men. There are only so many (relatively) good jobs to go around and it's about competing to get them.
canprof , 21 Sep 2017 20:53
I'm a professor (not of computer science) and yet, I try to give my students a basic understanding of algorithms and logic, to spark an interest and encourage them towards programming. I have no skin in the game, except that I've seen unemployment first-hand, and want them to avoid it. The best chance most of them have is to learn to code.
Evelita , 21 Sep 2017 14:35
Educating youth does not drive wages down. It drives our economy up. China, India, and other countries are training youth in programming skills. Educating our youth means that they will be able to compete globally. This is the standard GOP stand that we don't need to educate our youth, but instead fantasize about high-paying manufacturing jobs miraculously coming back.

Many jobs, including new manufacturing jobs have an element of coding because they are automated. Other industries require coding skills to maintain web sites and keep computer systems running. Learning coding skills opens these doors.

Coding teaches logic, an essential thought process. Learning to code, like learning anything, increases the brain's ability to adapt to new environments, which is essential to our survival as a species. We must invest in educating our youth.

cwblackwell , 21 Sep 2017 13:38
"Contrary to public perception, the economy doesn't actually need that many more programmers." This really looks like a straw man introducing a red herring. A skill can be extremely valuable for those who do not pursue it as a full time profession.

The economy doesn't actually need that many more typists, pianists, mathematicians, athletes, dietitians. So, clearly, teaching typing, the piano, mathematics, physical education, and nutrition is a nefarious plot to drive down salaries in those professions. None of those skills could possibly enrich the lives or enhance the productivity of builders, lawyers, public officials, teachers, parents, or store managers.

DJJJJJC , 21 Sep 2017 14:23

A study by the Economic Policy Institute found that the supply of American college graduates with computer science degrees is 50% greater than the number hired into the tech industry each year.

You're assuming that all those people are qualified to work in software because they have a piece of paper that says so, but that's not a valid assumption. The quality of computer science degree courses is generally poor, and most people aren't willing or able to teach themselves. Universities are motivated to award degrees anyway because if they only awarded degrees to students who are actually qualified then that would reflect very poorly on their quality of teaching.

A skills shortage doesn't mean that everyone who claims to have a skill gets hired and there are still some jobs left over that aren't being done. It means that employers are forced to hire people who are incompetent in order to fill all their positions. Many people who get jobs in programming can't really do it and do nothing but create work for everyone else. That's why most of the software you use every day doesn't work properly. That's why competent programmers' salaries are still high in spite of the apparently large number of "qualified" people who aren't employed as programmers.

[Sep 24, 2017] Do Strongly Typed Languages Reduce Bugs?

Sep 24, 2017 | developers.slashdot.org

(acolyer.org) Posted by EditorDavid on Saturday September 23, 2017 @05:19PM from the dynamic-discussions dept. "Static vs dynamic typing is always one of those topics that attracts passionately held positions," writes the Morning Paper -- reporting on an "encouraging" study that attempted to empirically evaluate the efficacy of statically-typed systems on mature, real-world code bases. The study was conducted by Christian Bird at Microsoft's "Research in Software Engineering" group with two researchers from University College London. Long-time Slashdot reader phantomfive writes: This study looked at bugs found in open source Javascript code. Looking through the commit history, they enumerated the bugs that would have been caught if a more strongly typed language (like Typescript) had been used. They found that a strongly typed language would have reduced bugs by 15%. Does this make you want to avoid Python?
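
To picture the class of defect the study is counting, here is a minimal sketch in Python with optional type annotations (the function, variable names and values are invented for illustration, not taken from the study). A static checker such as mypy flags the mismatch before the program runs; without the check, the same mistake only surfaces at runtime, which is the kind of bug the study credits a typed front end like TypeScript with catching.

    # Minimal sketch: the kind of bug a static type checker reports up front.
    # Names and values here are illustrative only.

    def total_price(quantity: int, unit_price: float) -> float:
        """Return the total price for one order line."""
        return quantity * unit_price

    order_quantity = "3"   # a str (e.g. straight from a web form), not the promised int

    # A checker such as mypy reports, roughly:
    #   error: Argument 1 to "total_price" has incompatible type "str"; expected "int"
    # Run dynamically, the mistake only shows up when this call executes:
    try:
        print(total_price(order_quantity, 2.5))
    except TypeError as exc:
        print("runtime failure:", exc)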

[Sep 16, 2017] Google Publicly Releases Internal Developer Documentation Style Guide

Sep 12, 2017 | developers.slashdot.org

(betanews.com)

Posted by BeauHD on Tuesday September 12, 2017

@06:00AM from the free-for-all dept.

BrianFagioli shares a report from BetaNews: The documentation aspect of any project is very important, as it can help people to both understand it and track changes. Unfortunately, many developers aren't very interested in the documentation aspect, so it often gets neglected. Luckily, if you want to maintain proper documentation and stay organized, today, Google is releasing its internal developer documentation style guide.

This can quite literally guide your documentation, giving you a great starting point and keeping things consistent. Jed Hartman, Technical Writer at Google, says, "For some years now, our technical writers at Google have used an internal-only editorial style guide for most of our developer documentation. In order to better support external contributors to our open source projects, such as Kubernetes, AMP, or Dart, and to allow for more consistency across developer documentation, we're now making that style guide public.

If you contribute documentation to projects like those, you now have direct access to useful guidance about voice, tone, word choice, and other style considerations. It can be useful for general issues, like reminders to use second person, present tense, active voice, and the serial comma; it can also be great for checking very specific issues, like whether to write 'app' or 'application' when you want to be consistent with the Google Developers style."

You can access Google's style guide here.

[Oct 14, 2001] In Defense of Not-Invented-Here Syndrome by Joel Spolsky

Joel on Software

Time for a pop quiz.

1. Code Reuse is:

a) Good
b) Bad

2. Reinventing the Wheel is:

a) Good
b) Bad

3. The Not-Invented-Here Syndrome is:

a) Good
b) Bad

Of course, everybody knows that you should always leverage other people's work. The correct answers are, of course, 1(a) 2(b) 3(b).

Right?

Not so fast, there!

The Not-Invented-Here Syndrome is considered a classic management pathology, in which a team refuses to use a technology that they didn't create themselves. People with NIH syndrome are obviously just being petty, refusing to do what's in the best interest of the overall organization because they can't find a way to take credit. (Right?) The Boring Business History Section at your local megabookstore is rife with stories about stupid teams that spend millions of dollars and twelve years building something they could have bought at Egghead for $9.99. And everybody who has paid any attention whatsoever to three decades of progress in computer programming knows that Reuse is the Holy Grail of all modern programming systems.

Right. Well, that's what I thought, too. So when I was the program manager in charge of the first implementation of Visual Basic for Applications, I put together a careful coalition of four, count them, four different teams at Microsoft to get custom dialog boxes in Excel VBA. The idea was complicated and fraught with interdependencies. There was a team called AFX that was working on some kind of dialog editor. Then we would use this brand new code from the OLE group which let you embed one app inside another. And the Visual Basic team would provide the programming language behind it. After a week of negotiation I got the AFX, OLE, and VB teams to agree to this in principle.

I stopped by Andrew Kwatinetz's office. He was my manager at the time and taught me everything I know. "The Excel development team will never accept it," he said. "You know their motto? 'Find the dependencies -- and eliminate them.' They'll never go for something with so many dependencies."

In-ter-est-ing. I hadn't known that. I guess that explained why Excel had its own C compiler.

By now I'm sure many of my readers are rolling on the floor laughing. "Isn't Microsoft stupid," you're thinking, "they refused to use other people's code and they even had their own compiler just for one product."

Not so fast, big boy! The Excel team's ruggedly independent mentality also meant that they always shipped on time, their code was of uniformly high quality, and they had a compiler which, back in the 1980s, generated pcode and could therefore run unmodified on Macintosh's 68000 chip as well as Intel PCs. The pcode also made the executable file about half the size that Intel binaries would have been, which loaded faster from floppy disks and required less RAM.

"Find the dependencies -- and eliminate them." When you're working on a really, really good team with great programmers, everybody else's code, frankly, is bug-infested garbage, and nobody else knows how to ship on time. When you're a cordon bleu chef and you need fresh lavender, you grow it yourself instead of buying it in the farmers' market, because sometimes they don't have fresh lavender or they have old lavender which they pass off as fresh.

A Comparative Review of LOCC and CodeCount

This paper provides one review of the comparative strengths and weaknesses of LOCC and CodeCount, two tools for calculating the size of software source code. The next two sections provide quick overviews of CodeCount and LOCC. The final section presents the perceived strengths and weaknesses of the two tools. A caveat: although I am attempting to be objective in this review, I have in-depth knowledge of LOCC and only very superficial knowledge of CodeCount. Comments and corrections solicited and welcomed.

CodeCount

The incarnations of CodeCount can be divided into two basic flavors. The first flavor, which I will dub as "Classic" CodeCount, has been in production use for over a decade. The second flavor, dubbed "CSCI" CodeCount, is the result of a set of student projects to support various "object" languages by extending Classic CodeCount.

Classic CodeCount is a relatively old and mature project. The first work on Classic CodeCount began in the late 1980's. CodeCount reflects its early origins in three important ways.

  1. It is written in ANSI C, with all the typical strengths (speed) and weaknesses (lack of higher level program structuring mechanisms, etc.) of that language.

  2. It was originally designed to provide size metric information appropriate to the kinds of languages targeted at that time (assembly languages and non-OO languages such as C and Fortran). Classic CodeCount does not employ a grammar and so cannot reliably recognize object-oriented language structures within a file, such as packages, classes, methods, inner classes, etc.

  3. It has been used extensively by many industrial partners for over a decade, and has presumably been used to count many millions of lines of code. It can be presumed to be well-tested and reliable for the kinds of measures it produces.

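To make the physical versus logical SLOC distinction concrete, here is a minimal counting sketch in Python. It is not CodeCount itself: the statement heuristic (one logical line per semicolon or opening brace in C-like source) and the comment handling are deliberately crude assumptions, whereas a real counter applies much more careful rules.

    import sys

    def count_sloc(path):
        """Very rough physical/logical SLOC counts for one C-like source file."""
        physical = logical = comments = blanks = 0
        in_block_comment = False
        with open(path, encoding="utf-8", errors="replace") as src:
            for line in src:
                stripped = line.strip()
                if in_block_comment:
                    comments += 1
                    if "*/" in stripped:
                        in_block_comment = False
                    continue
                if not stripped:
                    blanks += 1
                elif stripped.startswith("//"):
                    comments += 1
                elif stripped.startswith("/*"):
                    comments += 1
                    if "*/" not in stripped:
                        in_block_comment = True
                else:
                    physical += 1
                    # crude logical-SLOC proxy: one statement per ';' plus each block opener
                    logical += stripped.count(";") + stripped.count("{")
        return physical, logical, comments, blanks

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            p, l, c, b = count_sloc(name)
            print(f"{name}: physical={p} logical={l} comment={c} blank={b}")

Run as "python sloc_sketch.py foo.c" it prints one summary line per file; this report format is made up for the sketch and is not what CodeCount itself emits.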

Libre Software Engineering - Tools - Other

CODECount

The CodeCount toolset is a collection of tools designed to automate the collection of source code sizing information. The CodeCount toolset spans multiple programming languages and utilizes one of two possible Source Lines of Code (SLOC) definitions, physical or logical. CODECount's license is not Libre Software, as it imposes additional restrictions.

cflow

The cflow command analyzes the C, C++, yacc, lex, assembler, and object files and writes a chart of their external references to standard output. Cflow2vcg converts the result of the cflow utility to a VCG format. See also Cflow2Cflow

calltree

calltree is a static call tree generator for C programs. It parses a collection of input files and builds a graph that represents the static call structure of the files.

perl_metrics

perl-metrics is intended to help Perl programmers write better code by making them more aware of their coding style. In particular, one would like to know the code-to-comment ratio, the average number of lines per subroutine, and the longest subroutine.
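
As a rough illustration of how such numbers can be gathered, here is a small sketch in Python that scans a Perl file with regular expressions. It is not perl-metrics itself; it ignores POD, strings and heredocs, and it finds subroutine bodies only by tracking brace depth, so the results are approximations at best.

    import re
    import sys

    def perl_style_metrics(path):
        """Approximate style metrics for one Perl source file (regex-based sketch)."""
        code_lines = comment_lines = 0
        sub_lengths = []            # line count of each subroutine found
        in_sub = seen_open = False  # inside a sub / seen its opening brace yet
        depth = current_len = 0     # brace depth and length of the current sub

        with open(path, encoding="utf-8", errors="replace") as src:
            for line in src:
                stripped = line.strip()
                if not stripped:
                    continue
                if stripped.startswith("#"):
                    comment_lines += 1
                    continue
                code_lines += 1
                if not in_sub and re.match(r"sub\s+\w+", stripped):
                    in_sub, seen_open, depth, current_len = True, False, 0, 0
                if in_sub:
                    current_len += 1
                    opens, closes = stripped.count("{"), stripped.count("}")
                    if opens:
                        seen_open = True
                    depth += opens - closes
                    if seen_open and depth <= 0:
                        sub_lengths.append(current_len)
                        in_sub = False

        return {
            "code_lines": code_lines,
            "comment_lines": comment_lines,
            "comment_to_code_ratio": round(comment_lines / code_lines, 2) if code_lines else 0.0,
            "subroutines": len(sub_lengths),
            "avg_sub_length": round(sum(sub_lengths) / len(sub_lengths), 1) if sub_lengths else 0,
            "longest_sub": max(sub_lengths, default=0),
        }

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            print(name, perl_style_metrics(name))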

c_count

c_count counts lines, statements, and other simple measures of C/C++/Java source programs. It is not lex/yacc based, and is easily portable to a variety of systems.

Pythius

Pythius is a set of tools to assess the quality of Python code. This is commonly done by applying different code metrics. Simple code metrics are the ratio between comments and code lines, module and function size, etc.

Free Code Graphing Project

Produces a PostScript representation of a project's source code. The fcgp comes from Rusty Russell's Linux Kernel Graphing Project (lgp)

Metrics collection tools for C and C++ Source Code

This page offers access to a collection of static code analysis tools that compute various metrics defined on C and C++ source code. The metrics are primarily size and complexity of various types (lines of code, Halstead, McCabe, etc.).

Python Cyclomatic Complexity Analyzer

Perl-written script that calculates the cyclomatic complexity for Python scripts.
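
The metric itself is easy to approximate: start at 1 and add 1 for every construct that opens another independent path through the code. Here is a minimal sketch for Python source using only the standard ast module; it is not the Perl script described above, and the exact set of node types counted is a common but not canonical choice.

    import ast
    import sys

    # Node types that open an extra execution path; counting them plus 1
    # approximates McCabe's cyclomatic complexity.
    BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.AsyncFor,
                    ast.While, ast.ExceptHandler, ast.comprehension)

    def cyclomatic_complexity(func):
        """Approximate complexity of one function definition node."""
        complexity = 1
        for node in ast.walk(func):
            if isinstance(node, BRANCH_NODES):
                complexity += 1
            elif isinstance(node, ast.BoolOp):
                # 'a and b and c' adds two extra short-circuit branches
                complexity += len(node.values) - 1
        return complexity

    def report(path):
        with open(path, encoding="utf-8") as src:
            tree = ast.parse(src.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                # branches inside nested defs are counted into the enclosing
                # function as well -- a simplification of this sketch
                print(f"{path}:{node.lineno} {node.name} complexity={cyclomatic_complexity(node)}")

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            report(name)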

CodeWeb

Open source code for a wide range of software is now in abundance on the net. The goal of the CodeWeb project is to data mine

QSM Source Code Counter Links -- list several programs for counting lines, statements, etc.

Tutorials

The Elements of Programming Style

Brian W. Kernighan, P. J. Plauger / Paperback / Published 1988

First published in 1974, with a second edition in 1978, so it is now slightly outdated. Code Complete: A Practical Handbook of Software Construction contains similar material if you want a more recent book, but this book pioneered the field. Contains an interesting discussion of how to transform a program into a better one and of common pitfalls in programming. The authors have provided a useful set of tips for coding (and sometimes design).

The road to better programming: Introduction and chapter 1

Teodor Zlatanov
Programmer, Gold Software Systems
November 1, 2001

The success or failure of any software programming group depends largely on its ability to work together as a team. From manager to members, to well-conceived, yet dynamic guidelines, the team as a whole is defined by the unison of its parts. Shattering the myth of the faultless programmer, Teodor dismantles the uninspired software group and then builds it up again into a synchronized, energized ensemble.

Welcome to a series of articles on developerWorks comprising a complete guide to better programming in Perl. In this first installment, Teodor introduces his book and looks at coding guidelines from a fresh perspective.

This is the book for the beginner to intermediate Perl programmer. But even an advanced Perl programmer can find the majority of the chapters exciting and relevant, from the tips of Part I to the project management tools presented in Part II to the Parse::RecDescent source code analysis scripts in Part III.

The words program and script are used interchangeably. In Perl, the two mean pretty much the same thing. A program can, indeed, be made up of many scripts, and a script can contain many programs, but for simplicity's sake, we will use the two terms with the understanding that one script file contains only one program.

Goals of the book

Part I is full of tips to improve your Perl skills, ranging from best programming practices to code debugging. It does not teach you Perl programming. There are many books with that purpose, and they would be hard to surpass in clarity and completeness.

Part II will teach you how a small Perl software team can be better managed with the standard tools of software project management. Often, Perl programmers embody the "herd of cats" view of software teams. Part II will apply project management tools to a small (2- to 6-person) Perl development team, and will examine how managing such a team successfully is different from the classic project management approach.

Part III will develop tools to analyze source code (Perl and C examples will be developed) and to help you manage your team better. Analysis of source code is superficial at best today, ranging from the obvious and irrelevant "lines of code" metrics to function points (see Resources later in this article), which do not help in understanding the programmer's mindset. Understanding the programmer's mindset will be the goal of Part III. Tools will be developed that help track metrics such as comment legibility and consistency, repetitiveness of code, and code legibility. These metrics will be introduced as a part of a software project, not its goal.

There is no perfection in programming, only its pursuit. Good programmers learn something new every day and continually improve their skills and technique. Rigidity and inflexibility are forever the enemy of ingenuity and creativity.

In pursuit of perfection

The most common mistake a programmer can make is not in the list of bugs for his program. It is not a function of the programmer's age or language of choice. It is, simply, the assumption that his abilities are complete and there is no room for improvement.

Arguably, such is human nature; but I would argue that human nature is always on the prowl for knowledge and improvement. Only hubris and the fear of being proven wrong hold us back. Resisting them both will not only make a better programmer, but a better person as well.

The social interactions and the quality of the people, I believe, are what create successful software teams more than any other factors. Constant improvement in a programmer's skills and the ability to take criticism are fundamental requirements for members of a software team. These requirements should precede all others.

Think back to the last time you changed your style. Was it the new algorithm you learned, or commenting style, or simply a different way of naming your variables? Whatever it was, it was only a step along the way, not the final change that made your code complete and perfect.

A programmer shouldn't be required to follow precise code guidelines to the letter; nor should he improvise those guidelines to get the job done. Consider an orchestra -- neither static, soulless performers nor wildly improvisational virtuosos (though the latter is more acclaimed). A static performer simply follows the notes without putting effort and soul into the music; the virtuoso must restrain herself from errantly exploring new pieces of the melody or marching to the beat of her own drum.

Striking a concordant tone
Code guidelines are like the written directions a musician follows -- when to come on, when to come off, how fast to play, what beat, etc. The notes themselves, to extend the analogy somewhat precariously, are the goals of the project -- sometimes lone high notes, and sometimes a harmony of instruments.

In an orchestra, there is a conductor that directs but does not tell every musician how to play, and everyone has a part in the performance. The conductor creates harmony. Because music has been around for many more centuries than the art of programming, perhaps these are lessons well worth learning. The software project manager is neither a gorilla nor a walled-off convict. She is a part of the team just like everyone else.

The guidelines presented in this series are not to be blindly extracted into an official coding policy. The coding standards in your project are uniquely yours, and they reflect your very own orchestral composition. Don't force programmers to do things exactly right, thereby creating an atmosphere of distrust and fear. You can forget about code reviews, or admission of responsibility for the smallest bugs.

Instead, present the guidelines and watch how people react. If no one adopts the comment format you like, perhaps it's a bad format. If people write without cleverness, perhaps you have been too clever in the guidelines. If the debugger you thought everyone must run is sitting in a dusty room, still packed, then rethink the need for Whizzo Debugger 3.4. Maybe everyone is happy with Acme Debugger 1.0 for a reason.

Of course, programmers can be stubborn for no reason at all, only out of reluctance to change. It's hard to convince people that 20 years of experience do not entitle them to an organized religion. On the other hand, freshly minted college graduates often lack self-confidence. Recognize and adapt to those characteristics, and to all the others of your team. Present ideas to the stubbornly experienced in such a way that they feel they have helped with it. Build up the college graduates with guidance and support until they can fly on their own.

All this, just for a few coding guidelines?
Coding guidelines are fundamental to a software team, just as direction and harmony are to music. They create consistency and cohesiveness. New team members will feel welcome and gel more quickly. Ye olde team members will accept newcomers more readily. The loss of a team member will not cripple the project just because someone can't understand someone else's code.

Keep in mind that speed is not the only measure of improvement in a program's code. Consider ease of testing, documentation, and maintenance just as important to any software project, especially for the long term. A language as flexible as Perl facilitates good coding in every stage of the software project. Although this book focuses on Perl, many of the principles are valid for other languages such as C, C++, Java, and Python.

Finally, be an innovator. Regardless of your position in the team -- manager or member -- always look for new ideas and put them into action. Perfection may be impossible, but it's a worthy goal. Innovators are the true strength of a team and without them the melody grows stale very quickly. Stay in touch with your peers; continually learn new things from them. A medium such as Usenet (see Resources) is a great place for an exchange of ideas. Teach and learn, to and from each other. Remember, there's always room for improvement. Above all, have fun, and let the music begin.

Resources

About the author

Teodor Zlatanov graduated with an M.S. in computer engineering from Boston University in 1999. He has worked as a programmer since 1992, using Perl, Java, C, and C++. His interests are in open source work on text parsing, 3-ti

C Elements of Style

C Elements of Style was published by M&T books in 1992. An abbreviated copy of the book is presented here.

This book covers only the C language and is a bit outdated. However, it still contains a lot of good advice.

Note: The HTML conversion is not perfect. If you want things nicely formatted, get the PDF version.

Table of Contents (each chapter is available in both HTML and PDF format)
Chapter 1: Style and Program Organization
Chapter 2: File Basics, Comments, and Program Headings
Chapter 3: Variable Names
Chapter 4: Statement Formatting
Chapter 5: Statement Details
Chapter 6: Preprocessor
Chapter 7: Directory Organization and Makefile Style
Chapter 8: User-Friendly Programming
Chapter 9: Rules


Recommended Links


Softpanorama Recommended

Top articles

[Jun 07, 2021] What is your tale of lasagna code? (Code with too many layers) Published on Jun 07, 2021 | dev.to

[Dec 01, 2019] Academic Conformism is the road to 1984. - Sic Semper Tyrannis Published on Dec 01, 2019 | turcopolier.typepad.com

[Oct 06, 2019] Weird Al Yankovic - Mission Statement Published on Oct 06, 2019 | www.youtube.com

[May 17, 2019] Shareholder Capitalism, the Military, and the Beginning of the End for Boeing Published on May 17, 2019 | www.nakedcapitalism.com

[Dec 27, 2018] The Yoda of Silicon Valley by Siobhan Roberts Published on Dec 17, 2018 | www.nytimes.com

[Oct 02, 2017] Tech's push to teach coding isn't about kids' success – it's about cutting wages by Ben Tarnoff Published on Oct 02, 2017 | www.theguardian.com

Sites

Brian Kernighan

Various C style recommendations and standards

General

C and C++ Style Guides by Christopher Lott

This page offers access to many style guides for code written in C and C++, as well as some discussion about the value and utility of such style guides. This collection includes two style guides that are based on work done at Bell Labs Indian Hill (the site in Naperville, Illinois where the 5ESS digital switch is developed).

The list includes HTML, PDF, postscript, and original versions whenever possible. If you have a working formatter (either LaTeX or troff), the original versions are almost certainly the best, and the PDF or postscript versions are probably preferable to the HTML version. But hey, this is the web, I put up the HTML versions to make browsing easy.

The documents are not listed in any particular order. All postscript and original version are gzip'd to save transmission time (for you) and disk space (for me).

Finally, if you would like a quick way to develop your own style guide for C, C++, or Java, Sven Rosvall offers a style-document generator. His page lets you make some choices about various constructs, and the generator builds a HTML document for you.
http://www.qualitygeneration.com/cgi-bin/genCodeStd.pl

Broken links:

Form Over Substance


Programming abilities

Table of contents for journal Cognitive Psychology, volume 40, issue 2, part 0 -- online papers

Cognitive Psychology

Psychological Online Documents - Cognitive and Experimental Psychology

AI, CogSci and Robotics Cognitive Science, Psychology, Linguistics -- good collection of links

Cognitive Psychology Research Group


Etc

Society

Groupthink : Two Party System as Polyarchy : Corruption of Regulators : Bureaucracies : Understanding Micromanagers and Control Freaks : Toxic Managers :   Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Neoliberalism  : The Iron Law of Oligarchy : Libertarian Philosophy

Quotes

War and Peace : Skeptical Finance : John Kenneth Galbraith : Talleyrand : Oscar Wilde : Otto Von Bismarck : Keynes : George Carlin : Skeptics : Propaganda : SE quotes : Language Design and Programming Quotes : Random IT-related quotes : Somerset Maugham : Marcus Aurelius : Kurt Vonnegut : Eric Hoffer : Winston Churchill : Napoleon Bonaparte : Ambrose Bierce : Bernard Shaw : Mark Twain Quotes

Bulletin:

Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks The efficient markets hypothesis : Political Skeptic Bulletin, 2013 : Unemployment Bulletin, 2010 :  Vol 23, No.10 (October, 2011) An observation about corporate security departments : Slightly Skeptical Euromaydan Chronicles, June 2014 : Greenspan legacy bulletin, 2008 : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Financial Humor Bulletin, 2010 : Inequality Bulletin, 2009 : Financial Humor Bulletin, 2008 : Copyleft Problems Bulletin, 2004 : Financial Humor Bulletin, 2011 : Energy Bulletin, 2010 : Malware Protection Bulletin, 2010 : Vol 26, No.1 (January, 2013) Object-Oriented Cult : Political Skeptic Bulletin, 2011 : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method  : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law

History:

Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds  : Larry Wall  : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOSProgramming Languages History : PL/1 : Simula 67 : C : History of GCC developmentScripting Languages : Perl history   : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

Classic books:

The Peter Principle : Parkinson Law : 1984 : The Mythical Man-MonthHow to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Hater’s Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

Most popular humor pages:

Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE: This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, and economic, scientific, and social issues. We believe this constitutes 'fair use' of such copyrighted material as provided for in section 107 of the US Copyright Law, under which such material can be distributed without profit, exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.

Disclaimer:

The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama Society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: June 07, 2021