
# Programming Languages Usage and Design Problems


As Donald Knuth noted (Don Knuth and the Art of Computer Programming The Interview):

I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in. They're half technology and half religion. And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg. -- Paul Graham, Beating the Averages

Libraries are more important than the language. -- Donald Knuth

### Introduction

A fruitful way to think about language development is to consider it to be a special type of theory building. Peter Naur suggested that programming in general is a theory-building activity in his 1985 paper "Programming as Theory Building". But the idea is especially applicable to compilers and interpreters. What Peter Naur failed to note was that the design of programming languages has religious overtones and sometimes represents an activity which is pretty close to the process of creating a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences are the high priests of the church of programming languages. Some, like Niklaus Wirth and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false) prophets :-).

On a deep conceptual level, building a new language is a human way of solving complex problems. That means that compiler construction is probably the most underappreciated paradigm of programming large systems, much more so than the greatly oversold object-oriented programming, whose benefits are greatly overstated. For users, programming languages distinctly have religious aspects, so decisions about what language to use are often far from rational and are mainly cultural. Indoctrination at the university plays a very important role: recently universities were instrumental in making Java the new Cobol.

The second important observation about programming languages is that the language per se is just a tiny part of what can be called the language programming environment. The latter includes libraries, IDEs, books, the level of adoption at universities, popular and important applications written in the language, the level of support, and the key players that back the language on major platforms such as Windows and Linux. A mediocre language with a good programming environment can give a run for the money to languages of superior design that come naked. This is the story behind the success of Java. A critical application is also very important, and this is the story of the success of PHP, which is nothing but a bastardized derivative of Perl (with most of the interesting Perl features removed ;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.

Progress in programming languages has been very uneven and has contained several setbacks. Currently this progress is mainly limited to the development of so-called scripting languages. The field of traditional high-level languages has been stagnant for decades.

At the same time there are some mysterious, unanswered questions about the factors that help a language to succeed or fail. Among them:

• Why do new programming languages repeat old mistakes? Is this because the complexity of languages is already too high, or because language designers are unable to learn from the "old masters"?
• Why, starting from approximately 1990, has progress in language design been almost absent, while the most popular languages created after 1990, such as Java and PHP, are at best mediocre and constitute a (huge) step back from the state of the art of language design?
• Why do fashionable languages (OO-based) gain momentum and support despite their (obvious) flaws?
• Why is the "worse is better" approach so successful? Why can less powerful and less elegant languages make it into the mainstream and stay there?
• How does the complexity of a language inhibit its wide usage? The story of PHP (a language inferior to almost any other scripting language developed after 1990) eliminating Perl as the CGI scripting language of choice is a pretty fascinating one. The success of Pascal (which is a bastardized version of Algol) is similar, but is related to the fact that it was used at universities as the first programming language. Now the same situation repeats with Java.

Those are difficult questions to answer without some way of classifying languages into different categories. Several such classifications exist. First of all, as with natural languages, the number of people who speak a given language is a tremendous force that can overcome any real or perceived deficiencies of the language. In programming languages, as in natural languages, nothing succeeds like success.

### Complexity Curse

The history of programming languages raises interesting general questions about the limit of complexity of programming languages. There is strong historical evidence that a language with a simpler, or even simplistic, core (Basic, Pascal) has better chances to acquire a high level of popularity. The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend on an intuitive level to avoid more complex, richer languages and prefer, say, Pascal to PL/1 and PHP to Perl. Or at least they avoid them at a particular phase of language development (C++ is not a simpler language than PL/1, but it was widely adopted because of the progress of hardware, the availability of compilers and, not least, because it was associated with OO exactly at the time OO became a mainstream fashion). Complex non-orthogonal languages can succeed only as a result of a long period of language development from a smaller core (development usually adds complexity -- just compare Fortran IV with Fortran 90, or PHP 3 with PHP 5). The banner of some fashionable new trend, extending an existing popular language to a new "paradigm", is also a possibility (OO programming in the case of C++, which is a superset of C).

Historically, few complex languages were successful (PL/1, Ada, Perl, C++), and even when they were, their success typically was temporary rather than permanent (PL/1, Ada, Perl). As Professor Wilkes noted (iee90):

Things move slowly in the computer language field but, over a sufficiently long period of time, it is possible to discern trends. In the 1970s, there was a vogue among system programmers for BCPL, a typeless language. This has now run its course, and system programmers appreciate some typing support. At the same time, they like a language with low level features that enable them to do things their way, rather than the compiler’s way, when they want to.

They continue to have a strong preference for a lean language. At present they tend to favor C in its various versions. For applications in which flexibility is important, Lisp may be said to have gained strength as a popular programming language.

Further progress is necessary in the direction of achieving modularity. No language has so far emerged which exploits objects in a fully satisfactory manner, although C++ goes a long way. ADA was progressive in this respect, but unfortunately it is in the process of collapsing under its own great weight.

ADA is an example of what can happen when an official attempt is made to orchestrate technical advances. After the experience with PL/1 and ALGOL 68, it should have been clear that the future did not lie with massively large languages.

I would direct the reader’s attention to Modula-3, a modest attempt to build on the appeal and success of Pascal and Modula-2 [12].

The complexity of the compiler/interpreter also matters, as it affects portability: this is one thing that probably doomed PL/1 (and later Ada), although these days a new language typically comes with an open source compiler (or, in the case of scripting languages, an interpreter), so this is less of a problem.

Here is an interesting take on language design from the preface to The D Programming Language book:

Programming language design seeks power in simplicity and, when successful, begets beauty.

Choosing the trade-offs among contradictory requirements is a difficult task that requires good taste from the language designer as much as mastery of theoretical principles and of practical implementation matters. Programming language design is software-engineering-complete.

D is a language that attempts to consistently do the right thing within the constraints it chose: system-level access to computing resources, high performance, and syntactic similarity with C-derived languages. In trying to do the right thing, D sometimes stays with tradition and does what other languages do, and other times it breaks tradition with a fresh, innovative solution. On occasion that meant revisiting the very constraints that D ostensibly embraced. For example, large program fragments or indeed entire programs can be written in a well-defined memory-safe subset of D, which entails giving away a small amount of system-level access for a large gain in program debuggability.

You may be interested in D if the following values are important to you:

• Performance. D is a systems programming language. It has a memory model that, although highly structured, is compatible with C’s and can call into and be called from C functions without any intervening translation.
• Expressiveness. D is not a small, minimalistic language, but it does have a high power-to-weight ratio. You can define eloquent, self-explanatory designs in D that model intricate realities accurately.
• “Torque.” Any backyard hot-rodder would tell you that power isn’t everything; its availability is. Some languages are most powerful for small programs, whereas other languages justify their syntactic overhead only past a certain size. D helps you get work done in short scripts and large programs alike, and it isn’t unusual for a large program to grow organically from a simple single-file script.
• Concurrency. D’s approach to concurrency is a definite departure from the languages it resembles, mirroring the departure of modern hardware designs from the architectures of yesteryear. D breaks away from the curse of implicit memory sharing (though it allows statically checked explicit sharing) and fosters mostly independent threads that communicate with one another via messages.
• Generic code. Generic code that manipulates other code has been pioneered by the powerful Lisp macros and continued by C++ templates, Java generics, and similar features in various other languages. D offers extremely powerful generic and generational mechanisms.
• Eclecticism. D recognizes that different programming paradigms are advantageous for different design challenges and fosters a highly integrated federation of styles instead of One True Approach.
• “These are my principles. If you don’t like them, I’ve got others.” D tries to observe solid principles of language design. At times, these run into considerations of implementation difficulty, usability difficulties, and above all human nature that doesn’t always find blind consistency sensible and intuitive. In such cases, all languages must make judgment calls that are ultimately subjective and are about balance, flexibility, and good taste more than anything else. In my opinion, at least, D compares very favorably with other languages that inevitably have had to make similar decisions.

### The role of fashion

At the initial, most difficult stage of language development, the language should solve an important problem that is inadequately solved by currently popular languages. But at the same time the language has few chances to succeed unless it fits perfectly into the current software fashion. This "fashion factor" is probably as important as several other factors combined, with the exclusion of the "language sponsor" factor.

As in women's dress, fashion rules in language design. And with time this trend has become more and more pronounced. A new language should represent the current fashionable trend. For example, OO programming has been a visiting card into the world of "big, successful languages" since probably the early 1990s (C++, Java, Python). Before that, "structured programming" and "verification" (Pascal, Modula) played a similar role.

### Programming environment and the role of "powerful sponsor" in language success

PL/1, Java, C#, Ada are languages that had powerful sponsors. Pascal, Basic, Forth are examples of the languages that had no such sponsor during the initial period of development.  C and C++ are somewhere in between.

But any language now needs a "programming environment", which consists of a set of libraries, a debugger and other tools (make tool, linker, pretty-printer, etc.). The set of "standard" libraries and the debugger are probably the two most important elements. They cost a lot of time (or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.

While this is not a necessary condition for becoming popular, it really helps: other things being equal, the weight of the sponsor of the language does matter. For example Java, being a weak, inconsistent language (C-- with garbage collection and OO), was pushed down programmers' throats on the strength of marketing and the huge amount of money spent on creating the Java programming environment. The same was partially true for C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting language now than, say, Perl (which is better integrated with Unix and has pretty innovative, for scripting languages, support of pointers and regular expressions), or Ruby (which has supported coroutines from day 1, not as a "bolted on" feature like in Python). As in political campaigns, negative advertising also matters. For example Perl suffered greatly from blackmail comparing programs in it with "white noise", and then from the withdrawal of O'Reilly from the role of sponsor of the language (although it continues to milk its Perl book publishing franchise ;-)

People have proved to be pretty gullible, and in this sense language marketing is not that different from the marketing of women's clothing :-)

### Language level and success

One very important classification of programming languages is based on the so-called level of the language. Essentially, once there is at least one successful language at a given level, the success of other languages at the same level becomes more problematic. Better chances of success belong to languages that have an even slightly higher level than their successful predecessors.

The level of a language can informally be described as the number of statements (or, more correctly, the number of lexical units (tokens)) needed to write a solution of a particular problem in one language versus another. This way we can distinguish several levels of programming languages:

• Lowest level. This level is occupied by assemblers and languages designed for specific instruction sets, like PL\360.
• Low level with access to low-level architecture features (C, BCPL). These are also called system programming languages and are, in essence, high-level assemblers. In those languages you need to specify details related to the machine organization (the computer instruction set); memory is allocated explicitly.
• High level without automatic memory allocation for variables and garbage collection (Fortran and Algol-style languages like Modula, Pascal, PL/1, C++, VB). Most languages in this category are compiled.
• High level with automatic memory allocation for variables and garbage collection. Languages of this category (Java, C#) typically are compiled not to the native instruction set of the computer they run on, but to some abstract instruction set called a virtual machine.
• Very high level languages (scripting languages, as well as Icon, SETL, and awk). Most are impossible to compile, as dynamic features prevent generation of code at compile time. They also typically use a virtual machine and garbage collection.
• OS shells. These are often called "glue" languages, as they provide integration of existing OS utilities; they currently represent the highest level of languages available. This category is mainly represented by Unix shells such as bash and ksh93, but Windows PowerShell belongs to the same category. They typically use a virtual machine and intermediate code, like scripting languages. They presuppose a specific OS as a programming environment and as such are less portable than other categories.
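As an informal illustration of "level", here is a minimal sketch in Python (a very high level language by this classification). The task -- counting the lines of a file -- takes a couple of dozen lines of explicit buffer and resource management in C, but only a handful of tokens here; the file name and its contents are made up for the example.

```python
import os
import tempfile

# Prepare a small throwaway file for the demonstration.
path = os.path.join(tempfile.gettempdir(), "levels_demo.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")

# Counting lines: in C this needs explicit FILE* handling, a read loop,
# and manual cleanup; here it is essentially one expression.
with open(path) as f:
    line_count = len(f.readlines())

print(line_count)  # 3
os.remove(path)
```

Token counts of this kind are only a rough proxy, but they make the gap between adjacent levels concrete.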

### "Nanny languages" vs "Sharp razor" languages

Some people distinguish between "nanny languages" and "sharp razor" languages. The latter do not attempt to protect the user from his errors, while the former usually go too far... The right compromise is extremely difficult to find.

For example, I consider the explicit availability of pointers an important feature that greatly increases the expressive power of a language and far outweighs the risk of errors in the hands of unskilled practitioners. In other words, attempts to make a language "safer" often misfire.

### Expressive style of the languages

Another useful typology is based on the expressive style of the language:

• Procedural. The programming style you're probably used to, procedural languages execute a sequence of statements that lead to a result. In essence, a procedural language expresses the procedure to be followed to solve a problem. Procedural languages typically use many variables and have heavy use of loops and other elements of "state", which distinguishes them from functional programming languages. Functions in procedural languages may modify variables or have other side effects (e.g., printing out information) other than the value that the function returns.
• Functional. Employing a programming style often contrasted with procedural programming, functional programs typically make little use of stored state, often eschewing loops in favor of recursive functions. The most popular and most successful functional notation (most functional languages are failures, despite the interesting features they contain) is probably regular expression notation. Another very successful non-procedural notation is the Unix pipe. All in all, functional languages have a lot of problems, and none of them managed to get into the mainstream. All the talk about the superiority of Lisp remained just talk, as Lisp limits the expressive power of the programmer by overloading the boat on one side.
• Object-oriented. This is a popular subclass of procedural languages with better handling of namespaces (hierarchical structuring of the namespace that resembles the Unix file system) and a couple of other conveniences in defining multiple-entry functions (class methods in OO-speak). Classes, strictly speaking, are an evolution of the records introduced by Simula. The main difference from Cobol- and PL/1-style records is that classes have executable components (pointers to functions) and are hierarchically organized, with subclasses being lower-level sub-records whose name space is still accessible from the higher-level class. Purely hierarchically organized structures were introduced in Cobol. Later PL/1 extended and refined them, introducing name-space copying (the LIKE attribute), pointer-based records (BASED records), etc. C, being mostly a subset of PL/1, also used some of those refinements, but in a very limited way. In a way a PL/1 record is a non-inherited class without any methods. Some languages like Perl 5 implement a "nuts and bolts" approach to the introduction of OO constructs, exposing the kitchen. As such those implementations are highly educational for students, as they can see how the "object-oriented" kitchen operates. For example, the class instance in Perl 5 is implemented as a hidden first parameter that is passed with each method call behind the scenes.
• Scripting languages are typically procedural but may contain non-procedural elements (regular expressions) as well as elements of object-oriented languages (Python, Ruby). Some of them support coroutines. They fall into their own category because they are higher-level languages than compiled languages or languages with an abstract machine and garbage collection (Java). Scripting languages usually implement automatic garbage collection. Variable types in scripting languages are typically dynamic; declarations of variables are not strictly needed (but can be used), and there is usually no compile-time checking of type compatibility of operands in classic operations. Some, like Perl, try to convert a variable into the type required by a particular operation (for example, a string into a numeric constant if the "+" operation is used). Possible errors are swept under the carpet: uninitialized variables are typically treated as having the value zero in numeric operations and the null string in string operations, and if an operation can't be performed it returns zero, nil or some other special value. Some scripting languages have a special value UNDEF, which gives the possibility of determining whether a particular variable was assigned any value before using it in an expression.
• Logic. Logic programming languages allow programmers to make declarative statements (possibly in first-order logic: "grass implies green", for example). The most successful was probably Prolog. In a way this is another type of functional language, and Prolog is a kind of regular expressions on steroids. The success of this type of language was, and is, very limited.
Those categories are not pure and somewhat overlap. For example, it's possible to program in an object-oriented style in C, or even in assembler. Some scripting languages like Perl have built-in regular expression engines that are part of the language, so they have a functional component despite being procedural. Some relatively low-level (Algol-style) languages implement garbage collection; a good example is Java. There are also scripting languages that compile into a common language framework designed for high-level languages: for example, IronPython compiles into .NET.
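The "hidden first parameter" mechanism described above for Perl 5 can be sketched in Python, which exposes the same machinery openly: a method is just a function whose first parameter receives the object, and the dotted call is sugar for passing it explicitly. The `Stack` class here is a made-up toy, not taken from any library.

```python
class Stack:
    """Toy class showing how the object travels as the first argument."""
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

s = Stack()
s.push(42)         # the usual sugared method call
Stack.push(s, 99)  # the same call with the object passed explicitly,
                   # which is roughly what Perl 5 does behind the scenes
print(s.items)     # [42, 99]
```

Seeing both call forms side by side makes it clear why the text calls such implementations "highly educational": the OO kitchen is just procedure calls with an extra argument.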

### Weak correlation between quality of design and popularity

The popularity of programming languages is not strongly connected to their quality. Some languages that look like a collection of language designer blunders (PHP, Java) became quite popular. Java essentially became the new Cobol, and PHP dominates dynamic Web site construction. The dominant technology for such Web sites is often called LAMP, which means Linux - Apache - MySQL - PHP. Being a highly simplified but badly constructed subset of Perl, a kind of new Basic for dynamic Web site construction, PHP provides a most depressing experience. I was unpleasantly surprised when I learnt that the Wikipedia engine was rewritten from Perl to PHP some time ago, but this illustrates the trend well.

So language design quality has little to do with a language's success in the marketplace. Simpler languages have wider appeal, as the success of PHP (which at the beginning came at the expense of Perl) suggests. In addition, much depends on whether the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well as Python (Google).

Progress in programming languages has been very uneven and has contained several setbacks, like Java. Currently this progress is usually associated with scripting languages. The history of programming languages raises interesting general questions about the "laws" of programming language design. First let's reproduce several notable quotes:

1. Knuth's law of optimization: "Premature optimization is the root of all evil (or at least most of it) in programming." - Donald Knuth
2. "Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp." - Phil Greenspun
3. "The key to performance is elegance, not battalions of special cases." - Jon Bentley and Doug McIlroy
4. "Some may say Ruby is a bad rip-off of Lisp or Smalltalk, and I admit that. But it is nicer to ordinary people." - Matz, LL2
5. Most papers in computer science describe how their author learned what someone else already knew. - Peter Landin
6. "The only way to learn a new programming language is by writing programs in it." - Kernighan and Ritchie
7. "If I had a nickel for every time I've written "for (i = 0; i < N; i++)" in C, I'd be a millionaire." - Mike Vanier
8. "Language designers are not intellectuals. They're not as interested in thinking as you might hope. They just want to get a language done and start using it." - Dave Moon
9. "Don't worry about what anybody else is going to do. The best way to predict the future is to invent it." - Alan Kay
10. "Programs must be written for people to read, and only incidentally for machines to execute." - Abelson & Sussman, SICP, preface to the first edition

Please note that it is one thing to read a language manual and appreciate how good the concepts are, and quite another to bet your project on a new, unproven language without good debuggers, manuals and, what is very important, libraries. The debugger is very important, but standard libraries are crucial: they represent the factor that makes or breaks new languages.

In this sense languages are much like cars. For many people a car is the thing they use to get to work and to the shopping mall; they are not very interested in whether the engine is inline or V-type, or whether the transmission uses fuzzy logic. What they care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense "worse is better" is very true. I already mentioned the importance of the debugger. The other important criterion is the quality and availability of libraries. Actually libraries make up 80% of the usability of a language; moreover, in a sense libraries are more important than the language...

The popular belief that scripting is an "unsafe" or "second rate" or "prototype" solution is completely wrong. If a project has died, then it does not matter what the implementation language was; for any successful project with tough schedules a scripting language (especially in a dual scripting-language-plus-C combination, for example TCL+C) is an optimal blend for a large class of tasks. Such an approach helps to separate architectural decisions from implementation details much better than any OO model does.
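The dual-language idea can be sketched with Python's ctypes module, which plays the same "glue" role the text describes for TCL+C: the scripting side drives the logic while a C library does the low-level work. This is a minimal sketch assuming a Unix-like system where the platform C library can be loaded; `strlen` is used only because it is universally available.

```python
import ctypes
import ctypes.util

# Load the platform C library (assumes a Unix-like system; on Linux,
# CDLL(None) falls back to the symbols of the running process).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C prototype: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The scripting side stays high level; the C side does the byte counting.
n = libc.strlen(b"dual-language glue")
print(n)  # 18
```

In a real project the C side would be a compiled hot loop or an existing library, and the scripting side would handle configuration, I/O, and architecture -- exactly the separation of concerns the paragraph above argues for.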

Moreover, even for tasks that handle a fair amount of computation and data (computationally intensive tasks), such languages as Python and Perl are often (but not always!) competitive with C++, C# and, especially, Java.



### Programming Language Development Timeline

Here is the timeline of programming languages, modified from BYTE (for the original see BYTE.com, September 1995 / 20th Anniversary):

### Forties

ca. 1946

• Konrad Zuse, a German engineer working alone while hiding out in the Bavarian Alps, develops Plankalkül. He applies the language to, among other things, chess.

1949

• Short Code, the first computer language actually used on an electronic computing device, appears. It is, however, a "hand-compiled" language.

### Fifties

1951

• Grace Hopper, working for Remington Rand, begins design work on the first widely known compiler, named A-0. When the language is released by Rand in 1957, it is called MATH-MATIC.

1952

• Alick E. Glennie, in his spare time at the University of Manchester, devises a programming system called AUTOCODE, a rudimentary compiler.

1957

• FORTRAN --mathematical FORmula TRANslating system--appears. Heading the team is John Backus, who goes on to contribute to the development of ALGOL and the well-known syntax-specification system known as BNF.

1958

• FORTRAN II appears, able to handle subroutines and links to assembly language.
• LISP. John McCarthy at M.I.T. begins work on LISP--LISt Processing.
• Algol-58. The original specification for ALGOL appears. The specification does not describe how data will be input or output; that is left to the individual implementations.

1959

• LISP 1.5 appears.
• COBOL is created by the Conference on Data Systems and Languages (CODASYL).

### Sixties

1960

• ALGOL 60, the first block-structured language, appears. This is the root of the family tree that will ultimately produce the likes of Pascal. ALGOL goes on to become the most popular language in Europe in the mid- to late 1960s. Compilers for the language were quite difficult to write, and that hampered its widespread use. FORTRAN managed to hold its own in the area of numeric computations, and Cobol in data processing. Only PL/1 (which was released in 1964) managed to advance the ideas of Algol 60 to a reasonably wide audience.
• APL Sometime in the early 1960s , Kenneth Iverson begins work on the language that will become APL--A Programming Language. It uses a specialized character set that, for proper use, requires APL-compatible I/O devices.
• Discovery of the context-free language formalism. The 1960s also saw the rise of automata theory and the theory of formal languages. Noam Chomsky introduced the notion of context-free languages and later became well known for his theory that language is "hard-wired" in human brains, and for his criticism of American foreign policy.

1962

• Snobol was designed in 1962 in Bell Labs by R. E. Griswold and I. Polonsky. Work begins on the sure-fire winner of the "clever acronym" award, SNOBOL--StriNg-Oriented symBOlic Language. It will spawn other clever acronyms: FASBOL, a SNOBOL compiler (in 1971), and SPITBOL--SPeedy ImplemenTation of snoBOL--also in 1971.
• APL is documented in Iverson's book, A Programming Language .
• FORTRAN IV appears.

1963

• ALGOL 60 is revised.
• PL/1. Work begins on PL/1.

1964

• System/360, announced in April of 1964,
• PL/1 is released with a high-quality compiler (the F-compiler), which beat most compilers of the time in the quality of both its compile-time and run-time diagnostics. Later two brilliantly written and in some respects unsurpassed compilers were added: the debugging and optimizing PL/1 compilers. Both represented the state of the art of compiler writing. Cornell University implemented a subset of PL/1 for teaching, called PL/C, with a compiler that had probably the most advanced error detection and correction capabilities of any batch compiler of all time. PL/1 was also adopted as the system implementation language for Multics.
• APL\360 is implemented.
• BASIC. At Dartmouth College , professors John G. Kemeny and Thomas E. Kurtz invent BASIC. The first implementation was on a timesharing system. The first BASIC program runs at about 4:00 a.m. on May 1, 1964.

1965

• SNOBOL3 appears.

1966

• FORTRAN 66 appears.
• LISP 2 appears.
• Work begins on LOGO at Bolt, Beranek, & Newman. The team is headed by Wally Fuerzeig and includes Seymour Papert. LOGO is best known for its "turtle graphics."

1967

• SNOBOL4 , a much-enhanced SNOBOL, appears.

1968

• The first volume of The Art of Computer Programming was published in 1968 and instantly became a classic. Donald Knuth (b. 1938) later published two additional volumes of his world-famous three-volume treatise.
• Structured programming movement started -- the start of the first religious cult in programming language design. It was created by Edsger Dijkstra, who published his infamous "Go to statement considered harmful" (CACM 11(3), March 1968, pp 147-148). While misguided, this cult somewhat contributed to the design of control structures in programming languages, serving as a kind of stimulus for the creation of a richer set of control structures in new programming languages (with PL/1 and its derivative C probably the two most popular programming languages that incorporated these new tendencies). Later it degenerated into a completely fundamentalist and mostly counterproductive verification cult.
• ALGOL 68 , the successor of ALGOL 60, appears. It was the first extensible language that got some traction, but generally it was a flop. Some members of the specification committee -- including C.A.R. Hoare and Niklaus Wirth -- protested its approval on the basis of its overcomplexity. They proved to be partially right: ALGOL 68 compilers proved to be difficult to implement, and that doomed the language. Dissatisfied with the complexity of Algol-68, Niklaus Wirth began his work on a simple teaching language which later became Pascal.
• ALTRAN , a FORTRAN variant, appears.
• COBOL is officially defined by ANSI.
• Niklaus Wirth begins work on the design of Pascal (in part as a reaction to the overcomplexity of Algol 68). Like Basic before it, Pascal was specifically designed for teaching programming at universities, and as such was designed to allow a one-pass recursive descent compiler. But the language had multiple grave deficiencies. While a talented language designer, Wirth went overboard in simplifying the language (for example, in the initial version of the language loops were allowed an increment of one only, arrays were only static, etc.). Pascal was also used to promote the bizarre idea of correctness proofs of programs, inspired by the verification movement with its high priest Edsger Dijkstra -- the first (or maybe the second, after structured programming) mass religious cult in programming language history, one that destroyed the careers of several talented computer scientists who joined it, such as David Gries. Some of the blunders in Pascal's design were later corrected in Modula and Modula-2.

1969

• 500 people attend an APL conference at IBM's headquarters in Armonk, New York. The demands for APL's distribution are so great that the event is later referred to as "The March on Armonk."

### Seventies

1970

• Forth. Sometime in the early 1970s , Charles Moore writes the first significant programs in his new language, Forth.
• Prolog. Work on Prolog begins about this time. For some time Prolog became fashionable due to Japan's Fifth Generation initiative. Later it returned to relative obscurity, although it did not completely disappear from the language map.

• Also sometime in the early 1970s , work on Smalltalk begins at Xerox PARC, led by Alan Kay. Early versions will include Smalltalk-72, Smalltalk-74, and Smalltalk-76.
• An implementation of Pascal appears on a CDC 6000-series computer.
• Icon , a descendant of SNOBOL4, appears.

1972

• The manuscript for Konrad Zuse's Plankalkul (see 1946) is finally published.
• Dennis Ritchie produces C. The definitive reference manual for it will not appear until 1974.
• PL/M. In 1972 Gary Kildall implemented a subset of PL/1, called "PL/M" for microprocessors. PL/M was used to write the CP/M operating system  - and much application software running on CP/M and MP/M. Digital Research also sold a PL/I compiler for the PC written in PL/M. PL/M was used to write much other software at Intel for the 8080, 8085, and Z-80 processors during the 1970s.
• The first implementation of Prolog appears, by Alain Colmerauer and Philippe Roussel.

1974

• Donald E. Knuth published the article that dealt a decisive blow to the "structured programming fundamentalists" led by Edsger Dijkstra: Structured Programming with go to Statements. ACM Comput. Surv. 6(4): 261-301 (1974)
• Another ANSI specification for COBOL appears.

1975

• Paul Abrahams (Courant Institute of Mathematical Sciences) destroyed the credibility of the "structured programming" cult in his article "'Structured programming' considered harmful" (SIGPLAN Notices, 1975, April, pp 13-24)
• Tiny BASIC by Bob Albrecht and Dennis Allison (implementation by Dick Whipple and John Arnold) runs on a microcomputer in 2 KB of RAM. It is usable on a 4-KB machine, which leaves 2 KB available for the program.
• Microsoft was formed on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. Bill Gates and Paul Allen write a version of BASIC that they sell to MITS (Micro Instrumentation and Telemetry Systems) on a per-copy royalty basis. MITS is producing the Altair, one of the earliest 8080-based microcomputers, which came with an interpreter for a programming language.
• Scheme , a LISP dialect by G.L. Steele and G.J. Sussman, appears.
• Pascal User Manual and Report , by Jensen and Wirth, is published. Still considered by many to be the definitive reference on Pascal. This was in a way an attempt to replicate the success of Basic, relying on the growing "structured programming" fundamentalist movement started by Edsger Dijkstra. Pascal acquired a large following in universities as the compiler was made freely available. It was adequate for teaching, had a fast compiler, and was superior to Basic.
• B.W. Kernighan describes RATFOR--RATional FORTRAN. It is a preprocessor that allows C-like control structures in FORTRAN. RATFOR is used in Kernighan and Plauger's "Software Tools," which appears in 1976.

1976

• A backlash against the Dijkstra correctness-proofs pseudo-religious cult starts:

• Andrew Tanenbaum (Vrije Universiteit, Amsterdam) published the paper In Defense of Program Testing, or Correctness Proofs Considered Harmful (SIGPLAN Notices, May 1976, pp 64-68). It made a crucial contribution to the "structured programming without GOTO" debate and was a decisive blow to the structured programming fundamentalists led by E. Dijkstra;
• Maurice Wilkes, the famous computer scientist and first president of the British Computer Society (1957-1960), attacked the "verification cult" in his article Software Engineering and Structured Programming, published in IEEE Transactions on Software Engineering (SE-2, No. 4, December 1976, pp 274-276). The paper was also presented as a keynote address at the Second International Conference on Software Engineering, San Francisco, CA, October 1976.
• Design System Language , considered to be a forerunner of PostScript, appears.

1977

• AWK was probably the second (after Snobol) string-processing language to make extensive use of regular expressions. The first version was created at Bell Labs by Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan in 1977. It was also one of the first widely used scripting languages with built-in automatic memory management.
• The ANSI standard for MUMPS -- Massachusetts General Hospital Utility Multi-Programming System -- appears. Used originally to handle medical records, MUMPS recognizes only a string data-type. Later renamed M.
• The design competition that will produce Ada begins. Honeywell Bull's team, led by Jean Ichbiah, will win the competition. Ada never lived up to its promises and became an expensive flop.
• Kim Harris and others set up FIG, the FORTH interest group. They develop FIG-FORTH, which they sell for around $20.
• UCSD Pascal. In the late 1970s , Kenneth Bowles produces UCSD Pascal, which makes Pascal available on PDP-11 and Z80-based computers.
• Niklaus Wirth begins work on Modula, forerunner of Modula-2 and successor to Pascal. It was the first widely used language to incorporate the concept of coroutines.

1978

• AWK -- a text-processing language named after the designers, Aho, Weinberger, and Kernighan -- appears.
• FORTRAN 77: The ANSI standard for FORTRAN 77 appears.

1979

• Bourne shell. The Bourne shell was included in Unix Version 7. It was inferior to the C shell, developed in parallel, but gained tremendous popularity on the strength of AT&T's ownership of Unix.
• C shell. The Second Berkeley Software Distribution (2BSD) was released in May 1979. It included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex) and the C shell.
• REXX was designed and first implemented between 1979 and mid-1982 by Mike Cowlishaw of IBM.

### Eighties

1980

• Smalltalk-80 appears.
• Modula-2 appears.
• Franz LISP appears.
• Bjarne Stroustrup develops a set of languages -- collectively referred to as "C With Classes" -- that serve as the breeding ground for C++.

1981

• The C shell was extended into tcsh.
• Effort begins on a common dialect of LISP, referred to as Common LISP.
• Japan begins the Fifth Generation Computer System project. The primary language is Prolog.

1982

• ISO Pascal appears.
• In 1982 one of the first scripting languages, REXX, was released by IBM as a product, four years after AWK was released. Over the years IBM included REXX in almost all of its operating systems (VM/CMS, VM/GCS, MVS TSO/E, AS/400, VSE/ESA, AIX, CICS/ESA, PC DOS, and OS/2), and has made versions available for Novell NetWare, Windows, Java, and Linux.
• PostScript appears.
It revolutionized printing on dot-matrix and laser printers.

1983

• REXX was included in the third release of IBM's VM/CMS, shipped in 1983.
• The Korn shell (ksh) was released in 1983.
• Smalltalk-80: The Language and Its Implementation by Goldberg et al. is published. An influential early book that promoted the ideas of OO programming.
• Ada appears . Its name comes from Lady Augusta Ada Byron, Countess of Lovelace and daughter of the English poet Byron. She has been called the first computer programmer because of her work on Charles Babbage's analytical engine. In 1983, the Department of Defense directs that all new "mission-critical" applications be written in Ada.
• In late 1983 and early 1984, Microsoft and Digital Research both release the first C compilers for microcomputers.
• In July , the first implementation of C++ appears. The name was coined by Rick Mascitti.
• In November , Borland's Turbo Pascal hits the scene like a nuclear blast, thanks to an advertisement in BYTE magazine.

1984

• GCC development started. In 1984 Stallman started his work on an open source C compiler that became widely known as gcc. The same year Steven Levy's "Hackers" book was published, with a chapter devoted to RMS that presented him in an extremely favorable light.
• Icon. R.E. Griswold designed the Icon programming language (see overview). Like Perl, Icon is a high-level programming language with a large repertoire of features for processing data structures and character strings. Icon is an imperative, procedural language with a syntax reminiscent of C and Pascal, but with semantics at a much higher level (see Griswold, Ralph E. and Madge T. Griswold.
The Icon Programming Language, Second Edition, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1990, ISBN 0-13-447889-4).
• APL2. A reference manual for APL2 appears. APL2 is an extension of APL that permits nested arrays.

1985

• REXX. The first PC implementation of REXX was released.
• Forth controls the submersible sled that locates the wreck of the Titanic.
• Vanilla SNOBOL4 for microcomputers is released.
• Methods, a line-oriented Smalltalk for PCs, is introduced.
• The first version of GCC able to compile itself appeared in late 1985. The same year the GNU Manifesto was published.

1986

• Smalltalk/V appears--the first widely available version of Smalltalk for microcomputers.
• Apple releases Object Pascal for the Mac.
• Borland releases Turbo Prolog.
• Charles Duff releases Actor, an object-oriented language for developing Microsoft Windows applications.
• Eiffel , another object-oriented language, appears.
• C++ appears.

1987

• PERL. The first version of Perl, Perl 1.000, was released by Larry Wall in 1987. See an excellent PerlTimeline for more information.
• Turbo Pascal version 4.0 is released.

1988

• The specification for CLOS -- Common LISP Object System -- is published.
• Oberon. Niklaus Wirth finishes Oberon, his follow-up to Modula-2. The language was stillborn, but some of its ideas found their way into Python.
• PERL 2 was released.
• TCL was created. The Tcl scripting language grew out of the work of John Ousterhout on creating design tools for integrated circuits at the University of California at Berkeley in the early 1980s. In the fall of 1987, while on sabbatical at DEC's Western Research Laboratory, he decided to build an embeddable command language. He started work on Tcl in early 1988, and began using the first version of Tcl in a graphical text editor in the spring of 1988. The idea of TCL is different and to a certain extent more interesting than the idea of Perl -- TCL was designed as an embeddable macro language for applications.
In this sense TCL is closer to REXX (which was probably one of the first languages used both as a shell language and as a macro language). Important products that use Tcl are the TK toolkit and Expect.

1989

• The ANSI C specification is published.
• C++ 2.0 arrives in the form of a draft reference manual. The 2.0 version adds features such as multiple inheritance and pointers to members.
• Perl 3.0, released in 1989, was distributed under the GNU General Public License -- one of the first major open source projects distributed under the GNU license and probably the first outside the FSF.

### Nineties

1990

• zsh. Paul Falstad wrote zsh, a superset of ksh88 which also had many csh features.
• C++ 2.1 , detailed in Annotated C++ Reference Manual by B. Stroustrup et al., is published. This adds templates and exception-handling features.
• FORTRAN 90 includes such new elements as case statements and derived types.
• Kenneth Iverson and Roger Hui present J at the APL90 conference.

1991

• Visual Basic wins BYTE's Best of Show award at Spring COMDEX.
• PERL 4 released. In January 1991 the first edition of Programming Perl, a.k.a. The Pink Camel, by Larry Wall and Randal Schwartz is published by O'Reilly and Associates. It described a new, 4.0 version of Perl. Simultaneously Perl 4.0 was released (in March of the same year). The final version of Perl 4 was released in 1993. Larry Wall is awarded the Dr. Dobbs Journal Excellence in Programming Award. (March)

1992

• Dylan -- named for Dylan Thomas -- an object-oriented language resembling Scheme, is released by Apple.

1993

• ksh93 was released by David Korn. It was the last in the line of AT&T-developed shells.
• ANSI releases the X3J4.1 technical report -- the first-draft proposal for (gulp) object-oriented COBOL. The standard is expected to be finalized in 1997.
• PERL 4. Version 4 was the first widely used version of Perl. The timing was simply perfect: it was already widely available before the WEB explosion in 1994.

1994

• PERL 5.
Version 5 was released at the end of 1994.
• Microsoft incorporates Visual Basic for Applications into Excel.

1995

• In February , ISO accepts the 1995 revision of the Ada language. Called Ada 95, it includes OOP features and support for real-time systems.
• RUBY. December: first release, 0.95.

1996

• The first ANSI C++ standard.
• Ruby 1.0 released. It did not gain much popularity until later.

1997

• Java. In 1997 Java was released. Sun launches a tremendous and widely successful campaign to replace Cobol with Java as a standard language for writing commercial applications for the industry.

2011

• Dennis Ritchie, the creator of C, dies. He was only 70 at the time.

There are several interesting "language-induced" errors -- errors that a particular programming language facilitates rather than helps to avoid. They are most studied for C-style languages. Funny, but PL/1 (from which C was derived) was a better-designed language than the much simpler C in several of those categories.

### Avoiding C-style languages design blunder of "easy" mistyping "=" instead of "=="

One of the most famous C design blunders was the too-small lexical difference between assignment and comparison (remember that Algol used := for assignment), caused by the design decision to make the language more compact (terminals at that time were not very reliable, and the number of symbols typed mattered greatly). In C, assignment is allowed in an if statement, but no attempt was made to make the language more failsafe by avoiding the possibility of mixing up "=" and "==". In C syntax the statement

    if (alpha = beta) ...

assigns the contents of the variable beta to the variable alpha and executes the code in the then branch if beta <> 0. It is easy to mix things up and write if (alpha = beta) instead of if (alpha == beta), which is a pretty nasty, and remarkably consistent, C-induced bug. In case you are comparing a constant to a variable, you can often reverse the sequence and put the constant first, as in if ( 1==i ), since if ( 1=i )
does not make any sense. In this case such a blunder will be detected at the syntax level.

### Dealing with unbalanced "{" and "}" problem in C-style languages

Another nasty problem with C, C++, Java, Perl and other C-style languages is that missing curly brackets are pretty difficult to find. They can also be inserted incorrectly, producing an even more nasty logical error. One effective solution, first implemented in PL/1, was based on calculating the level of nesting (shown in the compiler listing) together with the ability to close multiple blocks in a single end statement (PL/1 did not use the brackets {}; they were introduced in C). In C one can use pseudo-comments that signify nesting level zero and check those points with a special program or an editor macro. Many editors have the ability to jump from any given opening bracket to its closing bracket and vice versa. This is also a useful, but less efficient, way to solve the problem.

### Problem of unclosed literal

Specifying the maximum length of literals is an effective way of catching a missing quote. This idea was first implemented in the debugging PL/1 compilers. You can also have an option to limit a literal to a single line. In general, multi-line literals should have distinct lexical markers (like the "here document" construct in shell). Some languages like Perl provide the opportunity to use the concatenation operator to split literals across multiple lines, which are "merged" at compile time. But if there is no limit on the number of lines a string literal can occupy, a bug can slip in: an unmatched quote gets closed by another unmatched quote in a nearby literal, "commenting out" some part of the code. So this does not help much. A limit on the length of the literal can be communicated via a pragma statement for a particular fragment of the text. This is an effective way to avoid the problem: usually only a few places in a program use multiline literals, if any.
Editors that use syntax coloring help to detect the unclosed-literal problem, but there are cases when they are useless.

### Commenting out blocks of code

This is best done not with comments, but with a preprocessor, if the language has one (PL/1, C, etc.).

### The "dangling else" problem

Having both an if-else and an if statement leads to some possibility of confusion when one of the clauses of a selection statement is itself a selection statement. For example, the C++ code

    if (level >= good)
        if (level == excellent)
            cout << "excellent" << endl;
        else
            cout << "bad" << endl;

is intended to process a three-state situation in which something can be bad, good or (as a special case of good) excellent; it is supposed to print an appropriate description for the excellent and bad cases, and print nothing for the good case. The indentation of the code reflects these expectations. Unfortunately, the code does not do this. Instead, it prints excellent for the excellent case, bad for the good case, and nothing for the bad case. The problem is deciding which if matches the else in this expression. The basic rule is: an else matches the nearest previous unmatched if. There are two ways to avoid the dangling else problem:

• reverse the logic of the outer branch, so that the else is nested inside another else instead of an unmatched if:

    if (bad)
        cout << "bad" << endl;
    else if (excellent)
        cout << "excellent" << endl;

• use brackets around the if clause so that the inner if is terminated by the end of the enclosing bracket:

    if (good) {
        if (excellent)
            cout << "excellent" << endl;
    } else
        cout << "bad" << endl;

In fact, you can avoid the dangling else problem completely by always using brackets around the clauses of an if or if-else statement, even if they only enclose a single statement.
So a good strategy for the notation of if-else statements:

Always use { brace brackets } around the clauses of an if-else or if statement.

(This strategy also helps if you need to cut-and-paste more code into one of the clauses: if a clause consists of only one statement, without enclosing brace brackets, and you add another statement to it, then you also need to add the brace brackets. Having the brace brackets there already makes the job easier.)

## NEWS CONTENTS

## Old News ;-)

#### [May 17, 2019] Shareholder Capitalism, the Military, and the Beginning of the End for Boeing

##### Highly recommended!
##### Notable quotes:
##### "... Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve). ..."
##### "... The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally. ..."
##### "... "Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. ..."
##### "...
If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can. ..."
##### "... It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism ..."
##### "... When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. ..."

###### May 17, 2019 | www.nakedcapitalism.com

The fall of the Berlin Wall and the corresponding end of the Soviet Empire gave the fullest impetus imaginable to the forces of globalized capitalism, and correspondingly unfettered access to the world's cheapest labor. What was not to like about that? It afforded multinational corporations vastly expanded opportunities to fatten their profit margins and increase the bottom line with seemingly no risk posed to their business model. Or so it appeared.

In 2000, aerospace engineer L.J. Hart-Smith's remarkable paper, sardonically titled "Out-Sourced Profits – The Cornerstone of Successful Subcontracting," laid out the case against several business practices of Hart-Smith's previous employer, McDonnell Douglas, which had incautiously ridden the wave of outsourcing when it merged with the author's new employer, Boeing.
Hart-Smith's intention in telling his story was a cautionary one for the newly combined Boeing, lest it follow its then recent acquisition down the same disastrous path.

Of the manifold points and issues identified by Hart-Smith, there is one that stands out as the most compelling in terms of understanding the current crisis enveloping Boeing: the embrace of the metric "Return on Net Assets" (RONA). When combined with the relentless pursuit of cost reduction (via offshoring), RONA taken to the extreme can undermine overall safety standards.

Related to this problem is the intentional and unnecessary use of complexity as an instrument of propaganda. Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve).

All of these pernicious concepts are branches of the same poisoned tree: "shareholder capitalism":

[A] notion best epitomized by Milton Friedman that the only social responsibility of a corporation is to increase its profits, laying the groundwork for the idea that shareholders, being the owners and the main risk-bearing participants, ought therefore to receive the biggest rewards. Profits therefore should be generated first and foremost with a view toward maximizing the interests of shareholders, not the executives or managers who (according to the theory) were spending too much of their time, and the shareholders' money, worrying about employees, customers, and the community at large.
The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally.

"Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. In essence, it means maximizing the returns of those dollars deployed in the operation of the business. Applied to a corporation, it comes down to this: If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can.

It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism. Engineering reality, however, is far more complicated than what is outlined in university MBA textbooks.

For corporations like McDonnell Douglas, for example, RONA was used not as a way to prioritize new investment in the corporation but rather to justify disinvestment in the corporation.
This disinvestment ultimately degraded the company's underlying profitability and the quality of its planes (which is one of the reasons the Pentagon helped to broker the merger with Boeing; in another perverse echo of the 2008 financial disaster, it was a politically engineered bailout).

RONA in Practice

When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. Productivity is diminished, even as labor-saving technologies are introduced. Precision machinery is sold off and replaced by inferior, but cheaper, machines. Engineering quality deteriorates. And the upshot is that a reliable plane like Boeing's 737, which had been a tried and true money-spinner with an impressive safety record since 1967, becomes a high-tech death trap.

The drive toward efficiency is translated into a drive to do more with less. Get more out of workers while paying them less. Make more parts with fewer machines. Outsourcing is viewed as a way to release capital by transferring investment from skilled domestic human capital to offshore entities not imbued with the same talents, corporate culture and dedication to quality. The benefits to the bottom line are temporary; the long-term pathologies become embedded as the company's market share begins to shrink, as the airlines search for less shoddy alternatives.

You must do one more thing if you are a Boeing director: you must erect barriers to bad news, because there is nothing that bursts a magic bubble faster than reality, particularly if it's bad reality. The illusion that Boeing sought to perpetuate was that it continued to produce the same thing it had produced for decades: namely, a safe, reliable, quality airplane.
But it was doing so with a production apparatus that was stripped, for cost reasons, of many of the means necessary to make good aircraft. So while the wine still came in a bottle signifying Premier Cru quality, and still carried the same price, someone had poured out the contents and replaced them with cheap plonk. And that has become remarkably easy to do in aviation, because Boeing is no longer subject to proper independent regulatory scrutiny. This is what happens when you're allowed to "self-certify" your own airplane, as the Washington Post described: "One Boeing engineer would conduct a test of a particular system on the Max 8, while another Boeing engineer would act as the FAA's representative, signing on behalf of the U.S. government that the technology complied with federal safety regulations." This is a recipe for disaster. Boeing relentlessly cut costs; it outsourced across the globe to workforces that knew nothing about aviation or aviation's safety culture. It sent things everywhere on one criterion and one criterion only: lower the denominator. Make it the same, but cheaper. And then self-certify the plane, so that nobody, including the FAA, was ever the wiser.

Boeing also greased the wheels in Washington to ensure the continuation of this convenient state of regulatory affairs for the company. According to OpenSecrets.org, Boeing and its affiliates spent $15,120,000 in lobbying expenses in 2018, after spending $16,740,000 in 2017 (along with a further $4,551,078 in 2018 political contributions, which placed the company 82nd out of a total of 19,087 contributors). Looking back at these figures over the past four elections (congressional and presidential) since 2012, these numbers represent fairly typical spending sums for the company.

But clever financial engineering, extensive political lobbying and self-certification can't perpetually hold back the effects of shoddy engineering. One of the sad byproducts of the FAA's acquiescence to "self-certification" is how many things fall through the cracks so easily.

#### [Feb 11, 2019] 6 most prevalent problems in the software development world

###### Dec 01, 2018 | www.catswhocode.com


#### [Jan 14, 2019] Quickly move an executable between systems with ELF Statifier Linux.com The source for Linux information by Ben Martin

###### Oct 23, 2008 | monkeyiq.blogspot.com

Shared libraries that are dynamically linked make more efficient use of disk space than those that are statically linked, and more importantly allow you to perform security updates in a more efficient manner, but executables compiled against a particular version of a dynamic library expect that version of the shared library to be available on the machine they run on. If you are running machines with both Fedora 9 and openSUSE 11, the versions of some shared libraries are likely to be slightly different, and if you copy an executable between the machines, the file might fail to execute because of these version differences.

With ELF Statifier you can create a statically linked version of an executable, so the executable includes the shared libraries instead of seeking them at run time. A statically linked executable is much more likely to run on a different Linux distribution or a different version of the same distribution.

Of course, to do this you sacrifice some disk space, because the statically linked executable includes a copy of the shared libraries that it needs, but in these days of terabyte disks the space consideration is less important than the security one. Consider what happens if your executables are dynamically linked to a shared library, say libfoo, and there is a security update to libfoo. When your applications are dynamically linked you can just update the shared copy of libfoo and your applications will no longer be vulnerable to the security issue in the older libfoo. If on the other hand you have a statically linked executable, it will still include and use its own private copy of the old libfoo. You'll have to recreate the statically linked executable to get the newer libfoo and security update.

Still, there are times when you want to take a daemon you compiled on a Fedora machine and run it on your openSUSE machine without having to recompile it and all its dependencies. Sometimes you just want it to execute now and can rebuild it later if desired. Of course, the machine you copy the executable from and the one on which you want to run it must have the same architecture.

ELF Statifier is packaged as a 1-Click install for openSUSE 10.3 but not for Ubuntu Hardy or Fedora. I'll use version 1.6.14 of ELF Statifier and build it from source on a Fedora 9 x86 machine. ELF Statifier does not use autotools, so you compile by simply invoking make. Compilation and installation are shown below.

$ tar xzvf statifier-1.6.14.tar.gz
$ cd ./statifier-*
$ make
$ sudo make install

As an example of how to use the utility, I'll create a statically linked version of the ls binary in the commands shown below. First I create a personal copy of the dynamically linked executable and inspect it to see what it dynamically links to. You run statifier with the path to the dynamically linked executable as the first argument and the path where you want to create the statically linked executable as the second argument. Notice that the ldd command reports that no dynamically linked libraries are required by ls-static. The next command shows that the binary size has grown significantly for the static version of ls.

$ mkdir test
$ cd ./test
$ cp -a /bin/ls ls-dynamic
$ ls -lh
-rwxr-xr-x 1 ben ben 112K 2008-08-01 04:05 ls-dynamic
$ ldd ls-dynamic
        linux-gate.so.1 => (0x00110000)
        librt.so.1 => /lib/librt.so.1 (0x00a3a000)
        libselinux.so.1 => /lib/libselinux.so.1 (0x00a06000)
        libacl.so.1 => /lib/libacl.so.1 (0x00d8a000)
        libc.so.6 => /lib/libc.so.6 (0x0084e000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x009eb000)
        /lib/ld-linux.so.2 (0x0082e000)
        libdl.so.2 => /lib/libdl.so.2 (0x009e4000)
        libattr.so.1 => /lib/libattr.so.1 (0x0606d000)
$ statifier ls-dynamic ls-static
$ ldd ls-static
        not a dynamic executable
$ ls -lh ls-static
-rwxr-x--- 1 ben ben 2.0M 2008-10-03 12:05 ls-static

$ ls-static /tmp
...
$ ls-static -lh
Segmentation fault

As you can see above, the statified ls crashes when you run it with the -l option. If you get segmentation faults when running your statified executables you should disable stack randomization and recreate the statified executable. The stack and address space randomization feature of the Linux kernel makes the locations used for the stack and other important parts of an executable change every time it is executed. Randomizing things each time you run a binary hinders attacks such as the return-to-libc attack because the location of libc functions changes all the time.

You are giving away some security by changing the randomize_va_space parameter as shown below. The change to randomize_va_space affects not only attacks on the executables themselves but also exploit attempts that rely on buffer overflows to compromise the system. Without randomization, both attacks become more straightforward. If you set randomize_va_space to zero as shown below and recreate the ls-static binary, things should work as expected. You'll have to leave the stack randomization feature disabled in order to execute the statified executable.

# cd /proc/sys/kernel
# cat randomize_va_space
2
# echo -n 0 >| randomize_va_space
# cat randomize_va_space
0
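If you would rather not weaken the whole system, an alternative worth knowing about is util-linux's setarch, whose -R (--addr-no-randomize) flag turns off address-space randomization for a single process and its children only. A minimal sketch, assuming setarch is installed (it ships with util-linux on most distributions):

```shell
# Run the statified binary with ASLR disabled for this one process
# only; the global randomize_va_space sysctl is left untouched.
setarch "$(uname -m)" -R ./ls-static -lh
```

This keeps the randomization protection in place for every other program on the machine.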

There are a few other tricks up statifier's sleeve: you can set or unset environment variables for the statified executable, and include additional libraries (LD_PRELOAD libraries) into the static executable. Being able to set additional environment variables for a static executable is useful when the binary you are statifying relies on finding additional resources like configuration files. If the binary allows you to tell it where to find its resources through environment variables, you can include these settings directly into the statified executable.

The ability to include preloaded shared libraries into the statified binary (LD_PRELOADing) is probably a less commonly used feature. One use is including additional functionality such as making the statically linked executable "trashcan friendly" by default, perhaps using delsafe , but without needing to install any additional software on the machine that is running the statically linked executable.

Security measures that randomize the address space of binaries might interfere with ELF Statifier and cause it not to work. But when you just want to move the execution of an application to another Linux machine, ELF Statifier might get you up and running without the hassle of a recompile.


#### [Dec 27, 2018] The Yoda of Silicon Valley by Siobhan Roberts

##### "... One good teacher makes all the difference in life. More than one is a rare blessing. ..."
###### Dec 17, 2018 | www.nytimes.com

With more than one million copies in print, "The Art of Computer Programming " is the Bible of its field. "Like an actual bible, it is long and comprehensive; no other book is as comprehensive," said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: "You should definitely send me a résumé if you can read the whole thing."

The volume opens with an excerpt from " McCall's Cookbook ":

Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.

Inside are algorithms, the recipes that feed the digital age -- although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field's most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text -- for instance, when you hit Command+F to search for a keyword in a document.

... ... ...

During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, "optimization" is truly an art, and this is articulated in another Knuthian proverb: "Premature optimization is the root of all evil."

Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the "analysis of algorithms." A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers -- a book about algorithms.

... ... ...

When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installment, "Volume 4, Fascicle 5," covering, among other things, "backtracking" and "dancing links," was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present.

In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor's defining characteristic even in the early 1980s.

Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth's greatest contribution to the world, and the greatest contribution to typography since Gutenberg.

This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, "When I told my girlfriend that we can't do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, 'This is something that is so stupid it must be true.'"

... ... ...

Lucky, then, Dr. Knuth keeps at it. He figures it will take another 25 years to finish "The Art of Computer Programming," although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? "Definitely not," said Dr. Knuth.

"I am worried that algorithms are getting too prominent in the world," he added. "It started out that computer scientists were worried nobody was listening to us. Now I'm worried that too many people are listening."

Scott Kim Burlingame, CA Dec. 18

Thanks Siobhan for your vivid portrait of my friend and mentor. When I came to Stanford as an undergrad in 1973 I asked who in the math dept was interested in puzzles. They pointed me to the computer science dept, where I met Knuth and we hit it off immediately. Not only a great thinker and writer, but as you so well described, always present and warm in person. He was also one of the best teachers I've ever had -- clear, funny, and interested in every student (his elegant policy was each student can only speak twice in class during a period, to give everyone a chance to participate, and he made a point of remembering everyone's names). Some thoughts from Knuth I carry with me: finding the right name for a project is half the work (not literally true, but he labored hard on finding the right names for TeX, Metafont, etc.), always do your best work, half of why the field of computer science exists is because it is a way for mathematically minded people who like to build things can meet each other, and the observation that when the computer science dept began at Stanford one of the standard interview questions was "what instrument do you play" -- there was a deep connection between music and computer science, and indeed the dept had multiple string quartets. But in recent decades that has changed entirely. If you do a book on Knuth (he deserves it), please be in touch.

IMiss America US Dec. 18

I remember when programming was art. I remember when programming was programming. These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update.

AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. We should be in a golden age of computing. Instead, we are cutting all corners to get something out as fast as possible. The technology exists to do far more. It is the human element that fails us.

Ronald Aaronson Armonk, NY Dec. 18

My particular field of interest has always been compiler writing and have been long awaiting Knuth's volume on that subject. I would just like to point out that among Kunth's many accomplishments is the invention of LR parsers, which are widely used for writing programming language compilers.

Edward Snowden Russia Dec. 18

Yes, \TeX, and its derivative, \LaTeX{} contributed greatly to being able to create elegant documents. It is also available for the web in the form MathJax, and it's about time the New York Times supported MathJax. Many times I want one of my New York Times comments to include math, but there's no way to do so! It comes up equivalent to: $e^{i\pi}+1$.

henry pick new york Dec. 18

I read it at the time, because what I really wanted to read was volume 7, Compilers. As I understood it at the time, Professor Knuth wrote it in order to make enough money to build an organ. That apparently happened by 3:Knuth, Searching and Sorting. The most impressive part is the mathematics in Semi-numerical (2:Knuth). A lot of those problems are research projects over the literature of the last 400 years of mathematics.

Steve Singer Chicago Dec. 18

I own the three volume "Art of Computer Programming", the hardbound boxed set. Luxurious. I don't look at it very often thanks to time constraints, given my workload. But your article motivated me to at least pick it up and carry it from my reserve library to a spot closer to my main desk so I can at least grab Volume 1 and try to read some of it when the mood strikes. I had forgotten just how heavy it is, intellectual content aside. It must weigh more than 25 pounds.

Terry Hayes Los Altos, CA Dec. 18

I too used my copies of The Art of Computer Programming to guide me in several projects in my career, across a variety of topic areas. Now that I'm living in Silicon Valley, I enjoy seeing Knuth at events at the Computer History Museum (where he was a 1998 Fellow Award winner), and at Stanford. Another facet of his teaching is the annual Christmas Lecture, in which he presents something of recent (or not-so-recent) interest. The 2018 lecture is available online - https://www.youtube.com/watch?v=_cR9zDlvP88

Chris Tong Kelseyville, California Dec. 17

One of the most special treats for first year Ph.D. students in the Stanford University Computer Science Department was to take the Computer Problem-Solving class with Don Knuth. It was small and intimate, and we sat around a table for our meetings. Knuth started the semester by giving us an extremely challenging, previously unsolved problem. We then formed teams of 2 or 3. Each week, each team would report progress (or lack thereof), and Knuth, in the most supportive way, would assess our problem-solving approach and make suggestions for how to improve it. To have a master thinker giving one feedback on how to think better was a rare and extraordinary experience, from which I am still benefiting! Knuth ended the semester (after we had all solved the problem) by having us over to his house for food, drink, and tales from his life. . . And for those like me with a musical interest, he let us play the magnificent pipe organ that was at the center of his music room. Thank you Professor Knuth, for giving me one of the most profound educational experiences I've ever had, with such encouragement and humor!

Been there Boulder, Colorado Dec. 17

I learned about Dr. Knuth as a graduate student in the early 70s from one of my professors and made the financial sacrifice (graduate student assistantships were not lucrative) to buy the first and then the second volume of the Art of Computer Programming. Later, at Bell Labs, when I was a bit richer, I bought the third volume. I have those books still and have used them for reference for years. Thank you Dr, Knuth. Art, indeed!

Gianni New York Dec. 18

@Trerra In the good old days, before Computer Science, anyone could take the Programming Aptitude Test. Pass it and companies would train you. Although there were many mathematicians and scientists, some of the best programmers turned out to be music majors. English, Social Sciences, and History majors were represented as well as scientists and mathematicians. It was a wonderful atmosphere to work in. When I started to look for a job as a programmer, I took Prudential Life Insurance's version of the Aptitude Test. After the test, the interviewer was all bent out of shape because my verbal score was higher than my math score; I was a physics major. Luckily they didn't hire me and I got a job with IBM.

M Martínez Miami Dec. 17

In summary, "May the force be with you" means: Did you read Donald Knuth's "The Art of Computer Programming"? Excellent, we loved this article. We will share it with many young developers we know.

mds USA Dec. 17

Dr. Knuth is a great Computer Scientist. Around 25 years ago, I met Dr. Knuth in a small gathering a day before he was awarded an honorary doctorate at a university. This is my approximate recollection of a conversation. I said, "Dr. Knuth, you have dedicated your book to a computer (one with which he had spent a lot of time, perhaps a predecessor to PDP-11). Isn't it unusual?" He said, "Well, I love my wife as much as anyone." He then turned to his wife and said, "Don't you think so?" It would be nice if scientists with the gift of such great minds tried to address some problems of ordinary people, e.g. a model of economy where everyone can get a job and health insurance, say, like Dr. Paul Krugman.

Nadine NYC Dec. 17

I was in a training program for women in computer systems at CUNY graduate center, and they used his obtuse book. It was one of the reasons I dropped out. He used a fantasy language to describe his algorithms in his book that one could not test on computers. I already had work experience as a programmer with algorithms and I know how valuable real languages are. I might as well have read Animal Farm. It might have been different if he was the instructor.

Doug McKenna Boulder Colorado Dec. 17

Don Knuth's work has been a curious thread weaving in and out of my life. I was first introduced to Knuth and his The Art of Computer Programming back in 1973, when I was tasked with understanding a section of the then-only-two-volume Book well enough to give a lecture explaining it to my college algorithms class. But when I first met him in 1981 at Stanford, he was all-in on thinking about typography and this new-fangled system of his called TeX. Skip a quarter century. One day in 2009, I foolishly decided kind of on a whim to rewrite TeX from scratch (in my copious spare time), as a simple C library, so that its typesetting algorithms could be put to use in other software such as electronic eBooks with high-quality math typesetting and interactive pictures. I asked Knuth for advice. He warned me, prepare yourself, it's going to consume five years of your life. I didn't believe him, so I set off and tried anyway. As usual, he was right.

Baddy Khan San Francisco Dec. 17

I have a signed copy of "Fundamental Algorithms" in my library, which I treasure. Knuth was a fine teacher, and is truly a brilliant and inspiring individual. He taught during the same period as Vint Cerf, another wonderful teacher with a great sense of humor who is truly a "father of the internet". One good teacher makes all the difference in life. More than one is a rare blessing.

Indisk Fringe Dec. 17

I am a biologist, specifically a geneticist. I became interested in LaTeX typesetting early in my career and have been either called pompous or vilified by people at all levels for wanting to use. One of my PhD advisors famously told me to forget LaTeX because it was a thing of the past. I have now forgotten him completely. I still use LaTeX almost every day in my work even though I don't generally typeset with equations or algorithms. My students always get trained in using proper typesetting. Unfortunately, the publishing industry has all but largely given up on TeX. Very few journals in my field accept TeX manuscripts, and most of them convert to word before feeding text to their publishing software. Whatever people might argue against TeX, the beauty and elegance of a property typeset document is unparalleled. Long live LaTeX

PaulSFO San Francisco Dec. 17

A few years ago Severo Ornstein (who, incidentally, did the hardware design for the first router, in 1969), and his wife Laura, hosted a concert in their home in the hills above Palo Alto. During a break a friend and I were chatting when a man came over and *asked* if he could chat with us (a high honor, indeed). His name was Don. After a few minutes I grew suspicious and asked "What's your last name?" Friendly, modest, brilliant; a nice addition to our little chat.

Tim Black Wilmington, NC Dec. 17

When I was a physics undergraduate (at Trinity in Hartford), I was hired to re-write professor's papers into TeX. Seeing the beauty of TeX, I wrote a program that re-wrote my lab reports (including graphs!) into TeX. My lab instructors were amazed! How did I do it? I never told them. But I just recognized that Knuth was a genius and rode his coat-tails, as I have continued to do for the last 30 years!

Jack512 Alexandria VA Dec. 17

A famous quote from Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it." Anyone who has ever programmed a computer will feel the truth of this in their bones.

#### [Dec 11, 2018] Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up.

###### Dec 11, 2018 | www.ianwelsh.net

S Brennan permalink April 24, 2016

My grandfather, in the early 60's could board a 707 in New York and arrive in LA in far less time than I can today. And no, I am not counting 4 hour layovers with the long waits to be "screened", the jets were 50-70 knots faster, back then your time was worth more, today less.

Not counting longer hours AT WORK, we spend far more time commuting making for much longer work days, back then your time was worth more, today less!

Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up. Think about the almost perfect Google Maps driver interface being redesigned by people who take private buses to work. Way back in the '90's your time was worth more than today!

Life is all the "time" YOU will ever have and if we let the elite do so, they will suck every bit of it out of you.

#### [Nov 07, 2018] The Computer Languages Employers Want Most in Silicon Valley

###### Nov 07, 2018 | qz.com

Quartz
Michael J. Coren
November 2, 2018

The Indeed jobs website determined, by counting the most requested computer languages in technology job postings, that Java and Python were most desired by employers across the U.S. The site compared posts from employers in San Francisco, San Jose, and the wider U.S. between October 2017 and October 2018. Analysis found most of the non-Python or non-Java languages to be a reflection of the digital economy's demands, with HTML, CSS, and JavaScript buttressing the everyday Web. Meanwhile, SQL and PHP drive back-end functions such as data retrieval and dynamic content display. Although languages from tech giants such as Microsoft's C# and Apple's Swift for iOS and macOS applications were not among the top 10, both were cited as among the language skills most wanted by developers. Meanwhile, Amazon Web Services, which has proved vital to cloud computing, did crack the top 10.

#### [Nov 05, 2018] Revisiting the Unix philosophy in 2018 Opensource.com by Michael Hausenblas

###### Nov 05, 2018 | opensource.com

Revisiting the Unix philosophy in 2018 The old strategy of building small, focused applications is new again in the modern microservices environment.

In 1984, Rob Pike and Brian W. Kernighan published "Program Design in the Unix Environment" in the AT&T Bell Laboratories Technical Journal, in which they argued the Unix philosophy, using the example of BSD's cat -v implementation. In a nutshell that philosophy is: Build small, focused programs -- in whatever language -- that do only one thing but do this thing well, communicate via stdin/stdout, and are connected through pipes.

Sound familiar?

Yeah, I thought so. That's pretty much the definition of microservices offered by James Lewis and Martin Fowler:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

While one *nix program or one microservice may be very limited or not even very interesting on its own, it's the combination of such independently working units that reveals their true benefit and, therefore, their power.

*nix vs. microservices

The following table compares programs (such as cat or lsof ) in a *nix environment against programs in a microservices environment.

| | *nix | Microservices |
|---|---|---|
| Unit of execution | program using stdin/stdout | service with HTTP or gRPC API |
| Data flow | pipes | ? |
| Configuration & parameterization | command-line arguments, environment variables, config files | JSON/YAML docs |
| Discovery | package manager, man, make | DNS, environment variables, OpenAPI |

Let's explore each line in slightly greater detail.

Unit of execution


The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from stdin and writes output to stdout. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.

Data flow

Traditionally, *nix programs could communicate via pipes. In other words, thanks to Doug McIlroy , you don't need to create temporary files to pass around and each can process virtually endless streams of data between processes. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little Apache Kafka-based experiment from 2017 .
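The pipe mechanism is easy to demonstrate; a minimal sketch using a named pipe (FIFO), which behaves like an anonymous | pipe but has a filesystem name, so two independent processes can exchange a stream with no temporary file ever holding the data:

```shell
# Stream numbers from a writer process to a reader process
# through a named pipe; no temporary file stores the data set.
tmp=$(mktemp -d)
mkfifo "$tmp/stream"

# The writer blocks until a reader opens the other end.
seq 1 5 > "$tmp/stream" &

# The reader consumes the stream as it arrives and sums it.
total=$(awk '{ s += $1 } END { print s }' "$tmp/stream")
echo "$total"   # prints 15

rm -r "$tmp"
```

The same mechanics underlie every `a | b` pipeline: the kernel buffers the stream between the two processes, and each side can process arbitrarily long input.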

Configuration and parameterization

How do you configure a program or service -- either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions , Nomad job specifications , or Docker Compose files. These may or may not be parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed -i commands.
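The three classic channels can coexist in one tool, with the conventional precedence of flag over environment variable over config file. A hypothetical sketch -- the script, GREET_NAME, and greet.conf are illustrative names, not a real utility:

```shell
#!/bin/sh
# Hypothetical tool showing the three classic *nix configuration
# channels and the usual precedence among them:
#   command-line flag > environment variable > config file > default

NAME="world"                                  # built-in default

# 1. Config file: the lowest-precedence explicit channel.
if [ -f ./greet.conf ]; then
    . ./greet.conf                            # may set NAME=...
fi

# 2. An environment variable overrides the config file.
if [ -n "$GREET_NAME" ]; then
    NAME="$GREET_NAME"
fi

# 3. A command-line flag overrides everything.
while getopts "n:" opt; do
    case "$opt" in
        n) NAME="$OPTARG" ;;
    esac
done

echo "hello, $NAME"
```

Invoked as `GREET_NAME=ops ./greet.sh -n dev` it prints `hello, dev`; drop the flag and the environment variable wins; drop both and it falls back to the config file or the built-in default.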

Discovery

How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there's a bit more automation in finding a service. In addition to bespoke approaches like Airbnb's SmartStack or Netflix's Eureka , there usually are environment variable-based or DNS-based approaches that allow you to discover services dynamically. Equally important, OpenAPI provides a de-facto standard for HTTP API documentation and design, and gRPC does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good Makefiles and ending with writing your docs with (or in?) style .

Pros and cons

Both *nix and microservices offer a number of challenges and opportunities.

Composability

It's hard to design something that has a clear, sharp focus and can also play well with others. It's even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts -- maybe it's a better option to outsource these features into a service mesh? It's hard, but if you get it right, its reusability can be enormous.

Observability

In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather straightforward to find the culprit when things go south. But, in a

yes | tr \\n x | head -c 450m | grep n


or a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily, we have standards, notably OpenCensus and OpenTracing. Observability might still be the single biggest blocker if you are looking to move to microservices.
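The core mechanism behind those tracing standards is a correlation ID that travels with the request through every hop. The "services" below are just shell functions standing in for real ones; real systems carry the ID in a header such as the W3C traceparent.

```shell
# Toy illustration of trace propagation: one ID accompanies the
# request through every "service", so the request path can later be
# reassembled by grepping all the services' logs for that ID.

trace_id=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')

service_b() {
  echo "service-b trace=$1 querying storage"
}
service_a() {
  echo "service-a trace=$1 handling request"
  service_b "$1"
}

service_a "$trace_id"
# e.g. grep "trace=$trace_id" *.log would reconstruct the 20-hop path
```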

Global state

While it may not be such a big issue for *nix programs, in microservices global state remains a topic of ongoing discussion: how do you make sure the local (persistent) state is managed effectively, and how do you keep the global state consistent with as little effort as possible?

Wrapping up

In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, a monolith might be the best option for your organization or workload. Regardless, I hope this article helps you see the many strong parallels between the Unix philosophy and microservices -- maybe we can learn something from the former to benefit the latter.

Michael Hausenblas is a Developer Advocate for Kubernetes and OpenShift at Red Hat, where he helps AppOps to build and operate apps. His background is in large-scale data processing and container orchestration, and he's experienced in advocacy and standardization at the W3C and IETF. Before Red Hat, Michael worked at Mesosphere, MapR, and in two research institutions in Ireland and Austria. He contributes to open source software including Kubernetes, speaks at conferences and user groups, and shares good practices...

#### [Nov 05, 2018] The Linux Philosophy for SysAdmins And Everyone Who Wants To Be One eBook by David Both

###### Nov 05, 2018 | www.amazon.com

Elegance is one of those things that can be difficult to define. I know it when I see it, but putting what I see into a terse definition is a challenge. Using the Linux dict command, Wordnet provides one definition of elegance as, "a quality of neatness and ingenious simplicity in the solution of a problem (especially in science or mathematics); 'the simplicity and elegance of his invention.'"

In the context of this book, I think that elegance is a state of beauty and simplicity in the design and working of both hardware and software. When a design is elegant,
software and hardware work better and are more efficient. The user is aided by simple, efficient, and understandable tools.

Creating elegance in a technological environment is hard. It is also necessary. Elegant solutions produce elegant results and are easy to maintain and fix. Elegance does not happen by accident; you must work for it.

The quality of simplicity is a large part of technical elegance. So large, in fact that it deserves a chapter of its own, Chapter 18, "Find the Simplicity," but we do not ignore it here. This chapter discusses what it means for hardware and software to be elegant.

Hardware Elegance

Yes, hardware can be elegant -- even beautiful, pleasing to the eye. Hardware that is well designed is more reliable as well. Elegant hardware solutions improve reliability.

#### [Oct 27, 2018] One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip.

###### Oct 27, 2018 | www.moonofalabama.org

Piotr Berman, Oct 26, 2018 2:55:29 PM | 5

"Even Microsoft, the biggest software company in the world, recently screwed up..."

Isn't it rather logical that the larger a company is, the more screw-ups it can make? After all, Microsoft has armies of programmers to make those bugs.

Once I made a joke that the best way to disable missile defense would be to have a rocket that can stop in mid-air, thus provoking the software to divide by zero and crash. One day I told that joke to a military officer, who told me that something like that had actually happened, but it was in the Navy and it involved a test with a torpedo. Not only did the program for "torpedo defense" go down, but the system crashed too, and the engine of the ship stopped working as well. I also recall explanations that a new complex software system typically has all major bugs removed only after being used for a year; the occasion was the Internal Revenue Service changing hardware and software, leading to widely reported problems.

One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip. Of course, they do not make money on bugs per se, but on new features that in time make it impossible to use older versions of the software and hardware.

#### [Sep 21, 2018] 'It Just Seems That Nobody is Interested in Building Quality, Fast, Efficient, Lasting, Foundational Stuff Anymore'

###### Sep 21, 2018 | tech.slashdot.org

Nikita Prokopov, a software programmer and author of Fira Code, a popular programming font, AnyBar, a universal status indicator, and some open-source Clojure libraries, writes :

Remember the times when an OS, apps, and all your data fit on a floppy? Your desktop todo app is probably written in Electron and thus has a userland driver for an Xbox 360 controller in it, can render 3D graphics, play audio, and take photos with your web camera. A simple text chat is notorious for its load speed and memory consumption. Yes, you really have to count Slack as a resource-heavy application. I mean, a chatroom and a barebones text editor are supposed to be two of the least demanding apps in the whole world. Welcome to 2018.

At least it works, you might say. Well, bigger doesn't imply better. Bigger means someone has lost control. Bigger means we don't know what's going on. Bigger means complexity tax, performance tax, reliability tax. This is not the norm and should not become the norm. Overweight apps should be a red flag. They should mean run away scared. A 16GB Android phone was perfectly fine 3 years ago. Today, with Android 8.1, it's barely usable because each app has become at least twice as big for no apparent reason. There are no additional functions. They are not faster or more optimized. They don't look different. They just... grow?

The iPhone 4S was released with iOS 5, but can barely run iOS 9. And it's not because iOS 9 is that much superior -- it's basically the same. But their new hardware is faster, so they made the software slower. Don't worry -- you got exciting new capabilities like... running the same apps at the same speed! I dunno. [...] Nobody understands anything at this point. Nor do they want to. We just throw barely baked shit out there, hope for the best, and call it "startup wisdom." Web pages ask you to refresh if anything goes wrong. Who has time to figure out what happened? Any web app produces a constant stream of "random" JS errors in the wild, even on compatible browsers.

[...] It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs. Build systems are inherently unreliable and periodically require a full clean, even though all the information needed for invalidation is there. Nothing stops us from making the build process reliable, predictable, and 100% reproducible. Just nobody thinks it's important. NPM has stayed in its "sometimes works" state for years.

K. S. Kyosuke ( 729550 ) , Friday September 21, 2018 @11:32AM ( #57354556 )

Re:Why should they? ( Score: 4 , Insightful)

Less resource use to accomplish the required tasks? Both in manufacturing (more chips from the same amount of manufacturing input) and in operation (less power used)?

K. S. Kyosuke ( 729550 ) writes: on Friday September 21, 2018 @11:58AM ( #57354754 )
Re:Why should they? ( Score: 2 )

Ehm... so for example using smaller cars with better mileage to commute isn't more environmentally friendly either, according to you?

DontBeAMoran ( 4843879 ) writes: on Friday September 21, 2018 @12:04PM ( #57354826 )
Re:Why should they? ( Score: 2 )

iPhone 4S used to be the best and could run all the applications.

Today, the same power is not sufficient because of software bloat. So you could say that all the iPhones since the iPhone 4S are devices that were created and then dumped for no reason.

It doesn't matter, since we can't change the past, and it matters less going forward, since improvements are slowing down and people are changing their phones less often.

Mark of the North ( 19760 ) , Friday September 21, 2018 @01:02PM ( #57355296 )
Re:Why should they? ( Score: 5 , Interesting)

Can you really not see the connection between inefficient software and environmental harm? All those computers running code that uses four times as much data, and four times the number crunching, as is reasonable? That excess RAM and storage has to be built as well as powered along with the CPU. Those material and electrical resources have to come from somewhere.

But the calculus changes completely when the software manufacturer hosts the software (or pays for the hosting) for their customers. Our projected AWS bill motivated our management to let me write the sort of efficient code I've been trained to write. After two years of maintaining some pretty horrible legacy code, it is a welcome change.

The big players care a great deal about efficiency when they can't outsource inefficiency to the user's computing resources.

eth1 ( 94901 ) , Friday September 21, 2018 @11:45AM ( #57354656 )
Re:Why should they? ( Score: 5 , Informative)
We've been trained to be a consuming society of disposable goods. The latest and greatest feature will always be more important than something that is reliable and durable for the long haul.

It's not just consumer stuff.

The network team I'm a part of has been dealing with more and more frequent outages, 90% of which are due to bugs in software running our devices. These aren't fly-by-night vendors either, they're the "no one ever got fired for buying X" ones like Cisco, F5, Palo Alto, EMC, etc.

10 years ago, outages were 10% bugs, and 90% human error, now it seems to be the other way around. Everyone's chasing features, because that's what sells, so there's no time for efficiency/stability/security any more.

LucasBC ( 1138637 ) , Friday September 21, 2018 @12:05PM ( #57354836 )
Re:Why should they? ( Score: 3 , Interesting)

Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software. This, in turn, leads to people having to replace computers that are otherwise working well, solely to keep up with software that requires more and more system resources for no tangible benefit. In a nutshell -- sloppy, lazy programming leads to more technology waste. That impacts the environment.

I have a unique perspective on this topic. I do web development for a company that does electronics recycling. I have suffered the continued bloat in software in the tools I use (most egregiously, Adobe), and I see the impact of technological waste in the increasing amount of electronics recycling that is occurring. Ironically, I'm working at home today because my computer at the office kept stalling every time I had Photoshop and Illustrator open at the same time. A few years ago that wasn't a problem.

arglebargle_xiv ( 2212710 ) writes:
Re: ( Score: 3 )

There is one place where people still produce stuff like the OP wants, and that's embedded. Not IoT wank, but real embedded, running on CPUs clocked at tens of MHz with RAM in two-digit kilobyte (not megabyte or gigabyte) quantities. And a lot of that stuff is written to very exacting standards, particularly where something like realtime control and/or safety is involved.

The one problem in this area is the endless battle with standards morons who begin each standard with an implicit "assume an infinitely

commodore64_love ( 1445365 ) , Friday September 21, 2018 @03:58PM ( #57356680 ) Journal
Re:Why should they? ( Score: 3 )

> Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software.

Not just computers.

You can add Smart TVs, set-top internet boxes, Kindles, tablets, et cetera, that must be thrown away when they become too old (say, 5 years) to run the latest bloatware. Software non-engineering is causing a lot of working hardware to be landfilled, and for no good reason.

#### [Sep 21, 2018] Fast, cheap (efficient) and reliable (robust, long lasting): pick 2

###### Sep 21, 2018 | tech.slashdot.org

JoeDuncan ( 874519 ) , Friday September 21, 2018 @12:58PM ( #57355276 )

Obligatory ( Score: 2 )

Fast, cheap (efficient) and reliable (robust, long lasting): pick 2.

roc97007 ( 608802 ) , Friday September 21, 2018 @12:16PM ( #57354946 ) Journal
Re:Bloat = growth ( Score: 2 )

There's probably some truth to that. And it's a sad commentary on the industry.

#### [Sep 21, 2018] Since Moore's law appears to have stalled since at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.

###### Sep 21, 2018 | tech.slashdot.org

Anonymous Coward , Friday September 21, 2018 @11:26AM ( #57354512 )

Moore's law ( Score: 5 , Interesting)

When the speed of your processor doubles every two years, along with a concurrent doubling of RAM and disk space, then you can get away with bloatware.

Since Moore's law appears to have stalled since at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.

#### [Sep 16, 2018] After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and light manner that didn't demand too much of the hardware, if I remember correctly

##### "... I recall flowcharting entirely on paper before committing a program to punched cards. ..."
###### Aug 01, 2018 | turcopolier.typepad.com

Bill Herschel 2 days ago ,

Very, very slightly off-topic.

Much has been made, including in this post, of the excellent organization of Russian forces and Russian military technology.

I have been re-investigating an open-source relational database system known as PostgreSQL, and I remember finding perhaps a decade ago a very useful full-text search feature of this system, which I vaguely remember was written by a Russian and, for that reason, mildly distrusted by me.

Come to find out that the principal developers and maintainers of PostgreSQL are Russian. OMG. Double OMG, because the reason I chose it in the first place is that it is the best non-proprietary RDBMS out there, and today it is supported on Google Cloud, AWS, etc.

The US has met an equal or conceivably a superior, case closed. Trump's thoroughly odd behavior with Putin is just one but a very obvious one example of this.

Of course, Trump's nationalistic blather is creating a "base" of people who believe in the godliness of the US. They are in for a very serious disappointment.

kao_hsien_chih Bill Herschel a day ago ,

After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and "light" manner that didn't demand too much of the hardware, if I remember correctly.

It's a bit of a chicken-and-egg problem, though. Russia, throughout the 20th century, had problems developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another.

Russia has plenty of very skilled, very well-trained folks, and their science and math education is, in a way, more fundamentally and soundly grounded on the foundational stuff than in the US (based on my personal interactions, anyway).

Russian tech people should always be viewed with a certain amount of awe and respect... although they are hardly good at everything.

TTG kao_hsien_chih a day ago ,

Well said. Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer.

It gave the students an understanding of how computers work down to the bit-flipping level. Imagine trying to fuzz code in your head.

FarNorthSolitude TTG a day ago ,

I recall flowcharting entirely on paper before committing a program to punched cards. I used to do hex and octal math in my head as part of debugging core dumps. Ah, the glory days.

Honeywell once made a military computer that was 10-bit. That stumped me for a while, as everything was 8- or 16-bit back then.

kao_hsien_chih FarNorthSolitude 10 hours ago ,

That used to be fairly common in the civilian sector (in US) too: computing time was expensive, so you had to make sure that the stuff worked flawlessly before it was committed.

No opportunity to see things go wrong and do things over, like much of how things happen nowadays. Russians, with their hardware limitations/shortages, I imagine must have been much more thorough than US programmers were back in the old days, and you could only get there by being very thoroughly grounded in the basics.

#### [Sep 07, 2018] How Can We Fix The Broken Economics of Open Source?

##### "... [with some subset of features behind a paywall] ..."
###### Sep 07, 2018 | news.slashdot.org

If we take consulting, services, and support off the table as an option for high-growth revenue generation (the only thing VCs care about), we are left with open core [with some subset of features behind a paywall] , software as a service, or some blurring of the two... Everyone wants infrastructure software to be free and continuously developed by highly skilled professional developers (who in turn expect to make substantial salaries), but no one wants to pay for it. The economics of this situation are unsustainable and broken ...

[W]e now come to what I have recently called "loose" open core and SaaS. In the future, I believe the most successful OSS projects will be primarily monetized via this method. What is it? The idea behind "loose" open core and SaaS is that a popular OSS project can be developed as a completely community driven project (this avoids the conflicts of interest inherent in "pure" open core), while value added proprietary services and software can be sold in an ecosystem that forms around the OSS...

Unfortunately, there is an inflection point at which in some sense an OSS project becomes too popular for its own good, and outgrows its ability to generate enough revenue via either "pure" open core or services and support... [B]uilding a vibrant community and then enabling an ecosystem of "loose" open core and SaaS businesses on top appears to me to be the only viable path forward for modern VC-backed OSS startups.
Klein also suggests OSS foundations start providing fellowships to key maintainers, who currently "operate under an almost feudal system of patronage, hopping from company to company, trying to earn a living, keep the community vibrant, and all the while stay impartial..."

"[A]s an industry, we are going to have to come to terms with the economic reality: nothing is free, including OSS. If we want vibrant OSS projects maintained by engineers that are well compensated and not conflicted, we are going to have to decide that this is something worth paying for. In my opinion, fellowships provided by OSS foundations and funded by companies generating revenue off of the OSS is a great way to start down this path."

#### [Apr 30, 2018] New Book Describes Bluffing Programmers in Silicon Valley

##### "... Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers. ..."
###### Apr 30, 2018 | news.slashdot.org

Long-time Slashdot reader Martin S. pointed us to this an excerpt from the new book Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley by Portland-based investigator reporter Corey Pein.

The author shares what he realized at a job recruitment fair seeking "Java Legends, Python Badasses, Hadoop Heroes," and other gratingly childish classifications describing various programming specialities.

"I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering talent.

I was struck by how many developers were, like myself, not really programmers , but rather this, that and the other. A great number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with "open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste...

[M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software. The programmers also knew that the fastest way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme for the transformation of society, always in a direction that favoured the investor and executive classes.

"I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily indoctrinated in a specious ideology."

Anonymous Coward , Saturday April 28, 2018 @11:40PM ( #56522045 )

older generations already had a term for this ( Score: 5 , Interesting)

Older generations called this kind of fraud "fake it 'til you make it."

raymorris ( 2726007 ) , Sunday April 29, 2018 @02:05AM ( #56522343 ) Journal
The people who are smarter won't ( Score: 5 , Informative)

> The people can do both are smart enough to build their own company and compete with you.

Been there, done that. Learned a few lessons. Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring, managing people, corporate strategy, staying up on the competition, figuring out tax changes each year and getting taxes filed six times each year, the various state and local requirements, legal changes, contract hassles, etc., while hoping the company makes money this month so they can take a paycheck and pay their rent.

I learned that I'm good at creating software systems and I enjoy it. I don't enjoy all-nighters, partners being dickheads trying to pull out of a contract, or any of a thousand other things related to running a start-up business. I really enjoy a consistent, six-figure compensation package too.

brian.stinar ( 1104135 ) writes:
Re: ( Score: 2 )

* getting taxes filed eighteen times a year.

I pay monthly gross receipts tax (12), quarterly withholdings (4) and a corporate (1) and individual (1) returns. The gross receipts can vary based on the state, so I can see how six times a year would be the minimum.

###### Sep 04, 2012 | sanctum.geek.nz

If you don't have root access on a particular GNU/Linux system that you use, or if you don't want to install anything to the system directories and potentially interfere with others' work on the machine, one option is to build your favourite tools in your $HOME directory. This can be useful if there's some particular piece of software that you really need for whatever reason, particularly on legacy systems that you share with other users or developers. The process can include not just applications, but libraries as well; you can link against a mix of your own libraries and the system's libraries as you need.

Preparation

In most cases this is actually quite a straightforward process, as long as you're allowed to use the system's compiler and any relevant build tools such as autoconf. If the ./configure script for your application allows a --prefix option, this is generally a good sign; you can normally test this with --help:

$ mkdir src
$ cd src
$ wget -q http://fooapp.example.com/fooapp-1.2.3.tar.gz
$ tar -xf fooapp-1.2.3.tar.gz
$ cd fooapp-1.2.3
$ pwd
/home/tom/src/fooapp-1.2.3
$ ./configure --help | grep -- --prefix
  --prefix=PREFIX    install architecture-independent files in PREFIX


Don't do this if the security policy on your shared machine explicitly disallows compiling programs! However, it's generally quite safe as you never need root privileges at any stage of the process.

Naturally, this is not a one-size-fits-all process; the build process will vary for different applications, but it's a workable general approach to the task.

Installing

Configure the application or library with the usual call to ./configure, but use your home directory for the prefix:

$ ./configure --prefix=$HOME

If you want to include headers or link against libraries in your home directory, it may be appropriate to add definitions for CFLAGS and LDFLAGS to refer to those directories:

$ CFLAGS="-I$HOME/include" \
> LDFLAGS="-L$HOME/lib" \
> ./configure --prefix=$HOME


Some configure scripts instead allow you to specify the path to particular libraries. Again, you can generally check this with --help .

$ ./configure --prefix=$HOME --with-foolib=$HOME/lib

You should then be able to install the application with the usual make and make install, needing root privileges for neither:

$ make
$ make install

If successful, this process will insert files into directories like $HOME/bin and $HOME/lib. You can then try to call the application by its full path:

$ $HOME/bin/fooapp -v
fooapp v1.2.3

Environment setup

To make this work smoothly, it's best to add to a couple of environment variables, probably in your .bashrc file, so that you can use the home-built application transparently. First of all, if you linked the application against libraries also in your home directory, it will be necessary to add the library directory to LD_LIBRARY_PATH, so that the correct libraries are found and loaded at runtime:

$ /home/tom/bin/fooapp -v
/home/tom/bin/fooapp: error while loading shared libraries: libfoo.so: cannot open shared...
Could not load library foolib
$ export LD_LIBRARY_PATH=$HOME/lib
$ /home/tom/bin/fooapp -v
fooapp v1.2.3

An obvious one is adding the $HOME/bin directory to your $PATH so that you can call the application without typing its path:

$ fooapp -v
$ export PATH="$HOME/bin:$PATH"
$ fooapp -v
fooapp v1.2.3


Similarly, defining MANPATH so that calls to man will read the manual for your build of the application first is worthwhile. You may find that $MANPATH is empty by default, so you will need to append other manual locations to it. An easy way to do this is by appending the output of the manpath utility:

$ man -k fooapp
$ manpath
/usr/local/man:/usr/local/share/man:/usr/share/man
$ export MANPATH="$HOME/share/man:$(manpath)"
$ man -k fooapp
fooapp (1) - Fooapp, the programmer's foo apper

This done, you should be able to use your private build of the software comfortably, and all without ever needing to reach for root.

Caveats

This tends to work best for userspace tools like editors or other interactive command-line apps; it even works for shells. However this is not a typical use case for most applications, which expect to be packaged or compiled into /usr/local, so there are no guarantees it will work exactly as expected. I have found that Vim and Tmux work very well like this, even with Tmux linked against a home-compiled instance of libevent, on which it depends. In particular, if any part of the install process requires root privileges, such as making a setuid binary, then things are likely not to work as expected.

#### [Oct 31, 2017] Unix as IDE: Debugging by Tom Ryder

##### Notable quotes:
##### "... Thanks to user samwyse for the .SUFFIXES suggestion in the comments. ..."
###### Feb 14, 2012 | sanctum.geek.nz

When unexpected behaviour is noticed in a program, GNU/Linux provides a wide variety of command-line tools for diagnosing problems. The use of gdb, the GNU debugger, and related tools like the lesser-known Perl debugger, will be familiar to those using IDEs to set breakpoints in their code and to examine program state as it runs. Other tools of interest are available however to observe in more detail how a program is interacting with a system and using its resources.

Debugging with gdb

You can use gdb in a very similar fashion to the built-in debuggers in modern IDEs like Eclipse and Visual Studio. If you are debugging a program that you've just compiled, it makes sense to compile it with its debugging symbols added to the binary, which you can do with a gcc call containing the -g option. If you're having problems with some code, it helps to also use -Wall to show any errors you may have otherwise missed:

$ gcc -g -Wall example.c -o example

The classic way to use gdb is as the shell for a running program compiled in C or C++, to allow you to inspect the program's state as it proceeds towards its crash.

$ gdb example
...
Reading symbols from /home/tom/example...done.
(gdb)

At the (gdb) prompt, you can type run to start the program, and it may provide you with more detailed information about the causes of errors such as segmentation faults, including the source file and line number at which the problem occurred. If you're able to compile the code with debugging symbols as above and inspect its running state like this, it makes figuring out the cause of a particular bug a lot easier.

(gdb) run
Starting program: /home/tom/gdb/example

Program received signal SIGSEGV, Segmentation fault.
0x000000000040072e in main () at example.c:43
43          printf("%d\n", *segfault);

After an error terminates the program within the (gdb) shell, you can type backtrace to see what the calling function was, which can include the specific parameters passed that may have something to do with what caused the crash.

(gdb) backtrace
#0  0x000000000040072e in main () at example.c:43

You can set breakpoints for gdb using the break command to halt the program's run if it reaches a matching line number or function call:

(gdb) break 42
Breakpoint 1 at 0x400722: file example.c, line 42.
(gdb) break malloc
Breakpoint 1 at 0x4004c0
(gdb) run
Starting program: /home/tom/gdb/example

Breakpoint 1, 0x00007ffff7df2310 in malloc () from /lib64/ld-linux-x86-64.so.2

Thereafter it's helpful to step through successive lines of code using step. You can repeat this, like any gdb command, by pressing Enter repeatedly to step through lines one at a time:

(gdb) step
Single stepping until exit from function _start,
which has no line number information.
0x00007ffff7a74db0 in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6

You can even attach gdb to a process that is already running, by finding the process ID and passing it to gdb:

$ pgrep example
1524
$ gdb -p 1524

This can be useful for redirecting streams of output for a task that is taking an unexpectedly long time to run.

Debugging with valgrind

The much newer valgrind can be used as a debugging tool in a similar way. There are many different checks and debugging methods this program can run, but one of the most useful is its Memcheck tool, which can be used to detect common memory errors like buffer overflows:

$ valgrind --leak-check=yes ./example
==29557== Memcheck, a memory error detector
==29557== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==29557== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==29557== Command: ./example
==29557==
==29557== Invalid read of size 1
==29557==    at 0x40072E: main (example.c:43)
==29557==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==29557==
...


The gdb and valgrind tools can be used together for a very thorough survey of a program's run. Zed Shaw's Learn C the Hard Way includes a really good introduction for elementary use of valgrind with a deliberately broken program.
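The sessions above can be reproduced end to end with a small test program. The following is a hypothetical reconstruction: the variable name and the null-pointer dereference are inferred from the gdb output shown earlier, not taken from the article's actual source.

```shell
# Hypothetical reconstruction of example.c, inferred from the gdb
# output above; the null dereference is an assumption.
cat > /tmp/crash.c <<'EOF'
#include <stdio.h>

int main(void)
{
    int *segfault = NULL;       /* deliberately invalid pointer */
    printf("%d\n", *segfault);  /* SIGSEGV on the dereference */
    return 0;
}
EOF

# Compile with debugging symbols, then (if gdb is installed) run it
# non-interactively to capture the crash and backtrace in one step.
gcc -g -Wall /tmp/crash.c -o /tmp/crash
if command -v gdb >/dev/null; then
    gdb -batch -ex run -ex backtrace /tmp/crash || true  # exit status after a crash varies
fi
```

Run interactively, gdb /tmp/crash followed by run and backtrace gives a session of the same shape as shown above.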

Tracing system and library calls with ltrace

The strace and ltrace tools are designed to allow watching system calls and library calls respectively for running programs, and logging them to the screen or, more usefully, to files.

You can have ltrace run and monitor the program you're interested in simply by providing the program as its sole parameter. It will then give you a listing of all the system and library calls the program makes until it exits.

$ ltrace ./example
__libc_start_main(0x4006ad, 1, 0x7fff9d7e5838, 0x400770, 0x400760
srand(4, 0x7fff9d7e5838, 0x7fff9d7e5848, 0, 0x7ff3aebde320) = 0
malloc(24) = 0x01070010
rand(0, 0x1070020, 0, 0x1070000, 0x7ff3aebdee60) = 0x754e7ddd
malloc(24) = 0x01070030
rand(0x7ff3aebdee60, 24, 0, 0x1070020, 0x7ff3aebdeec8) = 0x11265233
malloc(24) = 0x01070050
rand(0x7ff3aebdee60, 24, 0, 0x1070040, 0x7ff3aebdeec8) = 0x18799942
malloc(24) = 0x01070070
rand(0x7ff3aebdee60, 24, 0, 0x1070060, 0x7ff3aebdeec8) = 0x214a541e
malloc(24) = 0x01070090
rand(0x7ff3aebdee60, 24, 0, 0x1070080, 0x7ff3aebdeec8) = 0x1b6d90f3
malloc(24) = 0x010700b0
rand(0x7ff3aebdee60, 24, 0, 0x10700a0, 0x7ff3aebdeec8) = 0x2e19c419
malloc(24) = 0x010700d0
rand(0x7ff3aebdee60, 24, 0, 0x10700c0, 0x7ff3aebdeec8) = 0x35bc1a99
malloc(24) = 0x010700f0
rand(0x7ff3aebdee60, 24, 0, 0x10700e0, 0x7ff3aebdeec8) = 0x53b8d61b
malloc(24) = 0x01070110
rand(0x7ff3aebdee60, 24, 0, 0x1070100, 0x7ff3aebdeec8) = 0x18e0f924
malloc(24) = 0x01070130
rand(0x7ff3aebdee60, 24, 0, 0x1070120, 0x7ff3aebdeec8) = 0x27a51979
--- SIGSEGV (Segmentation fault) ---
+++ killed by SIGSEGV +++

You can also attach it to a process that's already running:

$ pgrep example
5138
$ ltrace -p 5138

Generally, there's quite a bit more than a couple of screenfuls of text generated by this, so it's helpful to use the -o option to specify an output file to which to log the calls:

$ ltrace -o example.ltrace ./example


You can then view this trace in a text editor like Vim, which includes syntax highlighting for ltrace output:

Vim session with ltrace output

I've found ltrace very useful for debugging problems where I suspect improper linking may be at fault, or the absence of some needed resource in a chroot environment, since its output shows the search for libraries at dynamic linking time, the opening of configuration files in /etc , and the use of devices like /dev/random or /dev/zero .

Tracking open files with lsof

If you want to view what devices, files, or streams a running process has open, you can do that with lsof :

$ pgrep example
5051
$ lsof -p 5051


For example, the first few lines of lsof output for the apache2 process running on my home server are:

# lsof -p 30779
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
apache2 30779 root  cwd    DIR    8,1     4096       2 /
apache2 30779 root  rtd    DIR    8,1     4096       2 /
apache2 30779 root  txt    REG    8,1   485384  990111 /usr/lib/apache2/mpm-prefork/apache2
apache2 30779 root  DEL    REG    8,1          1087891 /lib/x86_64-linux-gnu/libgcc_s.so.1
apache2 30779 root  mem    REG    8,1    35216 1079715 /usr/lib/php5/20090626/pdo_mysql.so
...


Interestingly, another way to list the open files for a process is to check the corresponding entry for the process in the dynamic /proc directory:

# ls -l /proc/30779/fd


This can be very useful in confusing situations with file locks, or identifying whether a process is holding open files that it needn't.
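The /proc approach can be tried safely with a throwaway process; a minimal sketch, where the background tail is just a stand-in for any long-running program:

```shell
# Start a long-running process that we control, then inspect its
# open file descriptors directly via the /proc filesystem.
tail -f /dev/null &
pid=$!

ls -l /proc/$pid/fd   # each entry is a symlink to the open file or device

kill $pid             # clean up the stand-in process
```

The symlink targets make it immediately obvious which files, pipes, and sockets the process is holding open.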

Viewing memory allocation with pmap

As a final debugging tip, you can view the memory allocations for a particular process with pmap :

# pmap 30779
30779:   /usr/sbin/apache2 -k start
00007fdb3883e000     84K r-x--  /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted)
00007fdb38853000   2048K -----  /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted)
00007fdb38a53000      4K rw---  /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted)
00007fdb38a54000      4K -----    [ anon ]
00007fdb38a55000   8192K rw---    [ anon ]
00007fdb392e5000     28K r-x--  /usr/lib/php5/20090626/pdo_mysql.so
00007fdb392ec000   2048K -----  /usr/lib/php5/20090626/pdo_mysql.so
00007fdb394ec000      4K r----  /usr/lib/php5/20090626/pdo_mysql.so
00007fdb394ed000      4K rw---  /usr/lib/php5/20090626/pdo_mysql.so
...
total           152520K


This will show you what libraries a running process is using, including those in shared memory. The total given at the bottom is a little misleading, as for loaded shared libraries the running process is not necessarily the only one using the memory; determining "actual" memory usage for a given process is a little more in-depth than it might seem once shared libraries are added to the picture.

#### Unix as IDE: Building by Tom Ryder

###### Feb 13, 2012 | sanctum.geek.nz

Because compiling projects can be such a complicated and repetitive process, a good IDE provides a means to abstract, simplify, and even automate software builds. Unix and its descendants accomplish this process with a Makefile , a prescribed recipe in a standard format for generating executable files from source and object files, taking account of changes so as to rebuild only what's necessary and prevent costly recompilation.

One interesting thing to note about make is that while it's generally used for compiled software build automation and has many shortcuts to that effect, it can effectively be used for any situation in which it's required to generate one set of files from another. One possible use is to generate web-friendly optimised graphics from source files for deployment to a website; another is generating static HTML pages from code, rather than generating pages on the fly. It's on the basis of this more flexible understanding of software "building" that modern takes on the tool like Ruby's rake have become popular, automating the general tasks for producing and installing code and files of all kinds.

Anatomy of a Makefile

The general pattern of a Makefile is a list of variables and a list of targets , and the sources and/or objects used to provide them. Targets may not necessarily be linked binaries; they could also constitute actions to perform using the generated files, such as install to instate built files into the system, and clean to remove built files from the source tree.

It's this flexibility of targets that enables make to automate any sort of task relevant to assembling a production build of software; not just the typical parsing, preprocessing, compiling proper and linking steps performed by the compiler, but also running tests ( make test ), compiling documentation source files into one or more appropriate formats, or automating deployment of code into production systems, for example, uploading to a website via a git push or similar content-tracking method.

An example Makefile for a simple software project might look something like the below:

all: example

example: main.o example.o library.o
	gcc main.o example.o library.o -o example

main.o: main.c
	gcc -c main.c -o main.o

example.o: example.c
	gcc -c example.c -o example.o

library.o: library.c
	gcc -c library.c -o library.o

clean:
	rm *.o example

install: example
	cp example /usr/bin


The above isn't the best possible Makefile for this project, but it provides a means to build and install a linked binary simply by typing make . Each target definition contains a list of the dependencies required for the command that follows; this means that the definitions can appear in any order, and the call to make will run the relevant commands in the appropriate order.
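The dependency checking described above can be watched in isolation with a toy rule; a self-contained sketch (the file names and /tmp location are arbitrary):

```shell
# One rule: out.txt is rebuilt from in.txt only when in.txt is newer.
mkdir -p /tmp/makedemo && cd /tmp/makedemo
printf 'out.txt: in.txt\n\tcp in.txt out.txt\n' > Makefile

echo first > in.txt
make            # out.txt doesn't exist yet, so the recipe runs
make            # nothing to do: out.txt is already up to date

sleep 1         # guarantee a newer timestamp on coarse filesystems
echo second > in.txt
make            # in.txt is now newer, so the recipe runs again
```

The second invocation reports that the target is up to date without running cp at all; on a real project this is exactly what avoids recompiling unchanged source files.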

Much of the above is needlessly verbose or repetitive; for example, if an object file is built directly from a single C file of the same name, then we don't need to include the target at all, and make will sort things out for us. Similarly, it would make sense to put some of the more repeated calls into variables so that we would not have to change them individually if our choice of compiler or flags changed. A more concise version might look like the following:

CC = gcc
OBJECTS = main.o example.o library.o
BINARY = example

all: example

example: $(OBJECTS)
	$(CC) $(OBJECTS) -o $(BINARY)

clean:
	rm -f $(BINARY) $(OBJECTS)

install: example
	cp $(BINARY) /usr/bin

More general uses of make

In the interests of automation, however, it's instructive to think of this a bit more generally than just code compilation and linking. An example could be a simple web project involving deploying PHP to a live webserver. This is not normally a task people associate with the use of make , but the principles are the same; with the source in place and ready to go, we have certain targets to meet for the build.

PHP files don't require compilation, of course, but web assets often do. An example that will be familiar to web developers is the generation of scaled and optimised raster images from vector source files, for deployment to the web. You keep and version your original source file, and when it comes time to deploy, you generate a web-friendly version of it.

Let's assume for this particular project that there's a set of four icons used throughout the site, sized to 64 by 64 pixels. We have the source files to hand in SVG vector format, safely tucked away in version control, and now need to generate the smaller bitmaps for the site, ready for deployment. We could therefore define a target icons , set the dependencies, and type out the commands to perform. This is where command-line tools in Unix really begin to shine in use with Makefile syntax:

icons: create.png read.png update.png delete.png

create.png: create.svg
	convert create.svg create.raw.png && \
	pngcrush create.raw.png create.png

read.png: read.svg
	convert read.svg read.raw.png && \
	pngcrush read.raw.png read.png

update.png: update.svg
	convert update.svg update.raw.png && \
	pngcrush update.raw.png update.png

delete.png: delete.svg
	convert delete.svg delete.raw.png && \
	pngcrush delete.raw.png delete.png

With the above done, typing make icons will go through each of the source icon files, convert them from SVG to PNG using ImageMagick's convert , and optimise them with pngcrush , to produce images ready for upload.
A similar approach can be used for generating help files in various forms, for example, generating HTML files from Markdown source:

docs: README.html credits.html

README.html: README.md
	markdown README.md > README.html

credits.html: credits.md
	markdown credits.md > credits.html

And perhaps finally deploying a website with git push web , but only after the icons are rasterized and the documents converted:

deploy: icons docs
	git push web

For a more compact and abstract formula for turning a file of one suffix into another, you can use the .SUFFIXES pragma to define these using special symbols. The code for converting icons could look like this; in this case, $< refers to the source file, $* to the filename with no extension, and $@ to the target.

icons: create.png read.png update.png delete.png

.SUFFIXES: .svg .png

.svg.png:
	convert $< $*.raw.png && \
	pngcrush $*.raw.png $@

Tools for building a Makefile

A variety of tools exist in the GNU Autotools toolchain for the construction of configure scripts and make files for larger software projects at a higher level, in particular autoconf and automake . The use of these tools allows generating configure scripts and make files covering very large source bases, reducing the necessity of building otherwise extensive makefiles manually, and automating steps taken to ensure the source remains compatible and compilable on a variety of operating systems.

Covering this complex process would be a series of posts in its own right, and is out of scope of this survey.

Thanks to user samwyse for the .SUFFIXES suggestion in the comments.

#### Unix as IDE: Compiling by Tom Ryder

###### Feb 12, 2012 | sanctum.geek.nz

There are a lot of tools available for compiling and interpreting code on the Unix platform, and they tend to be used in different ways. However, conceptually many of the steps are the same. Here I'll discuss compiling C code with gcc from the GNU Compiler Collection, and briefly the use of perl as an example of an interpreter.

GCC

GCC is a very mature GPL-licensed collection of compilers, perhaps best-known for working with C and C++ programs. Its free software license and near ubiquity on free Unix-like systems like GNU/Linux and BSD has made it enduringly popular for these purposes, though more modern alternatives are available in compilers using the LLVM infrastructure, such as Clang .

The frontend binaries for the GNU Compiler Collection are best thought of less as a set of complete compilers in their own right, and more as drivers for a set of discrete programming tools, performing parsing, compiling, and linking, among other steps. This means that while you can use GCC with a relatively simple command line to compile straight from C sources to a working binary, you can also inspect in more detail the steps it takes along the way and tweak them accordingly.

I won't be discussing the use of make files here, though you'll almost certainly be wanting them for any C project of more than one file; that will be discussed in the next article on build automation tools.

Compiling and assembling object code

You can compile object code from a C source file like so:

$ gcc -c example.c -o example.o

Assuming it's a valid C program, this will generate an unlinked binary object file called example.o in the current directory, or tell you the reasons it can't. You can inspect its assembler contents with the objdump tool:

$ objdump -D example.o


Alternatively, you can get gcc to output the appropriate assembly code for the object directly with the -S parameter:

$ gcc -c -S example.c -o example.s

This kind of assembly output can be particularly instructive, or at least interesting, when printed inline with the source code itself, which you can do with:

$ gcc -c -g -Wa,-a,-ad example.c > example.lst

Preprocessor

The C preprocessor cpp is generally used to include header files and define macros, among other things. It's a normal part of gcc compilation, but you can view the C code it generates by invoking cpp directly:

$ cpp example.c

This will print out the complete code as it would be compiled, with includes and relevant macros applied.

Linking objects

One or more objects can be linked into appropriate binaries like so:

$ gcc example.o -o example


In this example, GCC is not doing much more than abstracting a call to ld , the GNU linker. The command produces an executable binary called example .

Compiling, assembling, and linking

All of the above can be done in one step with:

$ gcc example.c -o example

This is a little simpler, but compiling objects independently turns out to have some practical performance benefits in not recompiling code unnecessarily, which I'll discuss in the next article.

Including and linking

Directories containing C header files can be added explicitly to the compiler's search path with the -I parameter:

$ gcc -I/usr/include/somelib example.c -o example


Similarly, if the code needs to be dynamically linked against a compiled system library available in common locations like /lib or /usr/lib , such as ncurses , that can be included with the -l parameter:

$ gcc -lncurses example.c -o example

If you have a lot of necessary inclusions and links in your compilation process, it makes sense to put these into environment variables:

$ export CFLAGS=-I/usr/include/somelib
$ export CLIBS=-lncurses
$ gcc $CFLAGS $CLIBS example.c -o example


This very common step is another thing that a Makefile is designed to abstract away for you.

Compilation plan

To inspect in more detail what gcc is doing with any call, you can add the  -v switch to prompt it to print its compilation plan on the standard error stream:

$ gcc -v -c example.c -o example.o

If you don't want it to actually generate object files or linked binaries, it's sometimes tidier to use -### instead:

$ gcc -### -c example.c -o example.o


This is mostly instructive to see what steps the gcc binary is abstracting away for you, but in specific cases it can be useful to identify steps the compiler is taking that you may not necessarily want it to.

More verbose error checking

You can add the -Wall and/or -pedantic options to the gcc call to prompt it to warn you about things that may not necessarily be errors, but could be:

$ gcc -Wall -pedantic -c example.c -o example.o

This is good for including in your Makefile or in your makeprg definition in Vim, as it works well with the quickfix window discussed in the previous article and will enable you to write more readable, compatible, and less error-prone code, as it warns you more extensively about errors.

Profiling compilation time

You can pass the flag -time to gcc to generate output showing how long each step is taking:

$ gcc -time -c example.c -o example.o

Optimisation

You can pass generic optimisation options to gcc to make it attempt to build more efficient object files and linked binaries, at the expense of compilation time. I find -O2 is usually a happy medium for code going into production:

• gcc -O1
• gcc -O2
• gcc -O3

Like any other Bash command, all of this can be called from within Vim by:

:!gcc % -o example

Interpreters

The approach to interpreted code on Unix-like systems is very different. In these examples I'll use Perl, but most of these principles will be applicable to interpreted Python or Ruby code, for example.

Inline

You can pass a string of Perl code directly to the interpreter in any one of the following ways, in this case printing the single line "Hello, world." to the screen, with a linebreak following. The first is perhaps the tidiest and most standard way to work with Perl; the second uses a here-string, and the third a classic Unix shell pipe.

$ perl -e 'print "Hello world.\n";'
$ perl <<<'print "Hello world.\n";'
$ echo 'print "Hello world.\n";' | perl

Of course, it's more typical to keep the code in a file, which can be run directly:

$ perl hello.pl


In either case, you can check the syntax of the code without actually running it with the -c switch:

$ perl -c hello.pl

But to use the script as a logical binary , so you can invoke it directly without knowing or caring what the script is, you can add a special first line to the file called the "shebang" that does some magic to specify the interpreter through which the file should be run.

#!/usr/bin/env perl
print "Hello, world.\n";

The script then needs to be made executable with a chmod call. It's also good practice to rename it to remove the extension, since it is now taking the shape of a logical binary:

$ mv hello{.pl,}
$ chmod +x hello

And it can thereafter be invoked directly, as if it were a compiled binary:

$ ./hello


This works so transparently that many of the common utilities on modern GNU/Linux systems, such as the adduser frontend to useradd , are actually Perl or even Python scripts.

In the next post, I'll describe the use of make for defining and automating project builds in a manner comparable to IDEs, with a nod to newer takes on the same idea with Ruby's rake .

#### Unix as IDE: Editing by Tom Ryder

###### Feb 11, 2012 | sanctum.geek.nz

The text editor is the core tool for any programmer, which is why the choice of editor evokes such tongue-in-cheek zealotry in debate among programmers. Unix is the operating system most strongly linked with two enduring favourites, Emacs and Vi, and their modern versions in GNU Emacs and Vim, two editors with very different editing philosophies but comparable power.

Being a Vim heretic myself, here I'll discuss the indispensable features of Vim for programming, and in particular the use of shell tools called from within Vim to complement the editor's built-in functionality. Some of the principles discussed here will be applicable to those using Emacs as well, but probably not for underpowered editors like Nano.

This will be a very general survey, as Vim's toolset for programmers is enormous , and it'll still end up being quite long. I'll focus on the essentials and the things I feel are most helpful, and try to provide links to articles with a more comprehensive treatment of the topic. Don't forget that Vim's :help has surprised many people new to the editor with its high quality and usefulness.

Filetype detection

Vim has built-in settings to adjust its behaviour, in particular its syntax highlighting, based on the filetype being loaded, which it generally detects reliably. In particular, this allows you to set an indenting style conformant with the way a particular language is usually written. This should be one of the first things in your .vimrc file.

if has("autocmd")
filetype on
filetype indent on
filetype plugin on
endif

Syntax highlighting

Even if you're working with only a 16-color terminal, include the following in your .vimrc if you haven't already:

syntax on


The colorschemes available with a default 16-color terminal are not pretty, largely by necessity, but they do the job, and for most languages syntax definition files are available that work very well. There's a tremendous array of colorschemes available, and it's not hard to tweak them to suit, or even to write your own. Using a 256-color terminal or gVim will give you more options. Good syntax highlighting files will show you definite syntax errors with a glaring red background.

Line numbering

To turn line numbers on if you use them a lot in your traditional IDE:

set number


You might like to try this as well, if you have at least Vim 7.3 and are keen to try numbering lines relative to the current line rather than absolutely:

set relativenumber

Tags files

Vim works very well with the output from the ctags utility. This allows you to search quickly for all uses of a particular identifier throughout the project, or to navigate straight to the declaration of a variable from one of its uses, regardless of whether it's in the same file. For large C projects in multiple files this can save huge amounts of otherwise wasted time, and is probably Vim's best answer to similar features in mainstream IDEs.

You can run :!ctags -R on the root directory of projects in many popular languages to generate a tags file filled with definitions and locations for identifiers throughout your project. Once a tags file for your project is available, you can search for uses of an appropriate tag throughout the project like so:

:tag someClass


The commands :tn and :tp will allow you to iterate through successive uses of the tag elsewhere in the project. The built-in tags functionality for this already covers most of the bases you'll probably need, but for features such as a tag list window, you could try installing the very popular Taglist plugin . Tim Pope's Unimpaired plugin also contains a couple of useful relevant mappings.

Calling external programs

Until 2017, there were three major methods of calling external programs during a Vim session:

• :!<command> -- Useful for issuing commands from within a Vim context, particularly in cases where you intend to record output in a buffer.
• :shell -- Drop to a shell as a subprocess of Vim. Good for interactive commands.
• Ctrl-Z -- Suspend Vim and issue commands from the shell that called it.

Since 2017, Vim 8.x now includes a :terminal command to bring up a terminal emulator buffer in a window. This seems to work better than previous plugin-based attempts at doing this, such as Conque . For the moment I still strongly recommend using one of the older methods, all of which also work in other vi -type editors.

Lint programs and syntax checkers

Checking syntax or compiling with an external program call (e.g. perl -c ,  gcc ) is one of the calls that's good to make from within the editor using :! commands. If you were editing a Perl file, you could run this like so:

:!perl -c %

/home/tom/project/test.pl syntax OK

Press Enter or type command to continue


The % symbol is shorthand for the file loaded in the current buffer. Running this prints the output of the command, if any, below the command line. If you wanted to call this check often, you could perhaps map it as a command, or even a key combination in your .vimrc file. In this case, we define a command :PerlLint which can be called from normal mode with \l :

command PerlLint !perl -c %
nnoremap <leader>l :PerlLint<CR>


For a lot of languages there's an even better way to do this, though, which allows us to capitalise on Vim's built-in quickfix window. We can do this by setting an appropriate makeprg for the filetype, in this case including a module that provides us with output that Vim can use for its quicklist, and a definition for its two formats:

:set makeprg=perl\ -c\ -MVi::QuickFix\ %
:set errorformat+=%m\ at\ %f\ line\ %l\.
:set errorformat+=%m\ at\ %f\ line\ %l


You may need to install this module first via CPAN, or the Debian package libvi-quickfix-perl . This done, you can type :make after saving the file to check its syntax, and if errors are found, you can open the quickfix window with :copen to inspect the errors, and use :cn and :cp to jump to them within the buffer.

Vim quickfix working on a Perl file

This also works for output from gcc , and pretty much any other compiler or syntax checker you might want to use that includes filenames, line numbers, and error strings in its error output. It's even possible to do this with web-focused languages like PHP , and for tools like JSLint for JavaScript . There's also an excellent plugin named Syntastic that does something similar.

Reading output from other commands

You can use :r! to call commands and paste their output directly into the buffer with which you're working. For example, to pull a quick directory listing for the current folder into the buffer, you could type:

:r!ls


This doesn't just work for commands, of course; you can simply read in other files this way with just :r , like public keys or your own custom boilerplate:

:r ~/.ssh/id_rsa.pub

Filtering output through other commands

You can extend this to actually filter text in the buffer through external commands, perhaps selected by a range or visual mode, and replace it with the command's output. While Vim's visual block mode is great for working with columnar data, it's very often helpful to bust out tools like column , cut , sort , or awk .

For example, you could sort the entire file in reverse by the second column by typing:

:%!sort -k2,2r


You could print only the third column of some selected text where the line matches the pattern /vim/ with:

:'<,'>!awk '/vim/ {print $3}'

You could arrange keywords from lines 1 to 10 in nicely formatted columns like:

:1,10!column -t

Really any kind of text filter or command can be used like this in Vim, a simple interoperability feature that expands what the editor can do by an order of magnitude. It effectively makes the Vim buffer into a text stream, which is a language that all of these classic tools speak. There is a lot more detail on this in my "Shell from Vi" post.

Built-in alternatives

It's worth noting that for really common operations like sorting and searching, Vim has built-in methods in :sort and :grep , which can be helpful if you're stuck using Vim on Windows, but don't have nearly the adaptability of shell calls.

Diffing

Vim has a diffing mode, vimdiff , which allows you to not only view the differences between different versions of a file, but also to resolve conflicts via a three-way merge and to replace differences to and fro with commands like :diffput and :diffget for ranges of text. You can call vimdiff from the command line directly with at least two files to compare, like so:

$ vimdiff file-v1.c file-v2.c


Vim diffing a .vimrc file

Version control

You can call version control methods directly from within Vim, which is probably all you need most of the time. It's useful to remember here that % is always a shortcut for the buffer's current file:

:!svn status
:!git commit -a


Recently a clear winner for Git functionality with Vim has come up with Tim Pope's Fugitive , which I highly recommend to anyone doing Git development with Vim. There'll be a more comprehensive treatment of version control's basis and history in Unix in Part 7 of this series.

The difference

Part of the reason Vim is thought of as a toy or relic by many programmers used to GUI-based IDEs is that it's seen as just a tool for editing files on servers, rather than as a very capable editing component of the shell in its own right. Because its built-in features are so composable with external tools on Unix-friendly systems, it becomes a text-editing powerhouse that sometimes surprises even experienced users.

#### [Oct 31, 2017] Understanding Shared Libraries in Linux by Aaron Kili

###### Oct 30, 2017
In programming, a library is an assortment of pre-compiled pieces of code that can be reused in a program. Libraries simplify life for programmers, in that they provide reusable functions, routines, classes, data structures and so on (written by another programmer), which they can use in their programs.

For instance, if you are building an application that needs to perform math operations, you don't have to create a new math function for that, you can simply use existing functions in libraries for that programming language.

Examples of libraries in Linux include libc (the standard C library) or glibc (GNU version of the standard C library), libcurl (multiprotocol file transfer library), libcrypt (library used for encryption, hashing, and encoding in C) and many more.

Linux supports two classes of libraries, namely:

• Static libraries – are bound to a program statically at compile time.
• Dynamic or shared libraries – are loaded into memory when a program is launched, with binding occurring at run time.

Dynamic or shared libraries can further be categorized into:

• Dynamically linked libraries – here a program is linked with the shared library and the kernel loads the library (in case it's not in memory) upon execution.
• Dynamically loaded libraries – the program takes full control, loading the library itself and calling its functions (for example via the dlopen interface).
Shared Library Naming Conventions

Shared libraries are named in two ways: the library name (a.k.a. the soname ) and a "filename" (the absolute path to the file which stores the library code).

For example, the soname for libc is libc.so.6 : where lib is the prefix, c is a descriptive name, so means shared object, and 6 is the version. And its filename is /lib64/libc.so.6 . Note that the soname is actually a symbolic link to the filename.

Locating Shared Libraries in Linux

Shared libraries are loaded by the ld.so and ld-linux.so loader programs (ld.so.x and ld-linux.so.x, where x is the version). In Linux, /lib/ld-linux.so.x searches for and loads all shared libraries used by a program.

A program can call a library using its library name or filename, and the library path lists the directories where libraries can be found in the filesystem. By default, libraries are located in /usr/local/lib, /usr/local/lib64, /usr/lib and /usr/lib64; system startup libraries are in /lib and /lib64. Programmers can, however, install libraries in custom locations.

The library path can be defined in /etc/ld.so.conf file which you can edit with a command line editor.

# vi /etc/ld.so.conf


The line(s) in this file instruct ldconfig to also read the files in /etc/ld.so.conf.d . This way, package maintainers or programmers can add their custom library directories to the search list.

If you look into the /etc/ld.so.conf.d directory, you'll see .conf files for some common packages (kernel and postgresql in this case):

# ls /etc/ld.so.conf.d
kernel-2.6.32-642.6.2.el6.x86_64.conf   kernel-2.6.32-696.6.3.el6.x86_64.conf  postgresql-pgdg-libs.conf


If you take a look at such a file, for example mariadb-x86_64.conf on a system with MariaDB installed, you will see an absolute path to the package's libraries.

# cat mariadb-x86_64.conf
/usr/lib64/mysql


The method above sets the library path permanently. To set it temporarily, use the LD_LIBRARY_PATH environment variable on the command line. If you want to keep the changes permanent, then add this line in the shell initialization file /etc/profile (global) or ~/.profile (user specific).

# export LD_LIBRARY_PATH=/path/to/library/file

Managing Shared Libraries in Linux

Let us now look at how to deal with shared libraries. To get a list of all shared library dependencies for a binary file, you can use the ldd utility . The output of ldd is in the form:

library name =>  filename (some hexadecimal value)
OR
filename (some hexadecimal value)  #this is shown when library name can't be read


This command shows all shared library dependencies for the ls command .

# ldd /usr/bin/ls
OR
# ldd /bin/ls

##### Sample Output
###### Oct 31, 2017 | www.tecmint.com
   linux-vdso.so.1 =>  (0x00007ffebf9c2000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003b71e00000)
librt.so.1 => /lib64/librt.so.1 (0x0000003b71600000)
libcap.so.2 => /lib64/libcap.so.2 (0x0000003b76a00000)
libacl.so.1 => /lib64/libacl.so.1 (0x0000003b75e00000)
libc.so.6 => /lib64/libc.so.6 (0x0000003b70600000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000003b70a00000)
/lib64/ld-linux-x86-64.so.2 (0x0000561abfc09000)
libattr.so.1 => /lib64/libattr.so.1 (0x0000003b75600000)


Because shared libraries can exist in many different directories, searching all of them every time a program is launched would be very inefficient: this is one of the likely disadvantages of dynamic libraries. Therefore a caching mechanism is employed, performed by the program ldconfig .

By default, ldconfig reads the content of /etc/ld.so.conf , creates the appropriate symbolic links in the dynamic link directories, and then writes a cache to /etc/ld.so.cache which is then easily used by other programs.

This is very important especially when you have just installed new shared libraries, created your own, or created new library directories. You need to run the ldconfig command for the changes to take effect.

# ldconfig
OR
# ldconfig -v   #shows the files and directories it works with


After creating your shared library, you need to install it. You can either move it into one of the standard directories mentioned above and run the ldconfig command, or run the following command to create symbolic links from the soname to the filename:

# ldconfig -n /path/to/your/shared/libraries


To get started with creating your own libraries, check out this guide from The Linux Documentation Project (TLDP).

That's all for now! In this article, we gave you an introduction to libraries, explained shared libraries, and showed how to manage them in Linux. If you have any queries or additional ideas to share, use the comment form below.

#### [Oct 26, 2017] Amazon.com Customer reviews Extreme Programming Explained Embrace Change

###### Oct 26, 2017 | www.amazon.com
2.0 out of 5 stars

By Mohammad B. Abdulfatah on February 10, 2003

Programming Malpractice Explained: Justifying Chaos

To fairly review this book, one must distinguish between the methodology it presents and the actual presentation. As to the presentation, the author attempts to win the reader over with emotional persuasion and pep talk rather than with facts and hard evidence. Stories of childhood and comradeship don't classify as convincing facts to me. A single case study-the C3 project-is often referred to, but with no specific information (do note that the project was cancelled by the client after staying in development for far too long).
As to the method itself, it basically boils down to four core practices:
1. Always have a customer available on site.
2. Unit test before you code.
3. Program in pairs.
4. Forfeit detailed design in favor of incremental, daily releases and refactoring.
If you do the above, and you have excellent staff on your hands, then the book promises that you'll reap the benefits of faster development, less overtime, and happier customers. Of course, the book fails to point out that if your staff is all highly qualified people, then the project is likely to succeed no matter what methodology you use. I'm sure that anyone who has worked in the software industry for some time has noticed the sad state that most computer professionals are in nowadays.
However, assuming that you have all the topnotch developers that you desire, the outlined methodology is almost impossible to apply in real world scenarios. Having a customer always available on site would mean that the customer in question is probably a small, expendable fish in his organization and is unlikely to have any useful knowledge of its business practices. Unit testing code before it is written means that one would have to have a mental picture of what one is going to write before writing it, which is difficult without upfront design. And maintaining such tests as the code changes would be a nightmare. Programming in pairs all the time would assume that your topnotch developers are also sociable creatures, which is rarely the case, and even if they were, no one would be able to justify the practice in terms of productivity. I won't discuss why I think that abandoning upfront design is a bad practice; the whole idea is too ridiculous to debate.
Both book and methodology will attract fledgling developers with their promise of hacking as an acceptable software practice and a development universe revolving around the programmer. It's a cult, not a methodology, where the followers shall find salvation and 40-hour working weeks. Experience is a great teacher, but only a fool would learn from it alone. Listen to what the opponents have to say before embracing change, and don't forget to take the proverbial grain of salt.
Two stars out of five for the presentation for being courageous and attempting to defy the standard practices of the industry. Two stars for the methodology itself, because it underlines several common sense practices that are very useful once practiced without the extremity.

By wiredweird HALL OF FAME TOP 1000 REVIEWER on May 24, 2004
eXtreme buzzwording

Maybe it's an interesting idea, but it's just not ready for prime time.
Parts of Kent's recommended practice - including aggressive testing and short integration cycle - make a lot of sense. I've shared the same beliefs for years, but it was good to see them clarified and codified. I really have changed some of my practice after reading this and books like this.
I have two broad kinds of problem with this dogma, though. First is the near-abolition of documentation. I can't defend 2000 page specs for typical kinds of development. On the other hand, declaring that the test suite is the spec doesn't do it for me either. The test suite is code, written for machine interpretation. Much too often, it is not written for human interpretation. Based on the way I see most code written, it would be a nightmare to reverse engineer the human meaning out of any non-trivial test code. Some systematic way of ensuring human intelligibility in the code, traceable to specific "stories" (because "requirements" are part of the bad old way), would give me a lot more confidence in the approach.
The second is the dictatorial social engineering that eXtremity mandates. I've actually tried the pair programming - what a disaster. The less said the better, except that my experience did not actually destroy any professional relationships. I've also worked with people who felt that their slightest whim was adequate reason to interfere with my work. That's what Beck institutionalizes by saying that any request made of me by anyone on the team must be granted. It puts me completely at the mercy of anyone walking by. The requisite bullpen physical environment doesn't work for me either. I find that the visual and auditory distraction make intense concentration impossible.
I find revival tent spirit of the eXtremists very off-putting. If something works, it works for reasons, not as a matter of faith. I find much too much eXhortation to believe, to go ahead and leap in, so that I will eXperience the wonderfulness for myself. Isn't that what the evangelist on the subway platform keeps saying? Beck does acknowledge unbelievers like me, but requires their exile in order to maintain the group-think of the X-cult.
Beck's last chapters note a number of exceptions and special cases where eXtremism may not work - actually, most of the projects I've ever encountered.
There certainly is good in the eXtreme practice. I look to future authors to tease that good out from the positively destructive threads that I see interwoven.

By A customer on May 2, 2004
A work of fiction

The book presents extreme programming. It is divided into three parts:
(1) The problem
(2) The solution
(3) Implementing XP.
The problem, as presented by the author, is that requirements change but current methodologies are not agile enough to cope with this. This results in customer being unhappy. The solution is to embrace change and to allow the requirements to be changed. This is done by choosing the simplest solution, releasing frequently, refactoring with the security of unit tests.
The basic assumption which underscores the approach is that the cost of change is not exponential but reaches a flat asymptote. If this is not the case, allowing change late in the project would be disastrous. The author does not provide data to back his point of view. On the other hand there is a lot of data against a constant cost of change (see for example discussion of cost in Code Complete). The lack of reasonable argumentation is an irremediable flaw in the book. Without some supportive data it is impossible to believe the basic assumption, nor the rest of the book. This is all the more important since the only project that the author refers to was cancelled before full completion.
Many other parts of the book are unconvincing. The author presents several XP practices. Some of them are very useful. For example unit tests are a good practice. They are however better treated elsewhere (e.g., Code Complete chapter on unit test). On the other hand some practices seem overkill. Pair programming is one of them. I have tried it and found it useful to generate ideas while prototyping. For writing production code, I find that a quiet environment is by far the best (see Peopleware for supportive data). Again the author does not provide any data to support his point.
This book suggests an approach aiming at changing software engineering practices. However the lack of supportive data makes it a work of fiction.
I would suggest reading Code Complete for code level advice or Rapid Development for management level advice.

By A customer on November 14, 2002
Not Software Engineering.

Any engineering discipline is based on solid reasoning and logic, not on blind faith. Unfortunately, most of this book attempts to convince you that Extreme Programming is better based on the author's experiences. A lot of the principles are counterintuitive, and the author exhorts you to just try them out and get enlightened. I'm sorry, but these kinds of things belong in infomercials, not in s/w engineering.
The part about "code is the documentation" is the scariest part. It's true that keeping the documentation up to date is tough on any software project, but to do away with documentation is the most ridiculous thing I have heard. It's like telling people to cut off their noses to avoid colds.
Yes we are always in search of a better software process. Let me tell you that this book won't lead you there.

By Philip K. Ronzone on November 24, 2000
The "gossip magazine diet plans" style of programming.

This book reminds me of the "gossip magazine diet plans", you know, the vinegar and honey diet, or the fat-burner 2000 pill diet etc. Occasionally, people actually lose weight on those diets, but, only because they've managed to eat less or exercise more. The diet plans themselves are worthless. XP is the same - it may sometimes help people program better, but only because they are (unintentionally) doing something different. People look at things like XP because, like dieters, they see a need for change. Overall, the book is a decently written "fad diet", with ideas that are just as worthless.

By A customer on August 11, 2003
Hackers! Salvation is nigh!!

It's interesting to see the phenomenon of Extreme Programming happening in the dawn of the 21st century. I suppose historians can explain such a reaction as a truly conservative movement. Of course, serious software engineering practice is hard. Heck, documentation is a pain in the neck. And what programmer wouldn't love to have divine inspiration just before starting to write the latest web application and so enlightened by the Almighty, write the whole thing in one go, as if by magic? No design, no documentation, you and me as a pair, and the customer too. Sounds like a hacker's dream with "Imagine" as the soundtrack (sorry, John).
The Software Engineering struggle is over 50 years old and it's only logical to expect some resistance, from time to time. In the XP case, the resistance comes in one of its worst forms: evangelism. A fundamentalist cult, with very little substance, no proof of any kind, but then again if you don't have faith you won't be granted the gift of the mystic revelation. It's Gnosticism for Geeks.
Take it with a pinch of salt.. well, maybe a sack of salt. If you can see through the B.S. that sells millions of dollars in books, consultancy fees, lectures, etc, you will recognise some common-sense ideas that are better explained, explored and detailed elsewhere.

By Ian K. VINE VOICE on February 27, 2015
Long have I hated this book

Kent is an excellent writer. He does an excellent job of presenting an approach to software development that is misguided for anything but user interface code. The argument that user interface code must be gotten into the hands of users to get feedback is used to suggest that complex system code should not be "designed up front". This is simply wrong. For example, if you are going to deploy an application in the Amazon Cloud that you want to scale, you had better have some idea of how this is going to happen. Simply waiting until your application falls over and fails is not an acceptable approach.

One of the things I despise the most about the software development culture is the mindless adoption of fads. Extreme programming has been adopted by some organizations like a religious dogma.

Engineering large software systems is one of the most difficult things that humans do. There are no silver bullets and there are no dogmatic solutions that will make the difficult simple.

By Anil Philip on March 24, 2005

Maybe I'm too cynical because I never got to work for the successful, whiz-kid companies; Maybe this book wasn't written for me!

This book reminds me of Jacobsen's "Use Cases" book of the 1990s. 'Use Cases' was all the rage, but after several years we slowly learned the truth: Use Cases do not deal with the architecture - a necessary and good foundation for any piece of software.

Similarly, this book seems to be spotlighting Testing and taking it to extremes.

'the test plan is the design doc'

Not true. The design doc encapsulates wisdom and insight; a picture that accurately describes the interactions of the lower-level software components is worth a thousand lines of code-reading.

Also present is an evangelistic fervor that reminds me of the rah-rah eighties' bestseller, "In Search Of Excellence" by Peters and Waterman. (Many people have since noted that most of the spotlighted companies of that book are bankrupt twenty five years later).

- in a room full of people with a bully supervisor (as I experienced in my last job at a major telco) innovation or good work is largely absent.

- deploy daily - are you kidding?

to run through the hundreds of test cases in a large application takes several hours if not days. Not all testing can be automated.

- I have found the principle of "baby steps", one of the principles in the book, most useful in my career - it is the basis for prototyping iteratively. However, I heard it described in 1997 at a pep talk at MCI that the VP of our department gave to us. So I don't know who stole it from whom!

Lastly, I noted that the term 'XP' was used throughout the book, and the back cover has a blurb from an M$ architect. Was it simply coincidence that Windows shares the same name for its XP release? I wondered if M$ had sponsored part of the book as good advertising for Windows XP! :)

#### [Oct 13, 2017] 1.3. Compatibility of Red Hat Developer Toolset 6.1

###### Oct 13, 2017 | access.redhat.com

Figure 1.1, "Red Hat Developer Toolset 6.1 Compatibility Matrix" illustrates the support for binaries built with Red Hat Developer Toolset on a certain version of Red Hat Enterprise Linux when those binaries are run on various other versions of this system. For ABI compatibility information, see Section 2.2.4, "C++ Compatibility" .

Figure 1.1. Red Hat Developer Toolset 6.1 Compatibility Matrix

#### [Oct 13, 2017] What gcc versions are available in Red Hat Enterprise Linux

##### "... You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Tool set. ..."
###### Oct 13, 2017 | access.redhat.com

Red Hat provides another option via the Red Hat Developer Toolset.

With the developer toolset, developers can choose to take advantage of the latest versions of the GNU developer tool chain, packaged for easy installation on Red Hat Enterprise Linux. This version of the GNU development tool chain is an alternative to the toolchain offered as part of each Red Hat Enterprise Linux release. Of course, developers can continue to use the version of the toolchain provided in Red Hat Enterprise Linux.

The developer toolset gives software developers the ability to develop and compile an application once to run on multiple versions of Red Hat Enterprise Linux (such as Red Hat Enterprise Linux 5 and 6). Compatible with all supported versions of Red Hat Enterprise Linux, the developer toolset is available for users who develop applications for Red Hat Enterprise Linux 5 and 6. Please see the release notes for support of specific minor releases.

Unlike the compatibility and preview gcc packages provided with RHEL itself, the developer toolset packages put their content under a /opt/rh path. The scl ("Software Collections") tool is provided to make use of the tool versions from the Developer Toolset easy while minimizing the potential for confusion with the regular RHEL tools.

Red Hat provides support to Red Hat Developer Tool Set for all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription.

You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Tool set.

For further information on Red Hat Developer Toolset, refer to the relevant release documentation:

For further information on Red Hat Enterprise Linux Developer subscription, you may reference the following links:
* Red Hat Discussion
* Red Hat Developer Toolset Support Policy

#### [Oct 13, 2017] Building GCC from source

###### Oct 13, 2017 | unix.stackexchange.com

xxx

I've built newer gcc versions for rhel6 for several versions now (since 4.7.x to 5.3.1).

The process is fairly easy thanks to Red Hat's Jakub Jelinek and his Fedora gcc builds found on koji

Simply grab the latest src rpm for whichever version you require (e.g. 5.3.1 ).

Basically you would start by determining the build requirements by issuing rpm -qpR src.rpm looking for any version requirements:

rpm -qpR gcc-5.3.1-4.fc23.src.rpm | grep -E '= [[:digit:]]'
binutils >= 2.24
doxygen >= 1.7.1
elfutils-devel >= 0.147
elfutils-libelf-devel >= 0.147
gcc-gnat >= 3.1
glibc-devel >= 2.4.90-13
gmp-devel >= 4.1.2-8
isl = 0.14
isl-devel = 0.14
libgnat >= 3.1
libmpc-devel >= 0.8.1
mpfr-devel >= 2.2.1
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
systemtap-sdt-devel >= 1.3


Now comes the tedious part - any package which has a version higher than the one provided by yum for your distro needs to be downloaded from koji, and the process repeated recursively until all dependency requirements are met.

I cheat, btw.

I usually repackage the rpm to contain a complete build tree, using the GNU build system's support for correctly placed and named in-tree dependencies: gmp/mpc/mpfr/isl (cloog is no longer required) are downloaded and untarred into the correct paths, and the new (bloated) tar is rebuilt into a new src rpm (with minor changes to the spec file) with no dependency on their packaged (rpm) versions. Since I know of no one using ADA, I simply remove the portions pertaining to gnat from the specfile, further simplifying the build process, leaving me with just binutils to worry about.
Gcc can actually build with older binutils, so if you're in a hurry, further edit the specfile to require the binutils version already present on your system. This will result in a slightly crippled gcc, but mostly it will perform well enough.
This works quite well mostly.

UPDATE 1

The simplest method for opening a src rpm is probably to install it with rpm -i and access everything under ~/rpmbuild, but I prefer

mkdir gcc-5.3.1-4.fc23
cd gcc-5.3.1-4.fc23
rpm2cpio ../gcc-5.3.1-4.fc23.src.rpm | cpio -id
tar xf gcc-5.3.1-20160212.tar.bz2
cd gcc-5.3.1-20160212
# (download and untar gmp/mpc/mpfr/isl into this source tree here, as described above)
cd ..
tar caf gcc-5.3.1-20160212.tar.bz2 gcc-5.3.1-20160212
rm -rf gcc-5.3.1-20160212
# remove gnat
sed -i '/%global build_ada 1/ s/1/0/' gcc.spec
sed -i '/%if !%{build_ada}/,/%endif/ s/^/#/' gcc.spec
# remove gmp/mpfr/mpc dependencies
sed -i '/BuildRequires: gmp-devel >= 4.1.2-8, mpfr-devel >= 2.2.1, libmpc-devel >= 0.8.1/ s/.*//' gcc.spec
# remove isl dependency
sed -i '/BuildRequires: isl = %{isl_version}/,/Requires: isl-devel = %{isl_version}/ s/^/#/' gcc.spec
# Either build binutils as I do, or lower requirements
sed -i '/Requires: binutils/ s/2.24/2.20/' gcc.spec
# Make sure you don't break on gcc-java
sed -i '/gcc-java/ s/^/#/' gcc.spec


You also have the choice to set prefix so this rpm will install side-by-side without breaking distro rpm (but requires changing name, and some modifications to internal package names). I usually add an environment-module so I can load and unload this gcc as required (similar to how collections work) as part of the rpm (so I add a new dependency).

Finally, create the rpmbuild tree, place the files where they should go, and build:

yum install rpmdevtools rpm-build
rpmdev-setuptree
cp * ~/rpmbuild/SOURCES/
mv ~/rpmbuild/{SOURCES,SPECS}/gcc.spec
rpmbuild -ba ~/rpmbuild/SPECS/gcc.spec


UPDATE 2

Normally one should not use a "server" OS for development - that's why you have Fedora, which already comes with the latest gcc. I have some particular requirements, but you should really consider using the right tool for the task - RHEL/CentOS to run production apps, Fedora to develop those apps, etc.

#### [Oct 13, 2017] devtoolset-3-gcc-4.9.1-10.el6.x86_64.rpm

##### This is an RHEL-supported package, similar to one available from academic Linux
###### Oct 13, 2017 | access.redhat.com
Build Host
x86-027.build.eng.bos.redhat.com
Build Date
2014-09-22 12:43:02 UTC
Group
Development/Languages
License
GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD
Available From
Product (Variant, Version, Architecture) Repo Label
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.7 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.6 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.5 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.4 x86_64 rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6 x86_64 rhel-server-rhscl-6-rpms
Red Hat Software Collections (for RHEL Workstation) 1 for RHEL 6 x86_64 rhel-workstation-rhscl-6-rpms
Red Hat Software Collections (for RHEL Server) from RHUI 1 for RHEL 6 x86_64 rhel-server-rhscl-6-rhui-rpms
• Package - devtoolset-3-gcc-4.9.1-10.el6.x86_64.rpm
SHA-256:
ab57db4e882fa21030579b04d486336b6feaab87078f029ed28ea431f6a72a4d
• Source Package - devtoolset-3-gcc-4.9.1-10.el6.src.rpm
SHA-256:
6fd4d0e5c3de2a16f47413a1783d85c986b94e8618ba88a6d94169683a2a7259
• Debug Info Package - devtoolset-3-gcc-debuginfo-4.9.1-10.el6.x86_64.rpm
SHA-256:
ae0e2dd2fc5e58a7193cf0a9fecf02b998892d139092aca8bc51da102770c139

#### [Oct 13, 2017] Installing GCC 4.8.2 on Red Hat Enterprise linux 6.5

###### Oct 13, 2017 | stackoverflow.com

suny6 , answered Jan 29 '16 at 21:53

The official way to have gcc 4.8.2 on RHEL 6 is via installing Red Hat Developer Toolset (yum install devtoolset-2), and in order to have it you need to have one of the below subscriptions:
• Red Hat Enterprise Linux Developer Support, Professional
• Red Hat Enterprise Linux Developer Support, Enterprise
• Red Hat Enterprise Linux Developer Suite
• Red Hat Enterprise Linux Developer Workstation, Professional
• Red Hat Enterprise Linux Developer Workstation, Enterprise
• 30 day Self-Supported Red Hat Enterprise Linux Developer Workstation Evaluation
• 60 day Supported Red Hat Enterprise Linux Developer Workstation Evaluation
• 90 day Supported Red Hat Enterprise Linux Developer Workstation Evaluation
• 1-year Unsupported Partner Evaluation Red Hat Enterprise Linux
• 1-year Unsupported Red Hat Advanced Partner Subscription

You can check whether you have any of these subscriptions by running:

subscription-manager list --available

and

subscription-manager list --consumed .

If you don't have any of these subscriptions, you won't succeed with "yum install devtoolset-2". However, luckily CERN provides a "back door" via their SLC6 repositories which can also be used on RHEL 6. Run the three lines below as root, and you should be able to have it:

wget -O /etc/yum.repos.d/slc6-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo

wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-cern http://ftp.scientificlinux.org/linux/scientific/5x/x86_64/RPM-GPG-KEYs/RPM-GPG-KEY-cern

yum install devtoolset-2

Once it's done completely, you should have the new development package in /opt/rh/devtoolset-2/root/.

answered Oct 29 '14 at 21:53

For some reason the mpc/mpfr/gmp packages aren't being downloaded. Just look in your gcc source directory, it should have created symlinks to those packages:
gcc/4.9.1/install$ ls -ad gmp mpc mpfr
gmp  mpc  mpfr

If those don't show up, then simply download them from the gcc site: ftp://gcc.gnu.org/pub/gcc/infrastructure/ . Then untar and symlink/rename them so you have the directories as above. Then when you ./configure and make, gcc's makefile will automatically build them for you.

#### [Oct 08, 2017] Disbelieving the 'many eyes' myth (Opensource.com)

##### Notable quotes:
##### "... This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission. ..."
###### Oct 08, 2017 | opensource.com

By Mike Bursell (Red Hat), 06 Oct 2017. Image credits: Internet Archive Book Images, CC BY-SA 4.0.

Review by many eyes does not always prevent buggy code. There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth.

Writing code is hard. Writing secure code is harder -- much harder. And before you get there, you need to think about design and architecture. When you're writing code to implement security functionality, it's often based on architectures and designs that have been pored over and examined in detail. They may even reflect standards that have gone through worldwide review processes and are generally considered perfect and unbreakable. *

However good those designs and architectures are, though, there's something about putting things into actual software that's, well, special. With the exception of software proven to be mathematically correct, ** being able to write software that accurately implements the functionality you're trying to realize is somewhere between a science and an art. This is no surprise to anyone who's actually written any software, tried to debug software, or divine software's correctness by stepping through it; however, it's not the key point of this article.
Nobody *** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible. This is why code review is a core principle of software development. And luckily -- in my view, at least -- much of the code that we use in our day-to-day lives is open source, which means that anybody can look at it, and it's available for tens or hundreds of thousands of eyes to review.

And herein lies the problem: There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth. A dangerous myth. The problems with this view are at least twofold. The first is the "if you build it, they will come" fallacy. I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it. **** In the same way, the number of open source projects was (maybe) once so small that there was a good chance that people might look at and review your code. Those days are past -- long past. Second, for many areas of security functionality -- crypto primitives implementation is a good example -- the number of suitably qualified eyes is low.

Don't think that I am in any way suggesting that the problem is any less in proprietary code: quite the opposite. Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased. "Proprietary code is more secure" is less myth, more fake news. I completely understand why companies like to keep their security software secret, and I'm afraid that the "it's to protect our intellectual property" line is too often a platitude they tell themselves when really, it's just unsafe to release it.
So for me, it's open source all the way when we're looking at security software. So, what can we do? Well, companies and other organizations that care about security functionality can -- and have, I believe a responsibility to -- expend resources on checking and reviewing the code that implements that functionality. Alongside that, the open source community, can -- and is -- finding ways to support critical projects and improve the amount of review that goes into that code. ***** And we should encourage academic organizations to train students in the black art of security software writing and review, not to mention highlighting the importance of open source software. We can do better -- and we are doing better. Because what we need to realize is that the reason the "many eyes hypothesis" is a myth is not that many eyes won't improve code -- they will -- but that we don't have enough expert eyes looking. Yet. * Yeah, really: "perfect and unbreakable." Let's just pretend that's true for the purposes of this discussion. ** and that still relies on the design and architecture to actually do what you want -- or think you want -- of course, so good luck. *** Nobody who's actually written more than about five lines of code (or more than six characters of Perl). **** I added one. They came. It was like some sort of magic. ***** See, for instance, the Linux Foundation 's Core Infrastructure Initiative . This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission. #### [Oct 03, 2017] Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model. ##### Notable quotes: ##### "... That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country! ..." ##### "... I don't see why more Americans would want to be coders. 
These companies want to drive down wages for workers here and then also ship jobs offshore... ..."

##### "... Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model. ..."

##### "... There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminated the Valley, going hand in hand with assembling products in China by slave labor ..."

##### "... If you want a high tech executive to suffer a stroke, mention the words "labor unions". ..."

##### "... India isn't being hired for the quality, they're being hired for cheap labor. ..."

##### "... Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again... ..."

##### "... Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology. ..."

##### "... I'd be much more impressed if I saw the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self-awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children. ..."

##### "... Not maybe. Too late. American corporations' objective is to lowball wages here in the US. In India they spoon-feed these pupils with affordable cutting-edge IT training for next to nothing in rupees. These pupils then exaggerate their CVs and ship them out en masse to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will do anything/everything to maintain their grip on power. No brag. Just fact. ..."
###### Oct 02, 2017 | profile.theguardian.com

Terryl Dorian , 21 Sep 2017 13:26

That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country!

Ray D Wright -> RogTheDodge , 21 Sep 2017 14:52

I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore...

Richard Livingstone -> KatieL , 21 Sep 2017 14:50

+++1 to all of that. Automated coding just pushes the level of coding further up the development food chain, rather than getting rid of it. It is the wrong approach for current tech. AI that is smart enough to model new problems and create its own descriptive and runnable language - hopefully after my lifetime, but coming sometime.

Arne Babenhauserheide -> Evelita , 21 Sep 2017 14:48

What coding does not teach is how to improve our non-code infrastructure and how to keep it running (that's the stuff which actually moves things). Code can optimize stuff, but it needs actual actuators to affect reality. Sometimes these actuators are actual people walking on top of a roof while fixing it.

WyntonK , 21 Sep 2017 14:47

Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model.

There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminated the Valley, going hand in hand with assembling products in China by slave labor.

If you want a high tech executive to suffer a stroke, mention the words "labor unions".

TheEgg -> UncommonTruthiness , 21 Sep 2017 14:43

The ship has sailed on this activity as a career. Nope. Married to a highly-technical skillset, you can still make big bucks.
I say this as someone involved in this kind of thing academically, and our Masters grads have to beat the banks and fintech companies away with dog shits on sticks. You're right that you can teach anyone to potter around and throw up a webpage, but at the prohibitively difficult maths-y end of the scale, someone suitably qualified will never want for a job.

Mike_Dexter -> Evelita , 21 Sep 2017 14:43

In a similar vein, if you accept the argument that it does drive down wages, wouldn't the culprit actually be the multitudes of online and offline courses and tutorials available to an existing workforce?

Terryl Dorian -> CountDooku , 21 Sep 2017 14:42

Funny you should pick medicine, law, engineering... 3 fields that are *not* taught in high school. The writer is simply adding "coding" to your list. So it seems you agree with his "garbage" argument after all.

anticapitalist -> RogTheDodge , 21 Sep 2017 14:42

Key word is "good". Teaching everyone is just going to increase the pool of programmers whose code I need to fix. India isn't being hired for the quality, they're being hired for cheap labor. As for women, sure, I wouldn't mind more women around, but why does no one say there needs to be more equality in garbage collection or plumbing? (And yes, plumbers are highly paid professionals.) In the end I don't care what the person is, I just want to hire and work with the best and not someone whose work I have to correct because they were hired by quota. If women only graduate at 15%, why should IT contain more than that? And let's be a bit honest with the facts: of those 15%, how many spent their high school years staying up all night hacking? Very few. Now the few that did are some of the better developers I work with, but that pool isn't going to increase by forcing every child to program... just like sports aren't better by making everyone take gym class.
WithoutPurpose , 21 Sep 2017 14:42

I ran a development team for 10 years and I never had any trouble hiring programmers - we just had to pay them enough. Every job would have at least 10 good applicants. Two years ago I decided to scale back a bit and go into programming (I can code real-time low latency financial apps in 4 languages) and I had four interviews in six months with stupidly low salaries. I'm lucky in that I can bounce between tech and the business side, so I got a decent job out of tech. My entirely anecdotal conclusion is that there is no shortage of good programmers, just a shortage of companies willing to pay them.

oddbubble -> Tori Turner , 21 Sep 2017 14:41

I've worn many hats so far. I started out as a sysadmin, then I moved on to web development, then back end, and now I'm doing test automation because I am on almost the same money for half the effort.

peter nelson -> raffine , 21 Sep 2017 14:38

But the concepts won't. Good programming requires the ability to break down a task, organise the steps in performing it, identify parts of the process that are common or repetitive so they can be bundled together, handed off or delegated, etc. These concepts can be applied to any programming language, and indeed to many non-software activities.

Oliver Jones -> Trumbledon , 21 Sep 2017 14:37

In the city maybe with a financial background, the exception.

anticapitalist -> Ethan Hawkins , 21 Sep 2017 14:32

Well, to his point, sort of... either everything will go PHP or all those entry-level PHP developers will be on the street. A good Java or C developer is hard to come by. And to the others: being a developer, especially a good one, is nothing like reading and writing. The industry is already saturated with poor coders just doing it for a paycheck.

peter nelson -> Tori Turner , 21 Sep 2017 14:31

I'm just going to say this once: not everyone with a computer science degree is a coder. And vice versa.
I'm retiring from a 40-year career as a software engineer. Some of the best software engineers I ever met did not have CS degrees.

KatieL -> Mishal Almohaimeed , 21 Sep 2017 14:30

"already developing automated coding scripts."

Pretty much the entire history of the software industry since FORAST was developed for the ORDVAC has been about desperately trying to make software development in some way possible without driving everyone bonkers. The gulf between FORAST and today's IDE-written, type-inferring high level languages, compilers, abstracted run-time environments, hypervisors, multi-computer architectures and general tech-world flavour-of-2017-ness is truly immense[1].

And yet software is still fucking hard to write. There's no sign it's getting easier despite all that work.

Automated coding was promised as the solution in the 1980s as well. In fact, somewhere in my archives, I've got paper journals which include adverts for automated systems that would make programmers completely redundant by writing all your database code for you. These days, we'd think of those tools as automated ORM generators and they don't fix the problem; they just make a new one -- ORM impedance mismatch -- which needs more engineering on top to fix...

The tools don't change the need for the humans, they just change what's possible for the humans to do.

[1] FORAST executed in about 20,000 bytes of memory without even an OS. The compile artifacts for the map-reduce system I built today are an astonishing hundred million bytes... and don't include the necessary mapreduce environment, management interface, node operating system and distributed filesystem...

raffine , 21 Sep 2017 14:29

Whatever they are taught today will be obsolete tomorrow.

yannick95 -> savingUK , 21 Sep 2017 14:27

"There are already top quality coders in China and India"

AHAHAHAHAHAHAHAHAHAHAHA *rolls on the floor laughing* Yes........ 1%...
and 99% incredibly bad, incompetent, untalented ones that cost 50% of a good developer but produce only 5% in comparison. And I'm talking with a LOT of practical experience through more than a dozen corporations all over the world which have been outsourcing to India... all have been disasters for the companies (but good for the execs who pocketed big bonuses and left the company before the disaster blew up in their faces).

Wiretrip -> mcharts , 21 Sep 2017 14:25

Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again...

TomRoche , 21 Sep 2017 14:11

Tech executives have pursued [the goal of suppressing workers' compensation] in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement.

Folks interested in the story of the Techtopus (less drily presented than in the links in this article) should check out Mark Ames' reporting, especially this overview article and this focus on the egregious Steve Jobs (whose canonization by the US corporate-funded media is just one more indictment of their moral bankruptcy).

Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H-1B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status.

Folks interested in H-1B and US technical visas more generally should head to Norm Matloff 's summary page , and then to his blog on the subject .

I have watched as schools run by trade unions have done the opposite for the past 5 decades. By limiting the number of graduates, they were able to help maintain living wages and benefits. This has been stopped in my area due to the pressure of owner-run "trade associations".

During that same time period I have witnessed trade associations controlled by company owners, while publicising their support of the average employee, invest enormous amounts of membership fees in creating alliances with public institutions. Their goal has been that of flooding the labor market and thus keeping wages low. A double hit for the average worker because membership fees were paid by employees as well as those in control.

And so it goes....

savingUK , 21 Sep 2017 13:38
Coding jobs are just as susceptible to being moved to lower cost areas of the world as hardware jobs already have been. It's already happening. There are already top quality coders in China and India. There is a much larger pool to choose from and they are just as good as their western counterparts and work harder for much less money.

Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology.

I'd be much more impressed if I saw the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self-awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children.

They would definitely not survive the zombie apocalypse.

P.S. not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchebro thinks is the next iteration of sliced bread.

UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name: AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed, making programming easier for the less talented. Now the script-based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

William Fitch III , 21 Sep 2017 13:52
Hi: As I have said many times before, there is no shortage of people who fully understand the problem and can see all the connections.

However, they all fall on their faces when it comes to the solution. To cut to the chase, Concentrated Wealth needs to go, permanently. Of course the challenge is how to best accomplish this.....

.....Bill

Damn engineers and their black-and-white world view; if they weren't so inept they would've unionized instead of being trampled again and again in the name of capitalism.
mcharts -> Aldous0rwell , , 21 Sep 2017 13:07
Not maybe. Too late. American corporations' objective is to lowball wages here in the US. In India they spoon-feed these pupils with affordable cutting-edge IT training for next to nothing in rupees. These pupils then exaggerate their CVs and ship them out en masse to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will do anything/everything to maintain their grip on power. No brag. Just fact.

Woe to our children and grandchildren.

Where's Bernie Sanders when we need him?

#### [Oct 03, 2017] The dream of coding automation remains elusive... Very elusive...

###### Oct 03, 2017 | discussion.theguardian.com
Wrong again; that approach has been tried since the 80s and will keep failing, because software development is still more akin to a technical craft than an engineering discipline. The number of elements required to assemble a working non-trivial system is way beyond scriptable.
freeandfair -> Taylor Dotson , 21 Sep 2017 14:26
> That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?

You don't believe there will be robots to do plumbing and cleaning? The cleaner's job will be to program robots to do what they need.
CEOs? Absolutely.

English teachers? Both of my kids have school laptops and everything is being done on the computers. The teachers use software and create websites and what not. Yes, even English teachers.

Not knowing / understanding how to code will be the same as not knowing how to use Word/ Excel. I am assuming there are people who don't, but I don't know any above the age of 6.

Wiretrip -> Mishal Almohaimeed , 21 Sep 2017 14:20
We've had 'automated coding scripts' for years for small tasks. However, anyone who says they're going to obviate programmers, analysts and designers doesn't understand the software development process.
Ethan Hawkins -> David McCaul , 21 Sep 2017 13:22
Even if expert systems (an 80's concept, BTW) could code, we'd still have a huge need for managers. The hard part of software isn't even the coding. It's determining the requirements and working with clients. It will require general intelligence to do 90% of what we do right now. The 10% we could automate right now, mostly gets in the way. I agree it will change, but it's going to take another 20-30 years to really happen.
Mishal Almohaimeed -> PolydentateBrigand , , 21 Sep 2017 13:17
Wrong, software companies are already developing automated coding scripts. You'll get a bunch of door-to-door knife salespeople once the dust settles; that's what you'll get.
freeandfair -> rgilyead , , 21 Sep 2017 14:22
> In 20 years time AI will be doing the coding

Possible, but you still have to understand how AI operates and what it can and cannot do.

#### [Oct 03, 2017] Coding and carpentry are not so distant, are they?

##### "... Many people can write, but few become journalists, and fewer still become real authors. ..."
###### Oct 03, 2017 | discussion.theguardian.com
Coding has little or nothing to do with Silicon Valley. They may or may not have ulterior motives, but ultimately they are nothing in the scheme of things.

I disagree with teaching coding as a discrete subject. I think it should be combined with home economics and woodworking because 90% of these subjects consist of transferable skills that exist in all of them. Only a tiny residual is actually topic-specific.

In the case of coding, the residual consists of drawing skills and typing skills. Programming language skills? Irrelevant. You should choose the tools to fit the problem. Neither of these needs a computer. You should only ever approach the computer at the very end, after you've designed and written the program.

Is cooking so very different? Do you decide on the ingredients before or after you start? Do you go shopping half-way through cooking an omelette?

With woodwork, do you measure first or cut first? Do you have a plan or do you randomly assemble bits until it does something useful?

Real coding, taught correctly, is barely taught at all. You teach the transferable skills. ONCE. You then apply those skills in each area in which they apply.

What other transferable skills apply? Top-down design, bottom-up implementation. The correct methodology in all forms of engineering. Proper testing strategies, also common across all forms of engineering. However, since these tests are against logic, they're a test of reasoning. A good thing to have in the sciences and philosophy.
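The "top-down design, bottom-up implementation" workflow described above can be sketched in a few lines of Python. The task (counting word frequencies) and every function name here are hypothetical illustrations chosen for this sketch, not anything from the comment: the top-level routine is stated first in terms of helpers that don't yet exist, then each helper is implemented and tested from the bottom up.

```python
# Top-down: state the whole task in terms of helpers that don't exist yet.
def report_word_frequencies(text, top_n=3):
    words = tokenize(text)              # helper 1, implemented below
    counts = count_items(words)         # helper 2, implemented below
    return most_common(counts, top_n)   # helper 3, implemented below

# Bottom-up: implement each helper independently.
def tokenize(text):
    """Lowercase the text and split it into words."""
    return text.lower().split()

def count_items(items):
    """Tally occurrences of each item into a dict."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

def most_common(counts, n):
    """Return the n highest counts, ties broken alphabetically."""
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:n]

# Tests against logic, per the comment's point about testing strategies:
# each helper is checked on its own before the top-level routine is trusted.
assert tokenize("A a b") == ["a", "a", "b"]
assert count_items(["a", "a", "b"]) == {"a": 2, "b": 1}
assert report_word_frequencies("the cat and the hat and the bat") == [
    ("the", 3), ("and", 2), ("bat", 1)]
```

The design is decided before any one function is written; the computer, as the comment puts it, is approached only at the end, to implement and verify the pieces.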

Technical writing is the art of explaining things to idiots. Whether you're designing a board game, explaining what you like about a house, writing a travelogue or just seeing if your wild ideas hold water, you need to be able to put those ideas down on paper in a way that exposes all the inconsistencies and errors. It doesn't take much to clean it up to be readable by humans. But once it is cleaned up, it'll remain free of errors.

So I would teach a foundation course that teaches top-down reasoning, bottom-up design, flowcharts, critical path analysis and symbolic logic. Probably aimed at age 7. But I'd not do so wholly in the abstract. I'd have it thoroughly mixed in with one field, probably cooking as most kids do that and it lacks stigma at that age.

I'd then build courses on various crafts and engineering subjects on top of that, building further hierarchies where possible. Eliminate duplication and severely reduce the fictions we call disciplines.

oldzealand, 21 Sep 2017 14:58
I used to employ 200 computer scientists in my business and now teach children so I'm apparently as guilty as hell. To be compared with a carpenter is, however, a true compliment, if you mean those that create elegant, aesthetically-pleasing, functional, adaptable and long-lasting bespoke furniture, because our crafts of problem-solving using limited resources in confined environments to create working, life-improving artifacts both exemplify great human ingenuity in action. Capitalism or no.
peter nelson, 21 Sep 2017 14:29
"But coding is not magic. It is a technical skill, akin to carpentry."

But some people do it much better than others. Just like journalism. This article is complete nonsense, as I discuss in another comment. The author might want to consider a career in carpentry.

Fanastril, 21 Sep 2017 14:13
"But coding is not magic. It is a technical skill, akin to carpentry."

It is a way of thinking. Perhaps carpentry is too, but the arrogance of the above statement shows a soul who is done thinking.

NDReader, 21 Sep 2017 14:12
"But coding is not magic. It is a technical skill, akin to carpentry."

I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers.

Many people can write, but few become journalists, and fewer still become real authors.

MostlyHarmlessD, 21 Sep 2017 13:08
A carpenter!? Good to know that engineers are still thought of as jumped-up tradesmen.

#### [Oct 02, 2017] Tech's push to teach coding isn't about kids' success – it's about cutting wages, by Ben Tarnoff

##### "... Guest workers and wage-fixing are useful tools for restraining labor costs. But nothing would make programming cheaper than making millions more programmers. ..."
##### "... Silicon Valley has been unusually successful in persuading our political class and much of the general public that its interests coincide with the interests of humanity as a whole. But tech is an industry like any other. It prioritizes its bottom line, and invests heavily in making public policy serve it. The five largest tech firms now spend twice as much as Wall Street on lobbying Washington – nearly $50m in 2016. The biggest spender, Google, also goes to considerable lengths to cultivate policy wonks favorable to its interests – and to discipline the ones who aren't. ..." ##### "... Silicon Valley is not a uniquely benevolent force, nor a uniquely malevolent one. Rather, it's something more ordinary: a collection of capitalist firms committed to the pursuit of profit. And as every capitalist knows, markets are figments of politics. They are not naturally occurring phenomena, but elaborately crafted contraptions, sustained and structured by the state – which is why shaping public policy is so important. If tech works tirelessly to tilt markets in its favor, it's hardly alone. What distinguishes it is the amount of money it has at its disposal to do so. ..." ##### "... The problem isn't training. The problem is there aren't enough good jobs to be trained for ..." ##### "... Everyone should have the opportunity to learn how to code. Coding can be a rewarding, even pleasurable, experience, and it's useful for performing all sorts of tasks. More broadly, an understanding of how code works is critical for basic digital literacy – something that is swiftly becoming a requirement for informed citizenship in an increasingly technologized world. ..." ##### "... But coding is not magic. It is a technical skill, akin to carpentry. Learning to build software does not make you any more immune to the forces of American capitalism than learning to build a house. 
Whether a coder or a carpenter, capital will do what it can to lower your wages, and enlist public institutions towards that end. ..."

##### "... Exposing large portions of the school population to coding is not going to magically turn them into coders. It may increase their basic understanding but that is a long way from being a software engineer. ..."

##### "... All schools teach drama and most kids don't end up becoming actors. You need to give all kids access to coding in order for some to go on to make a career out of it. ..."

##### "... it's ridiculous because even out of a pool of computer science B.Sc. or M.Sc. grads - companies are only interested in the top 10%. Even the most mundane company with crappy IT jobs swears that they only hire "the best and the brightest." ..."

##### "... It's basically a con-job by the big Silicon Valley companies offshoring as many US jobs as they can, or "inshoring" via exploitation of the H-1B visa ..."

##### "... Masters is the new Bachelors. ..."

##### "... I taught CS. Out of around 100 graduates I'd say maybe 5 were reasonable software engineers. The rest would be fine in tech support or other associated trades, but not writing software. It's not just a set of trainable skills; it's a set of attitudes and ways of perceiving and understanding that just aren't that common. ..."

##### "... Yup, rings true. I've been in hi tech for over 40 years and seen the changes. I was in Silicon Valley for 10 years on a startup. India is taking over; my current US company now has a majority Indian executive and is moving work to India. US politicians push coding to drive down wages to Indian levels. ..."

###### Oct 02, 2017 | www.theguardian.com

This month, millions of children returned to school. This year, an unprecedented number of them will learn to code. Computer science courses for children have proliferated rapidly in the past few years.
A 2016 Gallup report found that 40% of American schools now offer coding classes – up from only 25% a few years ago. New York, with the largest public school system in the country, has pledged to offer computer science to all 1.1 million students by 2025. Los Angeles, with the second largest, plans to do the same by 2020. And Chicago, the fourth largest, has gone further, promising to make computer science a high school graduation requirement by 2018.

The rationale for this rapid curricular renovation is economic. Teaching kids how to code will help them land good jobs, the argument goes. In an era of flat and falling incomes, programming provides a new path to the middle class – a skill so widely demanded that anyone who acquires it can command a livable, even lucrative, wage.

This narrative pervades policymaking at every level, from school boards to the government. Yet it rests on a fundamentally flawed premise. Contrary to public perception, the economy doesn't actually need that many more programmers. As a result, teaching millions of kids to code won't make them all middle-class. Rather, it will proletarianize the profession by flooding the market and forcing wages down – and that's precisely the point.

At its root, the campaign for code education isn't about giving the next generation a shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

As software mediates more of our lives, and the power of Silicon Valley grows, it's tempting to imagine that demand for developers is soaring. The media contributes to this impression by spotlighting the genuinely inspiring stories of those who have ascended the class ladder through code. You may have heard of Bit Source, a company in eastern Kentucky that retrains coalminers as coders. They've been featured by Wired , Forbes , FastCompany , The Guardian , NPR and NBC News , among others.
A former coalminer who becomes a successful developer deserves our respect and admiration. But the data suggests that relatively few will be able to follow their example.

Our educational system has long been producing more programmers than the labor market can absorb. A study by the Economic Policy Institute found that the supply of American college graduates with computer science degrees is 50% greater than the number hired into the tech industry each year. For all the talk of a tech worker shortage, many qualified graduates simply can't find jobs.

More tellingly, wage levels in the tech industry have remained flat since the late 1990s. Adjusting for inflation, the average programmer earns about as much today as in 1998. If demand were soaring, you'd expect wages to rise sharply in response. Instead, salaries have stagnated.

Still, those salaries are stagnating at a fairly high level. The Department of Labor estimates that the median annual wage for computer and information technology occupations is $82,860 – more than twice the national average. And from the perspective of the people who own the tech industry, this presents a problem. High wages threaten profits. To maximize profitability, one must always be finding ways to pay workers less.

Tech executives have pursued this goal in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement. Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H-1B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status.

Guest workers and wage-fixing are useful tools for restraining labor costs. But nothing would make programming cheaper than making millions more programmers. And where better to develop this workforce than America's schools? It's no coincidence, then, that the campaign for code education is being orchestrated by the tech industry itself. Its primary instrument is Code.org, a nonprofit funded by Facebook, Microsoft, Google and others. In 2016, the organization spent nearly $20m on training teachers, developing curricula, and lobbying policymakers.

Silicon Valley has been unusually successful in persuading our political class and much of the general public that its interests coincide with the interests of humanity as a whole. But tech is an industry like any other. It prioritizes its bottom line, and invests heavily in making public policy serve it. The five largest tech firms now spend twice as much as Wall Street on lobbying Washington – nearly $50m in 2016. The biggest spender, Google, also goes to considerable lengths to cultivate policy wonks favorable to its interests – and to discipline the ones who aren't.

Silicon Valley is not a uniquely benevolent force, nor a uniquely malevolent one. Rather, it's something more ordinary: a collection of capitalist firms committed to the pursuit of profit. And as every capitalist knows, markets are figments of politics. They are not naturally occurring phenomena, but elaborately crafted contraptions, sustained and structured by the state – which is why shaping public policy is so important. If tech works tirelessly to tilt markets in its favor, it's hardly alone. What distinguishes it is the amount of money it has at its disposal to do so.

Money isn't Silicon Valley's only advantage in its crusade to remake American education, however. It also enjoys a favorable ideological climate. Its basic message – that schools alone can fix big social problems – is one that politicians of both parties have been repeating for years. The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric. That if we teach students the right skills, we can solve poverty, inequality and stagnation. The school becomes an engine of economic transformation, catapulting young people from challenging circumstances into dignified, comfortable lives.

This argument is immensely pleasing to the technocratic mind. It suggests that our core economic malfunction is technical – a simple asymmetry.
You have workers on one side and good jobs on the other, and all it takes is training to match them up. Indeed, every president since Bill Clinton has talked about training American workers to fill the "skills gap". But gradually, one mainstream economist after another has come to realize what most workers have known for years: the gap doesn't exist. Even Larry Summers has concluded it's a myth.

The problem isn't training. The problem is there aren't enough good jobs to be trained for. The solution is to make bad jobs better, by raising the minimum wage and making it easier for workers to form a union, and to create more good jobs by investing for growth. This involves forcing business to put money into things that actually grow the productive economy rather than shoveling profits out to shareholders. It also means increasing public investment, so that people can make a decent living doing socially necessary work like decarbonizing our energy system and restoring our decaying infrastructure.

Everyone should have the opportunity to learn how to code. Coding can be a rewarding, even pleasurable, experience, and it's useful for performing all sorts of tasks. More broadly, an understanding of how code works is critical for basic digital literacy – something that is swiftly becoming a requirement for informed citizenship in an increasingly technologized world. But coding is not magic. It is a technical skill, akin to carpentry. Learning to build software does not make you any more immune to the forces of American capitalism than learning to build a house. Whether a coder or a carpenter, capital will do what it can to lower your wages, and enlist public institutions towards that end. Silicon Valley has been extraordinarily adept at converting previously uncommodified portions of our common life into sources of profit. Our schools may prove an easy conquest by comparison.

See also: "Everyone should have the opportunity to learn how to code.
" OK, and that's what's being done. And that's what the article is bemoaning. What would be better: teach them how to change tires or groom pets? Or pick fruit? Amazingly condescending article. MrFumoFumo , 21 Sep 2017 14:54 However, training lots of people to be coders won't automatically result in lots of people who can actually write good code. Nor will it give managers/recruiters the necessary skills to recognize which programmers are any good. A valid rebuttal but could I offer another observation? Exposing large portions of the school population to coding is not going to magically turn them into coders. It may increase their basic understanding but that is a long way from being a software engineer. Just as children join art, drama or biology classes so they do not automatically become artists, actors or doctors. I would agree entirely that just being able to code is not going to guarantee the sort of income that might be aspired to. As with all things, it takes commitment, perseverance and dogged determination. I suppose ultimately it becomes the Gattaca argument. alfredooo -> racole , 24 Sep 2017 06:51 Fair enough, but, his central argument, that an overabundance of coders will drive wages in that sector down, is generally true, so in the future if you want your kids to go into a profession that will earn them 80k+ then being a "coder" is not the route to take. When coding is - like reading, writing, and arithmetic - just a basic skill, there's no guarantee having it will automatically translate into getting a "good" job. Wiretrip , 21 Sep 2017 14:14 This article lumps everyone in computing into the 'coder' bin, without actually defining what 'coding' is. Yes there is a glut of people who can knock together a bit of HTML and JavaScript, but that is not really programming as such. There are huge shortages of skilled developers however; people who can apply computer science and engineering in terms of analysis and design of software. 
These are the real skills for which relatively few people have a true aptitude. The lack of really good skills is starting to show in some terrible software implementation decisions, such as Slack, for example; written as a web app running in Electron (so that JavaScript code monkeys could knock it out quickly), but resulting in awful performance. We will see more of this in the coming years...

Taylor Dotson -> youngsteveo , 21 Sep 2017 13:53
My brother is a programmer, and in his experience these coding exams don't test anything but whether or not you took (and remember) a very narrow range of problems introduced in the first years of a computer science degree. The entire hiring process seems premised on a range of ill-founded ideas about what skills are necessary for the job and how to assess them in people. They haven't yet grasped that those kinds of exams mostly test test-taking ability, rather than intelligence, creativity, diligence, communication ability, or anything else that a job requires besides coughing up the right answer in a stressful, timed environment without outside resources.

I'm an embedded software/firmware engineer. Every similar engineer I've ever met has had the same background - starting in electronics and drifting into embedded software writing in C and assembler. It's virtually impossible to do such software without an understanding of electronics. When it goes wrong you may need to get the test equipment out to scope the hardware to see if it's a hardware or software problem. Coming from a pure computing background just isn't going to get you a job in this type of work.

waltdangerfield , 23 Sep 2017 14:42
All schools teach drama and most kids don't end up becoming actors. You need to give all kids access to coding so that some can go on to make a career out of it.
TwoSugarsPlease , 23 Sep 2017 06:13
Coding salaries will inevitably fall over time, but such skills give workers the option, once they discover that their income is no longer sustainable in the UK, of moving somewhere more affordable and working remotely.

DiGiT81 -> nixnixnix , 23 Sep 2017 03:29
Completely agree. Coding is a necessary life skill for the 21st century, but there are levels to every skill. From basic needs for an office job to advanced and specialised.

nixnixnix , 23 Sep 2017 00:46
Lots of people can code but very few of us ever get to the point of creating something new that has a loyal and enthusiastic user-base. Everyone should be able to code because it is, or will be, the basis of being able to create almost anything in the future. If you want to make a game in Unity, knowing how to code is really useful. If you want to work with large data-sets, you can't rely on Excel and so you need to be able to code (in R?). The use of code is becoming so pervasive that it is going to be like reading and writing.

All the science and engineering graduates I know can code but none of them have ever sold a stand-alone piece of software. The argument made above is like saying that teaching everyone to write will drive down the wages of writers. Writing is useful for anyone and everyone, but only a tiny fraction of people who can write actually write novels or even newspaper columns.

DolyGarcia -> Carl Christensen , 22 Sep 2017 19:24
Immigrants have always had a big advantage over locals, for any company, including tech companies: the government makes sure that they will stay in their place and never complain about low salaries or bad working conditions because, you know what? If the company sacks you, an immigrant may be forced to leave the country where they live because their visa expires, which is never going to happen with a local. Companies always have more leverage over immigrants.
Given a choice between more and less exploitable workers, companies will choose the more exploitable ones. Which is something that Marx figured out more than a century ago, and why he insisted that socialism had to be international, which led to the founding of the First International. If workers' fights didn't go across country boundaries, companies would just play people from one country against another. Unfortunately, at some point in time socialists forgot this very important fact.

xxxFred -> Tomix Da Vomix , 22 Sep 2017 18:52
So what's wrong with having lots of people able to code? The only argument you seem to have is that it'll lower wages. So do you think that we should stop teaching writing skills so that journalists can be paid more? And no one is going to "force" kids into high-level abstract coding practices in kindergarten, fgs. But there is ample empirical proof that young children can learn basic principles. In fact, the younger that children are exposed to anything, the better they can enhance their skills and knowledge of it later in life, and computing concepts are no different.

Tomix Da Vomix -> xxxFred , 22 Sep 2017 18:40
You're completely missing the point. Kids are forced into the programming field (even STEM as a more general term) before they develop their abstract reasoning. For that matter, you're not producing highly skilled people, but functional imbeciles and a cheap labor force that will eventually lower wages. Conspiracy theory? So Google, FB and others paying hundreds of millions of dollars in settlements for forming a cartel to lower wages is not true? It sounds to me like you're more of a 1969 denier than the Guardian is. Tech companies are not financing those incentives because they have a good soul. Their primary drive has always been money, otherwise they wouldn't sell your personal data to earn it. But hey, you can always sleep peacefully when your kid becomes a coder.
When he is 50, will everyone want a Cobol or Ada programmer with 25 years of experience, when you can get a 16-year-old kid from a high school for 1/10 of the price? Go back to sleep...

Carl Christensen -> xxxFred , 22 Sep 2017 16:49
It's ridiculous because even out of a pool of computer science B.Sc. or M.Sc. grads, companies are only interested in the top 10%. Even the most mundane company with crappy IT jobs swears that they only hire "the best and the brightest."

Carl Christensen , 22 Sep 2017 16:47
It's basically a con-job by the big Silicon Valley companies offshoring as many US jobs as they can, or "inshoring" via exploitation of the H-1B visa - so they can say "see, we don't have 'qualified' people in the US - maybe when these kids learn to program in a generation." As if American students haven't been coding for decades -- and saw their salaries plummet as the H-1B visa and Indian offshore firms exploded...

Declawed -> KDHughes , 22 Sep 2017 16:40
Dude, stow the attitude. I've tested code from various entities, and seen every kind of crap peddled as gold. But I've also seen a little 5-foot giggly lady with two kids grumble a bit and save a $100,000 product by rewriting another coder's man-month of work in a few days, without any flaws or cracks. Almost nobody will ever know she did that. She's so far beyond my level it hurts.

And yes, the author knows nothing. He's genuinely crying wolf while knee-deep in amused wolves. The last time I was in San Jose, years ago, the room was already full of people with Indian surnames. If the problem was REALLY serious, a programmer from POLAND was called in.

If you think fighting for a violinist spot is hard, try fighting for it with every spare violinist in the world. I am training my Indian replacement to do my job right now. At least the public can appreciate a good violin. Can you appreciate Duff's device?

So by all means, don't teach local kids how to think in a straight line, just in case they make a dent in the price of wages IN INDIA.... *sheesh*

Declawed -> IanMcLzzz , 22 Sep 2017 15:35
That's the best possible summary of this extremely dumb article. Bravo.

For those who don't know how to think about coding, like the article's author, here are a few analogies:

A computer is a box that replays frozen thoughts, quickly. That is all.

Coding is just the art of explaining. Anyone who can explain something patiently and clearly, can code. Anyone who can't, can't.

Making hardware is very much like growing produce while blind. Making software is very much like cooking that produce while blind.

Imagine looking after a room full of young eager obedient children who only do exactly, *exactly*, what you told them to do, but move around at the speed of light. Imagine having to try to keep them from smashing into each other or decapitating themselves on the corners of tables, tripping over toys and crashing into walls, etc, while you get them all to play games together.

The difference between a good coder and a bad coder is almost life and death. Imagine a broth prepared with ingredients from a dozen co-ordinating geniuses and one idiot, that you'll mass produce. The soup is always far worse for the idiot's additions. The more cooks you involve, the more chance your mass produced broth will taste bad.

People who hire coders, typically can't tell a good coder from a bad coder.

Zach Dyer -> Mystik Al , 22 Sep 2017 15:18
Tech jobs will probably always be available long after you're gone, or until another mass extinction.
edmundberk -> AmyInNH , 22 Sep 2017 14:59
No, you do it in your own time. If you're not prepared to put in long days, IT is not for you in any case. It was ever thus, but more so now due to offshoring - rather than the rather obscure forces you seem to believe are important.
WithoutPurpose -> freeandfair , 22 Sep 2017 13:21
Bit more than that.
peter nelson -> offworldguy , 22 Sep 2017 12:44
Sorry, offworldguy, but you're losing this one really badly. I'm a professional software engineer in my 60's and I know lots of non-professionals in my age range who write little programs, scripts and apps for fun. I know this because they often contact me for help or advice.

So you've now been told by several people in this thread that ordinary people do code for fun or recreation. The fact that you don't know any probably says more about your network of friends and acquaintances than about the general population.

xxxFred , 22 Sep 2017 12:18
This is one of the daftest articles I've come across in a long while.
If it's possible that so many kids can be taught to code well enough so that wages come down, then that proves that the only reason we've been paying so much for development costs is the scarcity of people able to do it, not that it's intrinsically so hard that only a select few could anyway. In which case, there is no ethical argument for keeping the pools of skilled workers to some select group. Anyone able to do it should have an equal opportunity to do it.
What is the argument for not teaching coding (other than to artificially keep wages high)? Why not stop teaching the three R's, in order to boost white-collar wages in general?
Computing is an ever-increasingly intrinsic part of life, and people need to understand it at all levels. It is not just unfair, but tantamount to neglect, to fail to teach children all the skills they may require to cope as adults.
Having said that, I suspect that in another generation or two a good many lower-level coding jobs will be redundant anyway, with such code being automatically generated, and "coders" at this level will be little more than technicians setting various parameters. Even so, understanding the basics behind computing is a part of understanding the world they live in, and every child needs that.
Suggesting that teaching coding is some kind of conspiracy to force wages down is, well... it makes the moon-landing conspiracy look sensible by comparison.
timrichardson -> offworldguy , 22 Sep 2017 12:16
I think it is important to demystify advanced technology; I think that has importance in its own right. Plus, schools should expose kids to things which may spark their interest. Not everyone who does a science project goes on years later to get a PhD, but you'd think that it makes it more likely. Same as giving a kid some music lessons. There is a big difference between serious coding and the basic steps needed to automate a customer service team or a marketing program, but the people who have some mastery over automation will have an advantage in many jobs. Advanced machines are clearly going to be a huge part of our future. What should we do about it, if not teach kids how to understand these tools?
rogerfederere -> William Payne , 22 Sep 2017 12:13
tl;dr.
Mystik Al , 22 Sep 2017 12:08
As automation is about to put 40% of the workforce permanently out of work getting into to tech seems like a good idea!
timrichardson , 22 Sep 2017 12:04
This is like arguing that teaching kids to write is nothing more than a plot to flood the market for journalists. Teaching first aid and CPR does not make everyone a doctor.
Coding is an essential skill for many jobs already: 50 years ago, who would have thought you needed coders to make movies? Being a software engineer, a serious coder, is hard. In fact, it takes more than technical coding to be a software engineer: you can learn to code in a week. Software Engineering is a four-year degree, and even then you've just started a career. But depriving kids of some basic insights may mean they won't have the basic skills needed in the future, even for controlling their car and house. By all means, send your kids to a school that doesn't teach coding. I won't.
James Jones -> vimyvixen , 22 Sep 2017 11:41
Did you learn SNOBOL, or is Snowball a language I'm not familiar with? (Entirely possible; as an American I never would have known Extended Mercury Autocode existed were it not for a random book acquisition at my home town library when I was a kid.)
William Payne , 22 Sep 2017 11:17
The tide that is transforming technology jobs from "white collar professional" into "blue collar industrial" is part of a larger global economic cycle.

Successful "growth" assets inevitably transmogrify into "value" and "income" assets as they progress through the economic cycle. The nature of their work transforms also. No longer focused on innovation; on disrupting old markets or forging new ones; their fundamental nature changes as they mature into optimising, cost reducing, process oriented and most importantly of all -- dividend paying -- organisations.

First, the market invests. And then, .... it squeezes.

Immature companies must invest in their team; must inspire them to be innovative so that they can take the creative risks required to create new things. This translates into high skills, high wages and "white collar" social status.

Mature, optimising companies on the other hand must necessarily avoid risks and seek variance-minimising predictability. They seek to control their human resources; to eliminate creativity; to make the work procedural, impersonal and soulless. This translates into low skills, low wages and "blue collar" social status.

This is a fundamental part of the economic cycle; but it has been playing out on the global stage, which has had the effect of hiding some of its effects.

Over the past decades, technology knowledge and skills have flooded away from "high cost" countries and towards "best cost" countries at a historically significant rate. Possibly at the maximum rate that global infrastructure and regional skills pools can support. Much of this necessarily inhumane and brutal cost cutting and deskilling has therefore been hidden by the tide of outsourcing and offshoring. It is hard to see the nature of the jobs change when the jobs themselves are changing hands at the same time.

The ever tighter ratchet of dehumanising industrialisation; productivity and efficiency continues apace, however, and as our global system matures and evens out, we see the seeds of what we have sown sail home from over the sea.

Technology jobs in developed nations have been skewed towards "growth" activities since for the past several decades most "value" and "income" activities have been carried out in developing nations. Now, we may be seeing the early preparations for the diffusion of that skewed, uneven and unsustainable imbalance.

The good news is that "Growth" activities are not going to disappear from the world. They just may not be so geographically concentrated as they are today. Also, there is a significant and attention-worthy argument that the re-balancing of skills will result in a more flexible and performant global economy as organisations will better be able to shift a wider variety of work around the world to regions where local conditions (regulation, subsidy, union activity etc...) are supportive.

For the individuals concerned it isn't going to be pretty. And of course it is just another example of the race to the bottom that pits states and public sector purse-holders against one another to win the grace and favour of globally mobile employers.

As a power play move it has a sort of inhumanly psychotic inevitability to it which is quite awesome to observe.

I also find it ironic that the only way to tame the leviathan that is the global free-market industrial system might actually be effective global governance and international cooperation within a rules-based system.

Both "globalist", but not even slightly the same thing.

Vereto -> Wiretrip , 22 Sep 2017 11:17
Not just coders; it puts even IT Ops guys into this bin. Basically the good old "so you work with computers" line I used to hear a lot 10-15 years ago.
Sangmin , 22 Sep 2017 11:15
You can teach everyone how to code but it doesn't necessarily mean everyone will be able to work as one. We all learn math but that doesn't mean we're all mathematicians. We all know how to write but we're not all professional writers.

I have a graduate degree in CS and have been to a coding bootcamp. Not everyone's brain is wired to become a successful coder. There is a particular way coders think. The quality of a product will stand out based on these differences.

Vereto -> Jared Hall , 22 Sep 2017 11:12
It is very hyperbolic to assume that the profit in those companies comes from decreasing wages. In my company the profit is driven by the ability to deliver products to the market. And that is limited by the number of top people (not just any coder) you can have.
KDHughes -> kcrane , 22 Sep 2017 11:06
You realise that the arts are massively oversupplied and that most artists earn very little, if anything? Which is sort of like the situation the author is warning about. But hey, he knows nothing. Congratulations, though, on writing one of the most pretentious posts I've ever read on CIF.
offworldguy -> Melissa Boone , 22 Sep 2017 10:21
So you know kids, college age people and software developers who enjoy doing it in their leisure time? Do you know any middle aged mothers, fathers, grandparents who enjoy it and are not software developers?

Sorry, I don't see coding as a leisure pursuit that is going to take off beyond a very narrow demographic and if it becomes apparent (as I believe it will) that there is not going to be a huge increase in coding job opportunities then it will likely wither in schools too, perhaps replaced by music lessons.

Bread Eater , 22 Sep 2017 10:02
From their perspective yes. But there are a lot of opportunities in tech so it does benefit students looking for jobs.
Melissa Boone -> jamesbro , 22 Sep 2017 10:00
No, because software developers probably fail more often than they succeed. Building anything worthwhile is an iterative process. And it's not just the compiler but the other devs, your designer, your PM, all looking at your work.
Melissa Boone -> peterainbow , 22 Sep 2017 09:57
It's not shallow or lazy. I also work at a tech company and it's pretty common to do that across job fields. Even in HR marketing jobs, we hire students who can't point to an internship or other kind of experience in college, not simply grades.
Vereto -> savingUK , 22 Sep 2017 09:50
It will take ages, the issue of Indian programmers is in the education system and in "Yes boss" culture.

But on the other hand, most Americans are just as bad as the Indians.

Melissa Boone -> offworldguy , 22 Sep 2017 09:50
A lot of people do find it fun. I know many kids - high school and young college age - who code in their leisure time because they find it pleasurable to make small apps and video games. I myself enjoy it too. Your argument is like saying that since you don't like to read books in your leisure time, nobody else must.

The point is your analogy isn't a good one - people who learn to code can not only enjoy it in their spare time just like music, but they can also use it to accomplish all kinds of basic things. I have a friend who's a software developer who has used code to program his Roomba to vacuum in a specific pattern and to play Candy Land with his daughter when they lost the spinner.

Owlyrics -> CapTec , 22 Sep 2017 09:44
Creativity could be added to your list. Anyone can push a button but only a few can invent a new one.
One company in the US (after it was taken over by a new owner) decided it was more profitable to import button pushers from off-shore; they lost 7 million customers (gamers) and had to employ more of the original American developers to maintain their high standard and profits.
Owlyrics -> Maclon , 22 Sep 2017 09:40
Masters is the new Bachelors.
Maclon , 22 Sep 2017 09:22
So similar to 500k people a year going to university (UK) now, when it used to be 60k people a year (1980). There were never enough graduate jobs in 1980, so I can't see where the sudden increase in the need for graduates has come from.
PaulDavisTheFirst -> Ethan Hawkins , 22 Sep 2017 09:17

They aren't really crucial pieces of technology except for their popularity

It's early in the day for me, but this is the most ridiculous thing I've read so far, and I suspect it will be high up on the list by the end of the day.

There's no technology that is "crucial" unless it's involved in food, shelter or warmth. The rest has its "crucialness" decided by how widespread its use is, and in the case of those 3 languages, the answer is "very".

You (or I) might not like that very much, but that's how it is.

Julian Williams -> peter nelson , 22 Sep 2017 09:12
My benchmark would be if the average new graduate in the discipline earns more or less than one of the "professions", Law, medicine, Economics etc. The short answer is that they don't. Indeed, in my experience of professions, many good senior SW developers, say in finance, are paid markedly less than the marketing manager, CTO etc. who are often non-technical.

My benchmark is not "has a car, house etc." but what 10, 15, 20 years of experience in the area generates as a relative income compared to another profession, like being a GP or a corporate solicitor or a civil servant (which is usually the benchmark academics use for pay scaling). It is not to denigrate, just to say that markets don't always clear to a point where the most skilled are the highest paid.

I was also suggesting that even if you are not intending to work in the SW area, being able to translate your imagination into a program that reflects your ideas is a nice life skill.

AmyInNH -> freeandfair , 22 Sep 2017 09:05
Your assumption has no basis in reality. In my experience, as soon as Clinton ramped up H-1Bs, my employer would invite six candidates from the same college/degree/curriculum in for interviews - 5 citizens, 1 foreign student - and default the offer to the foreign student without asking the interviewers a single question about the interviews. Eventually, they skipped the farce of interviewing citizens altogether. That was in 1997, and it's only gotten worse. Wall St's been pretty blunt lately. It openly admits replacing US workers with imported labor, as it's the "easiest" way to "grow" the economy, even though they know they are ousting citizens from their jobs to do so.
AmyInNH -> peter nelson , 22 Sep 2017 08:59
"People who get Masters and PhDs in computer science" - feed western universities money, for degree programs that would otherwise not exist, due to lack of market demand.
"someone has a Bachelor's in CS" - as citizens, having the same college/same curriculum/same grades as the foreign grad. But as citizens, they have job market mobility, and therefore are shunned.
"you can make something real and significant on your own" - if someone else is paying your rent, food and student loans while you do so.
Ethan Hawkins -> farabundovive , 22 Sep 2017 07:40
While true, it's not the coders' fault. The managers and execs above them have intentionally created an environment where these things are secondary. What's primary is getting the stupid piece of garbage out the door for the quarterly profit outlook. Ship it and patch it.
offworldguy -> millartant , 22 Sep 2017 07:38
Do most people find it fun? I can code. I don't find it 'fun'. Thirty years ago as a young graduate I might have found it slightly fun but the 'fun' wears off pretty quick.
Ethan Hawkins -> anticapitalist , 22 Sep 2017 07:35
In my estimation PHP is an utter abomination. Python is just a little better but still very bad. Ruby is a little better but still not at all good.

Languages like PHP, Python and JS are popular for banging out prototypes and disposable junk, but you greatly overestimate their importance. They aren't really crucial pieces of technology except for their popularity and while they won't disappear they won't age well at all. Basically they are big long-lived fads. Java is now over 20 years old and while Java 8 is not crucial, the JVM itself actually is crucial. It might last another 20 years or more. Look for more projects like Ceylon, Scala and Kotlin. We haven't found the next step forward yet, but it's getting more interesting, especially around type systems.

A strong developer will be able to code well in a half dozen languages and have fairly decent knowledge of a dozen others. For me it's been many years of: Z80, x86, C, C++, Java. Also know some Perl, LISP, ANTLR, Scala, JS, SQL, Pascal, others...

millartant -> Islingtonista , 22 Sep 2017 07:26
You need a decent IDE
millartant -> offworldguy , 22 Sep 2017 07:24

One is hardly likely to 'do a bit of coding' in ones leisure time

Why not? The right problem is a fun and rewarding puzzle to solve. I spend a lot of my leisure time "doing a bit of coding"

Ethan Hawkins -> Wiretrip , 22 Sep 2017 07:12
The worst of all are the academics (on average).
Ethan Hawkins -> KatieL , 22 Sep 2017 07:09
This makes people like me with 35 years of experience shipping products on deadlines up and down every stack (from device drivers and operating systems to programming languages, platforms and frameworks to web, distributed computing, clusters, big data and ML) so much more valuable. Been there, done that.
Ethan Hawkins -> Taylor Dotson , 22 Sep 2017 07:01
It's just not true. In SV there's this giant vacuum created by Apple, Google, FB, etc. Other good companies struggle to fill positions. I know from being on the hiring side at times.
TheBananaBender -> peter nelson , 22 Sep 2017 07:00
You don't work for a major outsourcer then like Serco, Atos, Agilisys
offworldguy -> LabMonkey , 22 Sep 2017 06:59
Plenty of people? I don't know a single person outside of my work, which is teeming with programmers. Not a single friend, not my neighbours, not my wife or her extended family, not my parents. Plenty of people might do it, but most people don't.
Ethan Hawkins -> finalcentury , 22 Sep 2017 06:56
Your ignorance of coding is showing. Coding IS creative.
Ricardo111 -> peter nelson , 22 Sep 2017 06:56
Agreed: by gifted I did not mean innate. It's more of a mix of having the interest, the persistence, the time, the opportunity and actually enjoying that kind of challenge.

While some of those things are to a large extent innate personality traits, others are not and you don't need max of all of them, you just need enough to drive you to explore that domain.

That said, somebody who goes into coding purely for the money, and does it for the money alone, is extremely unlikely to become an exceptional coder.

Ricardo111 -> eirsatz , 22 Sep 2017 06:50
I'm as senior as they get and have interviewed quite a lot of programmers for several positions, including for Technical Lead (in fact, to replace me). So far my experience leads me to believe that people who don't have a knack for coding are much less likely to expose themselves to many different languages and techniques, and are less experimentalist, thus being far less likely to have those moments of transcending mere awareness of the visible and obvious to discover the concerns and concepts behind what one does. Without those moments that open the door to the next universe of concerns and implications, one cannot make transitions such as Coder to Technical Designer, or Technical Designer to Technical Architect.

Sure, you can get the title and do the things from the books, but you will not get WHY those things are supposed to work (and when they will not), and thus cannot adjust to new conditions effectively; you will be like a sailor who can't sail out of sight of the coast because he can't navigate.

All this gets reflected in many things that enhance productivity, from the early ability to quickly piece together solutions for a new problem out of past solutions to different problems to, later, conceiving software architecture designs fitted to the typical usage patterns of the industry for which the software is being made.

LabMonkey , 22 Sep 2017 06:50
From the way our IT department is going, needing millions of coders is not the future. It'll be a minority of developers at the top, and an army of low wage monkeys at the bottom who can troubleshoot from a script - until AI comes along that can code faster and more accurately.
LabMonkey -> offworldguy , 22 Sep 2017 06:46

One is hardly likely to 'do a bit of coding' in one's leisure time

Really? I've programmed a few simple videogames in my spare time. Plenty of people do.

CapTec , 22 Sep 2017 06:29
Interesting piece that's fundamentally flawed. I'm a software engineer myself. There is a reason a University education of a minimum of three years is the base line for a junior developer or 'coder'.

Software engineering isn't just writing code. I would say 80% of my time is spent designing and structuring software before I even touch the code.

Explaining software engineering as a discipline at a high level to people who don't understand it is simple.

Most of us who learn to drive learn a few basics about the mechanics of a car. We know that brake pads need to be replaced, we know that fuel is pumped into an engine when we press the gas pedal. Most of us know how to change a bulb if it blows.

The vast majority of us wouldn't be able to replace a head gasket or clutch though. Just knowing the basics isn't enough to make you a mechanic.

Studying in school isn't enough to produce software engineers. Software engineering isn't just writing code; it's cross-discipline. We also need to understand the science behind the computer; we need to understand logic, data structures, timings, how to manage memory, security, how databases work, etc.

A few years of learning at school isn't nearly enough, a degree isn't enough on its own due to the dynamic and ever evolving nature of software engineering. Schools teach technology that is out of date and typically don't explain the science very well.

This is why most companies don't want new developers, they want people with experience and multiple skills.

Programming is becoming cool and people think that because of that it's easy to become a skilled developer. It isn't. It takes time and effort and most kids give up.

French was on the national curriculum when I was at school. Most people including me can't hold a conversation in French though.

Ultimately there is a SKILL shortage. And that's because skill takes a long time, and many successes and failures, to acquire. Most people just give up.

This article is akin to saying 'schools are teaching basic health to reduce the wages of Doctors'. It didn't happen.

offworldguy -> thecurio , 22 Sep 2017 06:19
There is a difference. When you teach people music you teach a skill that can be used for a lifetime's enjoyment. One might sit at a piano in later years and play. One is hardly likely to 'do a bit of coding' in one's leisure time.

The other thing is how good are people going to get at coding and how long will they retain the skill if not used? I tend to think maths is similar to coding and most adults have pretty terrible maths skills not venturing far beyond arithmetic. Not many remember how to solve a quadratic equation or even how to rearrange some algebra.

One more thing is we know that if we teach people music they will find a use for it, if only in their leisure time. We don't know that coding will be in any way useful because we don't know if there will be coding jobs in the future. AI might take over coding but we know that AI won't take over playing piano for pleasure.

If we want to teach logical thinking then I think maths has always done this and we should make sure people are better at maths.

Alex Mackaness , 22 Sep 2017 06:08
Am I missing something here? Being able to code is a skill that is a useful addition to the skill armoury of a youngster entering the work place. Much like reading, writing, maths... Not only is it directly applicable and pervasive in our modern world, it is built upon logic.

The important point is that American schools are not ONLY teaching youngsters to code, and producing one dimensional robots... instead coding makes up one part of their overall skill set. Those who wish to develop their coding skills further certainly can choose to do so. Those who specialise elsewhere are more than likely to have found the skills they learnt whilst coding useful anyway.

I struggle to see how there is a hidden capitalist agenda here. I would argue learning the basics of coding is simply becoming seen as an integral part of the school curriculum.

thecurio , 22 Sep 2017 05:56
The word "coding" is shorthand for "computer programming" or "software development" and it masks the depth and range of skills that might be required, depending on the application.

This subtlety is lost, I think, on politicians and perhaps the general public. Asserting that teaching lots of people to code is a sneaky way to commoditise an industry might have some truth to it, but remember that commoditisation (or "sharing and re-use" as developers might call it) is nothing new. The creation of freely available and re-usable software components and APIs has driven innovation, and has put much power in the hands of developers who would not otherwise have the skill or time to tackle such projects.

There's nothing to fear from teaching more people to "code", just as there's nothing to fear from teaching more people to "play music". These skills simply represent points on a continuum.

There's room for everyone, from the kid on a kazoo all the way to Coltrane at the Village Vanguard.

sbw7 -> ragingbull , 22 Sep 2017 05:44
I taught CS. Out of around 100 graduates I'd say maybe 5 were reasonable software engineers. The rest would be fine in tech support or other associated trades, but not writing software. It's not just a set of trainable skills; it's a set of attitudes and ways of perceiving and understanding that just aren't that common.
offworldguy , 22 Sep 2017 05:02
I can't understand the rush to teach coding in schools. First of all I don't think we are going to be a country of millions of coders and secondly if most people have the skills then coding is hardly going to be a well paid job. Thirdly you can learn coding from scratch after school like people of my generation did. You could argue that it is part of a well rounded education but then it is as important for your career as learning Shakespeare, knowing what an oxbow lake is or being able to do calculus: most jobs just won't need you to know.
savingUK -> yannick95 , 22 Sep 2017 04:35
While you roll on the floor laughing, these countries will slowly but surely get their act together. That is how they work. There are top-quality coders over there and they will soon be promoted into positions to organise the others.

You are probably too young to remember when people laughed at electronic products when they were made in Japan, then Taiwan. History will repeat itself.

zii000 -> JohnFreidburg , 22 Sep 2017 04:04
Yes it's ironic and no different here in the UK. Traditionally Labour was the party focused on dividing the economic pie more fairly, Tories on growing it for the benefit of all. It's now completely upside down with Tories paying lip service to the idea of pay rises but in reality supporting this deflationary race to the bottom, hammering down salaries and so shrinking discretionary spending power which forces price reductions to match and so more pressure on employers to cut costs ... ad infinitum.
Labour now favour policies which would cause an expansion across the entire economy through pay rises and dramatically increased investment with perhaps more tolerance of inflation to achieve it.
ID0193985 -> jamesbro , 22 Sep 2017 03:46
Not surprising if they're working for a company that is cold-calling people - which should be banned in my opinion. Call centres providing customer support are probably less abuse-heavy since the customer is trying to get something done.
vimyvixen , 22 Sep 2017 02:04
I taught myself to code in 1974. Fortran and COBOL came first. Over the years, as an aerospace engineer, I coded in numerous languages ranging from PL/M, SNOBOL and Basic to more assembly languages than I can recall, not to mention going deep down into machine code on more architectures than most know even existed. Bottom line is that coding is easy. It doesn't take a genius to code, just another way of thinking. Consider all the bugs in the software available now. These "coders", not sufficiently trained, need adult supervision by engineers who know what they are doing for computer systems that are important, such as the electrical grid, nuclear weapons, and safety-critical systems. If you want to program toy apps then code away; if you want to do something important, learn engineering AND coding.
Dwight Spencer , 22 Sep 2017 01:44
Laughable. It takes only an above-average IQ to code. Today's coders are akin to the auto mechanics of the 1950s where practically every high school had auto shop instruction . . . nothing but a source of cheap labor for doing routine implementations of software systems using powerful code libraries built by REAL software engineers.
sieteocho -> Islingtonista , 22 Sep 2017 01:19
That's a bit like saying that calculus is more valuable than arithmetic, so why teach children arithmetic at all?

Because without the arithmetic, you're not going to get up to the calculus.

JohnFreidburg -> Tommyward , 22 Sep 2017 01:15
I disagree. Technology firms are just like other firms. Why then the collusion not to pay more to workers coming from other companies? To believe that they are anything else is naive. The author is correct. We need policies that actually grow the economy and not leaders who cave to what the CEOs want like Bill Clinton did. He brought NAFTA at the behest of CEOs and all it ended up doing was ripping apart the rust belt and ushering in Trump.
Tommyward , 22 Sep 2017 00:53
So the media always needs some bad guys to write about, and this month they seem to have it in for the tech industry. The article is BS. I interview a lot of people to join a large tech company, and I can guarantee you that we aren't trying to find cheaper labor, we're looking for the best talent.

I know that lots of different jobs have been outsourced to low cost areas, but these days the top companies are instead looking for the top talent globally.

I see this article as a hit piece against Silicon Valley, and it flies in the face of the evidence.

finalcentury , 22 Sep 2017 00:46
This has got to be the most cynical and idiotic social interest piece I have ever read in the Guardian. Once upon a time it was very helpful to learn carpentry and machining, but now, even if you are learning those, you will get a big and indispensable head start if you have some logic and programming skills. The fact is, almost no matter what you do, you can apply logic and programming skills to give you an edge. Even journalists.
hoplites99 , 22 Sep 2017 00:02
Yup, rings true. I've been in hi tech for over 40 years and seen the changes. I was in Silicon Valley for 10 years on a startup. India is taking over, my current US company now has a majority Indian executive and is moving work to India. US politicians push coding to drive down wages to Indian levels.

On the bright side I am old enough and established enough to quit tomorrow; it's someone else's problem. But I still despise those who have sold us out, like the Clintons, the Bushes, the Googoids, the Zuckerboids.

liberalquilt -> yannick95 , 21 Sep 2017 23:45
Sure, markets existed before governments, but capitalism didn't, and in fact can't. It needs the organs of state: the banking system, an education system, and an infrastructure.
thegarlicfarmer -> canprof , 21 Sep 2017 23:36
Then teach them other things, but not coding! Here in Australia every child of school age has to learn coding. Now tell me that every one of them will need it? Look beyond computers, as coding will soon be automated just like every other job.
Islingtonista , 21 Sep 2017 22:25
If you have never coded then you will not appreciate how labour intensive it is. Coders effectively use line editors to type in, line by line, the instructions. And syntax is critical; add a comma when you meant a semicolon and the code doesn't work properly. Yeah, we use frameworks and libraries of already written subroutines, but, in the end, it is all about manually typing in the code.
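The point about syntax sensitivity can be shown in a couple of lines. As a toy sketch of my own (the strings below are not from the comment), Python's built-in `compile` demonstrates how a single stray character stops a program before it even runs:

```python
# One wrong character is enough to make code unusable: the parser
# rejects it outright rather than "mostly" running it.
good = "total = 1 + 2"
bad = "total = 1 +, 2"  # a stray comma where an operand should be

compile(good, "<demo>", "exec")  # accepted without complaint

try:
    compile(bad, "<demo>", "exec")
except SyntaxError:
    print("rejected: invalid syntax")
```

There is no partial credit: every character still has to be typed, and typed correctly, by hand.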

Which is an expensive way of doing things (hence the attractions of 'off-shoring' the coding task to low cost economies in Asia).

And this is why teaching kids to code is a waste of time.

Already, AI based systems are addressing the task of interpreting high level design models and simply generating the required application.

One of the first uses templates and a smart chatbot to enable non-tech business people to build their websites. By describing in non-coding terms what they want, the chatbot is able to assemble the necessary components and make the requisite template amendments to build a working website.

Much cheaper than hiring expensive coders to type it all in manually.

It's early days yet, but coding may well be one of the big losers to AI automation along with all those back office clerical jobs.

Teaching kids how to think about design rather than how to code would be much more valuable.

jamesbro -> peter nelson , 21 Sep 2017 21:31
Thick-skinned? Just because you might get a few error messages from the compiler? Call centre workers have to put up with people telling them to fuck off eight hours a day.
Joshua Ian Lee , 21 Sep 2017 21:03
Spot on. Society will never need more than 1% of its people to code. We will need far more garbage men. There are only so many (relatively) good jobs to go around, and it's about competing to get them.
canprof , 21 Sep 2017 20:53
I'm a professor (not of computer science) and yet, I try to give my students a basic understanding of algorithms and logic, to spark an interest and encourage them towards programming. I have no skin in the game, except that I've seen unemployment first-hand, and want them to avoid it. The best chance most of them have is to learn to code.
Evelita , 21 Sep 2017 14:35
Educating youth does not drive wages down. It drives our economy up. China, India, and other countries are training youth in programming skills. Educating our youth means that they will be able to compete globally. This is the standard GOP stand that we don't need to educate our youth, but instead fantasize about high-paying manufacturing jobs miraculously coming back.

Many jobs, including new manufacturing jobs have an element of coding because they are automated. Other industries require coding skills to maintain web sites and keep computer systems running. Learning coding skills opens these doors.

Coding teaches logic, an essential thought process. Learning to code, like learning anything, increases the brain's ability to adapt to new environments, which is essential to our survival as a species. We must invest in educating our youth.

cwblackwell , 21 Sep 2017 13:38
"Contrary to public perception, the economy doesn't actually need that many more programmers." This really looks like a straw man introducing a red herring. A skill can be extremely valuable for those who do not pursue it as a full time profession.

The economy doesn't actually need that many more typists, pianists, mathematicians, athletes, dietitians. So, clearly, teaching typing, the piano, mathematics, physical education, and nutrition is a nefarious plot to drive down salaries in those professions. None of those skills could possibly enrich the lives or enhance the productivity of builders, lawyers, public officials, teachers, parents, or store managers.

DJJJJJC , 21 Sep 2017 14:23

A study by the Economic Policy Institute found that the supply of American college graduates with computer science degrees is 50% greater than the number hired into the tech industry each year.

You're assuming that all those people are qualified to work in software because they have a piece of paper that says so, but that's not a valid assumption. The quality of computer science degree courses is generally poor, and most people aren't willing or able to teach themselves. Universities are motivated to award degrees anyway because if they only awarded degrees to students who are actually qualified then that would reflect very poorly on their quality of teaching.

A skills shortage doesn't mean that everyone who claims to have a skill gets hired and there are still some jobs left over that aren't being done. It means that employers are forced to hire people who are incompetent in order to fill all their positions. Many people who get jobs in programming can't really do it and do nothing but create work for everyone else. That's why most of the software you use every day doesn't work properly. That's why competent programmers' salaries are still high in spite of the apparently large number of "qualified" people who aren't employed as programmers.

#### [Oct 02, 2017] Programming vs coding

##### "... A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed. ..."
###### Oct 02, 2017 | profile.theguardian.com
Wiretrip -> Mark Mauvais , 21 Sep 2017 14:23
Yes, 'engineers' (and particularly mathematicians) write appalling code.
Trumbledon , 21 Sep 2017 14:23
A good developer can easily earn £600-800 per day, which suggests to me that they are in high demand, and society needs more of them.
Wiretrip -> KatieL , 21 Sep 2017 14:22
Agreed: to many people, 'coding' consists of copying other people's JavaScript snippets from StackOverflow... I tire of the many frauds in the business...
stratplaya , 21 Sep 2017 14:21
You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field.
peter nelson -> UncommonTruthiness , 21 Sep 2017 14:21

The ship has sailed on this activity as a career.

Oh, rubbish. I'm in the process of retiring from my job as an Android software designer so I'm tasked with hiring a replacement for my organisation. It pays extremely well, the work is interesting, and the company is successful and serves an important worldwide industry.

Still, finding highly-qualified people is hard and they get snatched up in mid-interview because the demand is high. Not only that but at these pay scales, we can pretty much expect the Guardian will do yet another article about the unconscionable gap between what rich, privileged techies like software engineers make and everyone else.

Really, we're damned if we do and damned if we don't. If tech workers are well-paid we're castigated for gentrifying neighbourhoods and living large, and yet anything that threatens to lower what we're paid produces conspiracy-theory articles like this one.

Fanastril -> Taylor Dotson , 21 Sep 2017 14:17
I learned to cook in school. Was there a shortage of cooks? No. Did I become a professional cook? No. but I sure as hell would not have missed the skills I learned for the world, and I use them every day.
KatieL -> Taylor Dotson , 21 Sep 2017 14:13
Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to.
youngsteveo -> Taylor Dotson , 21 Sep 2017 14:12
I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fizz-buzz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it.
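The actual exam questions aren't given in the comment, but a hypothetical task of the kind described ("a small amount of code that solves a simple real world problem") might look like this: total the amount spent per customer from a list of order records.

```python
# Hypothetical hiring-exercise example (my own invention, not the
# commenter's actual question): aggregate spend per customer.
from collections import defaultdict

def spend_per_customer(orders):
    """Given (customer, amount) pairs, return total spend per customer."""
    totals = defaultdict(float)
    for customer, amount in orders:
        totals[customer] += amount
    return dict(totals)

orders = [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)]
print(spend_per_customer(orders))  # {'alice': 37.5, 'bob': 12.5}
```

Nothing clever is required, which is exactly the point: the task only filters out candidates who cannot write working code at all.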
Fanastril -> Taylor Dotson , 21 Sep 2017 14:11
It is not zero-sum: If you teach something empowering, like programming, motivating is a lot easier, and they will learn more.
UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented.

Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

KatieL -> Taylor Dotson , 21 Sep 2017 14:10
"intelligence, creativity, diligence, communication ability, or anything else that a job"

None of those are any use if, when asked to turn your intelligent, creative, diligent, communicated idea into some software, you perform as well as most candidates do at simple coding assessments... and write stuff that doesn't work.

peter nelson , 21 Sep 2017 14:09

At its root, the campaign for code education isn't about giving the next generation a shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

Of course the writer does not offer the slightest shred of evidence to support the idea that this is the actual goal of these programs. So it appears that the tinfoil-hat conspiracy brigade on the Guardian is operating not only below the line, but above it, too.

The fact is that few of these students will ever become software engineers (which, incidentally, is my profession) but programming skills are essential in many professions for writing little scripts to automate various tasks, or to just understand 21st century technology.

kcrane , 21 Sep 2017 14:07
Sadly this is another article by a partial journalist who knows nothing about the software industry, but hopes to subvert what he has read somewhere to support a position he had already assumed. As others have said, understanding coding has already become akin to being able to use a pencil. It is a basic requirement of many higher-level roles.

But knowing which end of a pencil to put on the paper (the equivalent of the level of coding taught in schools) isn't the same as being an artist. Moreover, anyone who knows the field recognises that top coders are gifted; they embody genius. There are coding Caravaggios out there, but few have the experience to know that. No amount of teaching will produce high-level coders from average humans; there is an intangible something needed, as there is in music and art, to elevate the merely good to genius.

All to say, however many are taught the basics, it won't push down the value of the most talented coders, and so won't reduce the costs of the technology industry in any meaningful way as it is an industry, like art, that relies on the few not the many.

DebuggingLife , 21 Sep 2017 14:06
Not all of those children will want to become programmers, but at least the barrier to entry (for more to at least experience it) will be lower.

Teaching music to only the children whose parents can afford music tuition means that society misses out on a greater potential for some incredibly gifted musicians to shine through.

Moreover, learning to code really means learning how to wrangle with the practical application of abstract concepts, algorithms, numerical skills, logic, reasoning, etc., which are all transferable skills, some of which are not in the scope of other classes, certainly practically.
Like music, sport, literature etc., programming a computer, a website, a device, a smartphone is an endeavour that can be truly rewarding as merely a pastime, and similarly is limited only by one's imagination.

rgilyead , 21 Sep 2017 14:01
"...coding is not magic. It is a technical skill, akin to carpentry." I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building from components of different materials. If you think teaching coding is enough to enable software design and delivery, then good luck.
Taylor Dotson -> cwblackwell , 21 Sep 2017 14:00
Yeah, but mania over coding skills inevitably pushes other skills out of the curriculum (or deemphasizes them). Education is zero-sum in that there's only so much time and energy to devote to it. Hence, you need more than vague appeals to "enhancement", especially given the risks pointed out by the author.
Taylor Dotson -> PolydentateBrigand , 21 Sep 2017 13:57
"Talented coders will start new tech businesses and create more jobs."

That could be argued for any skill set, including those found in the humanities and social sciences likely to be pushed out by the mania over coding ability. Education is zero-sum: time spent on one subject is time that invariably can't be spent learning something else.

Taylor Dotson -> WumpieJr , 21 Sep 2017 13:49
"If they can't literally fix everything let's just get rid of them, right?"

That's a strawman. His point is rooted in the recognition that we only have so much time, energy, and money to invest in solutions. Ones that feel good but may not do anything distract us from the deeper structural issues in our economy. The problem with thinking "education" will fix everything is that it leaves the status quo unquestioned.

martinusher , 21 Sep 2017 13:31
Being able to write code and being able to program are two very different skills. In language terms its the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down.

To confuse things further there's various levels of skill that all look the same to the untutored eye. Suppose you wished to bridge a waterway. If that waterway was a narrow ditch then you could just throw a plank across. As the distance to be spanned got larger and larger eventually you'd have to abandon intuition for engineering and experience. Exactly the same issues happen with software but they're less tangible; anyone can build a small program but a complex system requires a lot of other knowledge (in my field, that's engineering knowledge -- coding is almost an afterthought).

Its a good idea to teach young people to code but I wouldn't raise their expectations of huge salaries too much. For children educating them in wider, more general, fields and abstract activities such as music will pay off huge dividends, far more than just teaching them whatever the fashionable language du jour is. (...which should be Logo but its too subtle and abstract, it doesn't look "real world" enough!).

freeandfair , 21 Sep 2017 13:30
I don't see this as an issue. Sure, there could be ulterior motives there, but anyone who wants to still be employed in 20 years has to know how to code. It is not that everyone will be a coder, but their jobs will either include part-time coding or will require understanding of software and what it can and cannot do. AI is going to be everywhere.
WumpieJr , 21 Sep 2017 13:23
What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.
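A minimal example of a "simple algorithm" in the sense used above (my own illustration, not from the comment): binary search halves the remaining range at every step, so finding an item among a million sorted entries takes about 20 comparisons rather than a million.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid      # target is at mid or in the lower half
    if lo < len(sorted_items) and sorted_items[lo] == target:
        return lo
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))  # 3
print(binary_search([2, 5, 8, 13, 21], 4))   # -1
```

Grasping why the halving works, and why it requires sorted input, is the kind of general reasoning skill being described.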

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything, let's just get rid of them, right?

Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.

youngsteveo , 21 Sep 2017 13:16
I'm not going to argue that the goal of mass education isn't to drive down wages, but the idea that the skills gap is a myth doesn't hold water in my experience. I'm a software engineer and manager at a company that pays well over the national average, with great benefits, and it is downright difficult to find a qualified applicant who can pass a rudimentary coding exam.

A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed.

#### [Oct 02, 2017] Does programming provide a new path to the middle class? Probably no longer, unless you are really talented. In the latter case it is not that different from other fields, but the pressure from H1B visas makes it harder for programmers. The neoliberal USA has a real problem with social mobility

##### "... Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible. ..."
###### Oct 02, 2017 | discussion.theguardian.com
swelle , 21 Sep 2017 17:36
I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the talent here' is the main excuse, though many 'older' (read: over 40) native-born tech workers will tell you there's plenty of talent here already; but even with the immigration hassles, H1B workers will be cheaper overall...
This is interesting. Indeed, I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security. However, these jobs are usually occupied and the incumbents are not likely to move on quickly. Road blocks are also put up by creating sub networks of engineers who ensure that some knowledge is not ubiquitous.

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.

Still, the ability to write a computer program is an enabler; knowing how it works means you have the ability to imagine something and make it real. To me it is a bit like language: some people can use language to make more money than others, but it is still important to have a basic level of understanding.

FabBlondie -> peter nelson , 21 Sep 2017 17:42
And yet I know a lot of people that has happened to. Better to replace a $125K-a-year programmer with one who will do the same job, or even less, for $50K.
This could backfire if the programmers don't find the work or pay to match their expectations... Programmers, after all, tend to make very good hackers if their minds are turned to it.

freeandfair -> FabBlondie , 21 Sep 2017 18:23

> While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.

Well, I am a software architect and what he says sounds correct for a certain type of applications. Maybe you do a different type of programming.

peter nelson -> FabBlondie , 21 Sep 2017 18:23

While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.

How else can you do it?

Java is popular because it's a very versatile language - on this list it's the most popular general-purpose programming language. (Above it, JavaScript is just a scripting language and HTML/CSS aren't even programming languages.) https://fossbytes.com/most-used-popular-programming-languages/ ... and below it you have to go down to C# at 20% to come to another general-purpose language, and even that's a Microsoft house language.

Also the "correct" choice of programming languages is also based on how many people in the shop know it so they maintain code that's written in it by someone else.

freeandfair -> FabBlondie , 21 Sep 2017 18:22
> job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training.

Well, it is either that or the kids themselves have to pay for it, and they are even less prepared to do so. Ideally, college education should be taxpayer-funded, but this is not the case in the US. And ideally the employer should pay for job-related training, but again, that is not the case in the US.

freeandfair -> mlzarathustra , 21 Sep 2017 18:20
> The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the buck

Plenty of people care about the arts but people can't survive on what the arts pay. That was pretty much the case all through human history.

theindyisbetter -> Game Cabbage , 21 Sep 2017 18:18
No. The amount of work is not a fixed sum. That's the lump of labour fallacy. We are not tied to the land.
ConBrio , 21 Sep 2017 18:10
Since newspapers are consolidating and cutting jobs, we gotta clamp down on colleges offering BA degrees, particularly in English Literature and journalism.

And then... and...then...and...

LMichelle -> chillisauce , 21 Sep 2017 18:03
This article focuses on the US schools, but I can imagine it's the same in the UK. I don't think these courses are going to be about creating great programmers capable of new innovations as much as having a work force that can be their own IT Help Desk.

They'll learn just enough in these classes to do that.

Then most companies will be hiring for other jobs, but want to make sure you have the IT skills to serve as your own "help desk" (although they will get no salary for their IT work).

edmundberk -> FabBlondie , 21 Sep 2017 17:57
I find that quite remarkable - 40 years ago you must have been using assembler and with hardly any memory to work with. If you blitzed through that without applying the thought processes described, well...I'm surprised.
James Dey , 21 Sep 2017 17:55
Funny. Every day in the Brexit articles, I read that increasing the supply of workers has negligible effect on wages.
peter nelson -> peterainbow , 21 Sep 2017 17:54
I was laid off at your age in the depths of the recent recession and I got a job. As I said in another posting, it usually comes down to fresh skills and good personal references who will vouch for your work-habits and how well you get on with other members of your team.

The great thing about software, as opposed to many other jobs, is that it can be done at home while you're laid off. Write mobile (iOS or Android) apps or work on open-source projects and get stuff up on GitHub. I've been to many job interviews with my apps loaded on mobile devices so I could show them what I've done.

Game Cabbage -> theindyisbetter , 21 Sep 2017 17:52
The situation has a direct comparison to today. It has nothing to do with land. There was a certain amount of profit making work and not enough labour to satisfy demand. There is currently a certain amount of profit making work and in many situations (especially unskilled low paid work) too much labour.
edmundberk , 21 Sep 2017 17:52
So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?

Or is this the most obtuse argument yet for avoiding what everyone in tech knows - even more blatantly than in many other industries, wages are curtailed by offshoring; and in the US, by having offshoring centres on US soil.

chillisauce , 21 Sep 2017 17:48
Well, speaking as someone who spends a lot of time trying to find really good programmers... frankly there aren't that many about. We take most of ours from Eastern Europe and SE Asia, which is quite expensive, given the relocation costs to the UK. But worth it.

So, yes, if more British kids learnt about coding, it might help a bit. But not much; the real problem is that few kids want to study IT in the first place, and that the tuition standards in most UK universities are quite low, even if they get there.

Baobab73 , 21 Sep 2017 17:48
True......
peter nelson -> rebel7 , 21 Sep 2017 17:47
There was recently a programme/podcast on ABC/RN about the HUGE shortage in Australia of techies with specialized security skills.
peter nelson -> jigen , 21 Sep 2017 17:46
Robots, or AI, are already making us more productive. I can write programs today in an afternoon that would have taken me a week a decade or two ago.

I can create a class and the IDE will take care of all the accessors and dependencies, enforce our style-guide compliance, stub in the documentation, even most test cases, etc., and all I have to write is the very specific stuff required by my application - the other 90% is generated for me. Same with UI/UX - it stubs in relevant event handlers, bindings, dependencies, etc.

Programmers are a zillion times more productive than in the past, yet the demand keeps growing because so much more stuff in our lives has processors and code. Your car has dozens of processors running lots of software; your TV, your home appliances, your watch, etc.
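The boilerplate generation described in the comment above can be made concrete. A minimal sketch in Java, using a hypothetical `Account` class: only the two fields are written by hand; the accessors and `toString` are exactly the kind of code an IDE such as IntelliJ or Eclipse stubs in on request.

```java
// Hypothetical example: only the two fields are hand-written.
// Everything below them is the kind of boilerplate a modern IDE
// generates automatically (accessors, toString, etc.).
public class Account {
    private String owner;
    private long balance;

    // --- generated-style accessors ---
    public String getOwner() { return owner; }
    public void setOwner(String owner) { this.owner = owner; }
    public long getBalance() { return balance; }
    public void setBalance(long balance) { this.balance = balance; }

    // --- generated-style toString ---
    @Override
    public String toString() {
        return "Account{owner='" + owner + "', balance=" + balance + "}";
    }
}
```

For a class with a dozen fields, the hand-written portion stays at a dozen lines while the generated portion grows to fill most of the file - which is the "90% generated" ratio the commenter describes.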

Quaestor , 21 Sep 2017 17:43

Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible.

jamesupton , 21 Sep 2017 17:42
Getting children to learn how to write code, as part of core education, will be the first step to the long overdue revolution. The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.
cjenk415 -> LMichelle , 21 Sep 2017 17:40
did you misread? it seemed like he was emphasizing that learning to code, like learning art (and sports and languages), will help them develop skills that benefit them in whatever profession they choose.
FabBlondie -> peter nelson , 21 Sep 2017 17:40
While I like your idea of what designing a computer program involves, in my nearly 40 years' experience as a programmer I have rarely seen this done. And FWIW, IMHO, while choosing the tool (programming language) might reasonably be expected to follow designing a solution, in practice this rarely happens. No, these days it's Java all the way, from day one.
theindyisbetter -> Game Cabbage , 21 Sep 2017 17:40
There was a fixed supply of land and a reduced supply of labour to work the land.

Nothing like the situation in a modern economy.

LMichelle , 21 Sep 2017 17:39
I'd advise parents that the classes they need to make sure their kids excel in are acting/drama. There is no better way of getting that promotion or increasing your pay than being a skilled actor in the job market. It's a fake-it-till-you-make-it deal.
theindyisbetter , 21 Sep 2017 17:36
What a ludicrous argument.

Let's not teach maths or science or literacy either - then anyone with those skills will earn more.

SheriffFatman -> Game Cabbage , 21 Sep 2017 17:36

After the Black Death in the middle ages there was a huge under supply of labour. It produced a consistent rise in wages and conditions

It also produced wage-control legislation (which admittedly failed to work).

peter nelson -> peterainbow , 21 Sep 2017 17:32
if there were truly a shortage i wouldn't be unemployed

I've heard that before but when I've dug deeper I've usually found someone who either let their skills go stale, or who had some work issues.

LMichelle -> loveyy , 21 Sep 2017 17:26
Really? You think they are going to emphasize things like the importance of privacy and consumer rights?
loveyy , 21 Sep 2017 17:25
This really has to be one of the silliest articles I read here in a very long time.
People, let your children learn to code. Even more, educate yourselves and start to code just for the fun of it - look at it like a game.
The more people who know how to code, the more likely they are to understand how stuff works. If you were ever frustrated by how impossible it seems to shop on certain websites, learn to code and you will be frustrated no more. You will understand the intent behind the process.
Even more, you will understand the inherent limitations and what is the meaning of safety. You will be able to better protect yourself in a real time connected world.

Learning to code won't turn your kid into a programmer, just like ballet or piano classes won't mean they'll ever choose art as their livelihood. So let the children learn to code and learn along with them

Game Cabbage , 21 Sep 2017 17:24
Tipping power to employers in any profession by oversupply of labour is not a good thing. Bit of a macabre example here but... After the Black Death in the middle ages there was a huge under supply of labour. It produced a consistent rise in wages and conditions and economic development for hundreds of years after this. Not suggesting a massive depopulation. But you can achieve the same effects by altering the power balance. With decades of Neoliberalism, the employers' side of the power see-saw is sitting firmly in the mud and is producing very undesired results for the vast majority of people.
Zuffle -> peterainbow , 21 Sep 2017 17:23
Perhaps you're just not very good. I've been a developer for 20 years and I've never had more than 1 week of unemployment.
Kevin P Brown -> peterainbow , 21 Sep 2017 17:20
" at 55 finding it impossible to get a job"

I am 59, and it is not just the age aspect, it is the money aspect. They know you have experience and expectations, and yet they believe hiring someone half the age at half the price, times two, will replace your knowledge. I have been contracting in IT for 30 years, and now it is obvious it is over. Experience at some point no longer mitigates age. I think I am at that point now.

TheLane82 , 21 Sep 2017 17:20
Completely true! What needs to happen instead is to teach the real valuable subjects.

Gender studies. Islamic studies. Black studies. All important issues that need to be addressed.

peter nelson -> mlzarathustra , 21 Sep 2017 17:06
Dear, dear, I know, I know, young people today . . . just not as good as we were. Everything is just going down the loo . . . Just have a nice cuppa camomile (or chamomile if you're a Yank) and try to relax ... " hey you kids, get offa my lawn !"
FabBlondie , 21 Sep 2017 17:06
There are good reasons to teach coding. Too many of today's computer users are amazingly unaware of the technology that allows them to send and receive emails, use their smart phones, and use websites. Few understand the basic issues involved in computer security, especially as it relates to their personal privacy. Hopefully some introductory computer classes could begin to remedy this, and the younger the students the better.

Security problems are not strictly a matter of coding.

Security issues persist in tech. Clearly that is not a function of the size of the workforce. I propose that it is a function of poor management and design skills. These are not taught in any programming class I ever took. I learned these on the job and in an MBA program, and because I was determined.

Don't confuse basic workforce training with an effective application of tech to authentic needs.

How can the "disruption" so prized in today's Big Tech do anything but aggravate our social problems? Tech's disruption begins with a blatant ignorance of and disregard for causes, and believes to its bones that a high tech app will truly solve a problem it cannot even describe.

Kool Aid anyone?

peterainbow -> brady , 21 Sep 2017 17:05
indeed that idea has been around as long as cobol and in practice has just made things worse; the fact that many people outside of software engineering don't seem to realise is that the coding itself is a relatively small part of the job
FabBlondie -> imipak , 21 Sep 2017 17:04
Hurrah.
peterainbow -> rebel7 , 21 Sep 2017 17:04
so how many female and old software engineers are there who are unable to get a job, i'm one of them at 55 finding it impossible to get a job and unlike many 'developers' i know what i'm doing
peterainbow , 21 Sep 2017 17:02
meanwhile the age and sex discrimination in IT goes on, if there were truly a shortage i wouldn't be unemployed
Jared Hall -> peter nelson , 21 Sep 2017 17:01
Training more people for an occupation will result in more people becoming qualified to perform that occupation, regardless of the fact that many will perform poorly at it. A CS degree is no guarantee of competency, but it is one of the best indicators of general qualification we have at the moment. If you can provide a better metric for analyzing the underlying qualifications of the labor force, I'd love to hear it.

Regarding your anecdote: while interesting, it is poor evidence when compared to the aggregate statistical data analyzed in the EPI study.

peter nelson -> FabBlondie , 21 Sep 2017 17:00

Job-specific training is completely different.

Good grief. It's not job-specific training. You sound like someone who knows nothing about computer programming.

Designing a computer program requires analysing the task; breaking it down into its components, prioritising them and identifying interdependencies, and figuring out which parts of it can be broken out and done separately. Expressing all this in some programming language like Java, C, or C++ is quite secondary.

So once you learn to organise a task properly you can apply it to anything - remodeling a house, planning a vacation, repairing a car, starting a business, or administering a (non-software) project at work.

#### [Oct 02, 2017] Evaluation of potential candidates for a programming job should include evaluation of their previous projects and code

##### "... most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe. ..."
###### Oct 02, 2017 | discussion.theguardian.com
Instant feedback is one of the things I really like about programming, but it's also the thing that some people can't handle. As I'm developing a program, all day long the compiler is telling me about build errors or warnings, or when I go to execute it, it crashes or produces unexpected output, etc. Software engineers are bombarded all day with negative feedback and little failures. You have to be thick-skinned for this work.
peter nelson -> peterainbow , 21 Sep 2017 19:42
How is it shallow and lazy? I'm hiring for the real world so I want to see some real world accomplishments. If the candidate is fresh out of university they can't point to work projects in industry because they don't have any. But they CAN point to stuff they've done on their own. That shows both motivation and the ability to finish something. Why do you object to it?
anticapitalist -> peter nelson , 21 Sep 2017 14:47
Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst.
John Kendall , 21 Sep 2017 19:42
There is a big difference between "coding" and programming. Coding for a smart phone app is a matter of calling functions that are built into the device. For example, there are functions for the GPS or for creating buttons or for simulating motion in a game. These are what we used to call subroutines. The difference is that whereas we had to write our own subroutines, now they are just preprogrammed functions. How those functions are written is of little or no importance to today's coders.

Nor are they able to program on that level. Real programming requires not only a knowledge of programming languages, but also a knowledge of the underlying algorithms that make up actual programs. I suspect that "coding" classes operate on a quite superficial level.
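The distinction the comment above draws between "coding" (calling prepackaged functions) and "programming" (writing the underlying subroutines yourself) can be sketched in a few lines of Java. `SortDemo` is a hypothetical illustration, not anyone's production code.

```java
import java.util.Arrays;

public class SortDemo {
    // "Coding" in the commenter's sense: call the prepackaged
    // library function and never think about how it works.
    static int[] sortWithLibrary(int[] xs) {
        int[] copy = xs.clone();
        Arrays.sort(copy);
        return copy;
    }

    // "Programming": implement the underlying algorithm yourself
    // (insertion sort, one of the classic hand-written subroutines).
    static int[] sortByHand(int[] xs) {
        int[] a = xs.clone();
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];  // shift larger elements right
                j--;
            }
            a[j + 1] = key;       // drop the key into its slot
        }
        return a;
    }
}
```

Both produce the same result; the difference is that only the second requires knowing an algorithm, which is the gap between coding classes and programming that the comment is pointing at.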

Game Cabbage -> theindyisbetter , 21 Sep 2017 19:40
It's not about the amount of work or the amount of labor. It's about the comparative availability of both and how that affects the balance of power, which in turn affects the overall quality of life for the 'majority' of people.
c mm -> Ed209 , 21 Sep 2017 19:39
Most of this is not true. Peter Nelson gets it right by talking about breaking steps down and thinking rationally. The reason you can't just teach the theory, however, is that humans learn much better with feedback. Think about trying to learn how to build a fast car, but you never get in and test its speed. That would be silly. Programming languages take the system of logic that has been developed for centuries and give instant feedback on the results. It's a language of rationality.
peter nelson -> peterainbow , 21 Sep 2017 19:37
This article is about the US. The tech industry in the EU is entirely different, and basically moribund. Where is the EU's Microsoft, Apple, Google, Amazon, Oracle, Intel, Facebook, etc, etc? The opportunities for exciting interesting work, plus the time and schedule pressures that force companies to overlook stuff like age because they need a particular skill Right Now, don't exist in the EU. I've done very well as a software engineer in my 60's in the US; I cannot imagine that would be the case in the EU.
peterainbow -> peter nelson , 21 Sep 2017 19:37
sorry but that's just not true, i doubt you are really programming still, or you're a quasi-programmer but really a manager who likes to keep their hand in, you certainly aren't busy as you've been posting all over this cif. also why would you try and hire someone with such disparate skillsets, makes no sense at all

oh and you'd be correct that i do have workplace issues, ie i have a disability and i also suffer from depression, but that shouldn't bar me from employment and again regarding my skills going stale, that again contradicts your statement that it's about planning/analysis/algorithms etc that you said above ( which to some extent i agree with )

c mm -> peterainbow , 21 Sep 2017 19:36
Not at all, it's really egalitarian. If I want to hire someone to paint my portrait, the best way to know if they're any good is to see their previous work. If they've never painted a portrait before, then I may want to go with the girl who has.
c mm -> ragingbull , 21 Sep 2017 19:34
There is definitely not an excess. Just look at the projected jobs for computer science at the Bureau of Labor Statistics.
c mm -> perble conk , 21 Sep 2017 19:32
Right? It's ridiculous. "Hey, there's this industry you can train for that is super valuable to society and pays really well!"
Then Ben Tarnoff, "Don't do it! If you do you'll drive down wages for everyone else in the industry. Build your fire starting and rock breaking skills instead."
peterainbow -> peter nelson , 21 Sep 2017 19:29
how about how New Labour tried to sign away IT access in England to India in exchange for banking access there; how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world - not conspiracies, but facts
peter nelson -> eirsatz , 21 Sep 2017 19:25
I think the difference between gifted and not is motivation. But I agree it's not innate. The kid who stayed up all night in high school hacking into the school server to fake his coding class grade is probably more gifted than the one who spent 4 years in college getting a BS in CS because someone told him he could get a job when he got out.

I've done some hiring in my life and I always ask them to tell me about stuff they did on their own.

peter nelson -> TheBananaBender , 21 Sep 2017 19:20

Most coding jobs are bug fixing.

The only bugs I have to fix are the ones I make.

peter nelson -> Ed209 , 21 Sep 2017 19:19
As several people have pointed out, writing a computer program requires analyzing and breaking down a task into steps, identifying interdependencies, prioritizing the order, figuring out what parts can be organized into separate tasks that can be done separately, etc.

These are completely independent of the language - I've been programming for 40 years in everything from FORTRAN to APL to C to C# to Java and it's all the same. Not only that but they transcend programming - they apply to planning a vacation, remodeling a house, or fixing a car.
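The decomposition described above - break a task into steps, identify interdependencies, then order the work - is itself something a short program can express. A minimal sketch in Java (the class name `TaskOrder` and the remodeling tasks are hypothetical): a plain depth-first topological sort that orders subtasks so each comes after everything it depends on. It assumes the dependency graph has no cycles.

```java
import java.util.*;

public class TaskOrder {
    // Order tasks so that every task appears after all the tasks
    // it depends on (depth-first topological sort; assumes no cycles).
    static List<String> order(Map<String, List<String>> deps) {
        List<String> result = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String task : deps.keySet()) {
            visit(task, deps, visited, result);
        }
        return result;
    }

    private static void visit(String task, Map<String, List<String>> deps,
                              Set<String> visited, List<String> result) {
        if (visited.contains(task)) return;
        visited.add(task);
        // Recurse into dependencies first so they land earlier in the list.
        for (String dep : deps.getOrDefault(task, List.of())) {
            visit(dep, deps, visited, result);
        }
        result.add(task);
    }
}
```

Fed the house-remodeling example - painting depends on plastering, plastering depends on wiring - it emits wiring, then plastering, then painting, which is exactly the prioritized ordering the comment describes doing by hand.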

peter nelson -> ragingbull , 21 Sep 2017 19:14
Neither coding nor having a bachelor's degree in computer science makes you a suitable job candidate. I've done a lot of recruiting and interviews in my life, and right now I'm trying to hire someone. And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an iOS or Android app, an open-source component, a utility or program of theirs on GitHub, or something like that.

That's the thing that distinguishes software from many other fields - you can do something real and significant on your own. If you haven't managed to do so in 4 years of college you're not a good candidate.

peter nelson -> nickGregor , 21 Sep 2017 19:07
Within the next year coding will be old news and you will simply be able to describe things in ur native language in such a way that the machine will be able to execute any set of instructions you give it.

In a sense that's already true, as I noted elsewhere. 90% of the code in my projects (Java and C# in their respective IDEs) is machine-generated. I do relatively little "coding". But the flaw in your idea is this: most of what software designers do is not coding. It requires domain knowledge, and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe.

Ricardo111 -> martinusher , 21 Sep 2017 19:03
Completely agree. At the highest levels there is more work that goes into managing complexity and making sure nothing is missed than in making the wheels turn and the beepers beep.
ragingbull , 21 Sep 2017 19:02
Hang on... if the current excess of computer science grads is not driving down wages, why would training more kids to code make any difference?
Ricardo111 -> youngsteveo , 21 Sep 2017 18:59
I've actually interviewed people for very senior technical positions in Investment Banks who had all the fancy talk in the world and yet failed at some very basic "write me a piece of code that does X" tests.

The next hurdle is people who have learned how to deal with certain situations and yet don't really understand how it works, so they are unable to figure it out if you change the problem parameters.

That said, the average coder is only slightly beyond this point. The ones who can take into account maintainability and flexibility for future enhancements when developing are already a minority, and those who can understand the why of software development process steps, design software system architectures or do a proper Technical Analysis are very rare.

eirsatz -> Ricardo111 , 21 Sep 2017 18:57
Hubris. It's easy to mistake efficiency born of experience as innate talent. The difference between a 'gifted coder' and a 'non gifted junior coder' is much more likely to be 10 or 15 years sitting at a computer, less if there are good managers and mentors involved.
Ed209 , 21 Sep 2017 18:57
Politicians love the idea of teaching children to 'code', because it sounds so modern, and nobody could possibly object... could they? Unfortunately it simply shows up their utter ignorance of technical matters, because there isn't a language called 'coding'. Computer programming languages have changed enormously over the years, and continue to evolve. If you learn the wrong language you'll be about as welcome in the IT industry as a lamp-lighter or a comptometer operator.

The pace of change in technology can render skills and qualifications obsolete in a matter of a few years, and only the very best IT employers will bother to retrain their staff - it's much cheaper to dump them. (Most IT posts are outsourced through agencies anyway - those that haven't been off-shored.)

peter nelson -> YEverKnot , 21 Sep 2017 18:54
And this isn't even a good conspiracy theory; it's a bad one. He offers no evidence that there's an actual plan or conspiracy to do this. I'm looking for an account of where the advocates of coding education met to plot this in some castle in Europe or maybe a secret document like "The Protocols of the Elders of Google", or some such.
TheBananaBender , 21 Sep 2017 18:52
Most jobs in IT are shit - desktop support, operations droids. Most coding jobs are bug fixing.
Ricardo111 -> Wiretrip , 21 Sep 2017 18:49
Tool Users Vs Tool Makers. The really good coders actually get why certain things work as they do and can adjust them for different conditions. The mass produced coders are basically code copiers and code gluing specialists.
peter nelson -> AmyInNH , 21 Sep 2017 18:49
People who get Masters and PhD's in computer science are not usually "coders" or software engineers - they're usually involved in obscure, esoteric research for which there really is very little demand. So it doesn't surprise me that they're unemployed. But if someone has a Bachelor's in CS and they're unemployed I would have to wonder what they spent their time at university doing.

The thing about software that distinguishes it from lots of other fields is that you can make something real and significant on your own . I would expect any recent CS major I hire to be able to show me an app or an open-source component or something similar that they made themselves, and not just test scores and grades. If they could not then I wouldn't even think about hiring them.

Ricardo111 , 21 Sep 2017 18:44
Fortunately for those of us who are actually good at coding, the difference in productivity between a gifted coder and a non-gifted junior developer is something like 100-fold. Knowing how to code and actually being efficient at creating software programs and systems are about as far apart as knowing how to write and actually being able to write a bestselling exciting Crime trilogy.
peter nelson -> jamesupton , 21 Sep 2017 18:36

The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.

If you know how to write software you can get a robot to do those things.

peter nelson -> Julian Williams , 21 Sep 2017 18:34
I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security.

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.

How do you define "high paying"? Everyone I know (and I know a lot, because I've been a sw engineer for 40 years) who is working fulltime as a software engineer is making a high-middle-class salary, and can easily afford a home, travel on holiday, investments, etc.

YEverKnot , 21 Sep 2017 18:32

Tech's push to teach coding isn't about kids' success – it's about cutting wages

Nowt like a good conspiracy theory.
freeandfair -> WithoutPurpose , 21 Sep 2017 18:31
What is a stupidly low salary? 100K?
freeandfair -> AmyInNH , 21 Sep 2017 18:30
> Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.

That just means 50% of them are no good and need to develop their skills further or try something else.
Not everyone with a STEM degree from some third-rate college is capable of doing complex IT or STEM work.

peter nelson -> edmundberk , 21 Sep 2017 18:30

So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?

Yes. Haven't you noticed how wage growth has flattened? That's because some "do-gooders" thought it would be a fine idea to educate the peasants. There was a time when only the well-to-do knew how to read and write, and that's why the well-to-do were well-to-do. Education is evil. Stop educating people, and then those of us who know how to read and write can charge them for reading and writing letters and email. Better yet, we can have Chinese and Indians do it for us and we just charge a transaction fee.

AmyInNH -> peter nelson , 21 Sep 2017 18:27
Masses of the public use cars; it doesn't mean millions need schooling in auto mechanics. Same for software coding. We aren't even using those who have Bachelors, Masters and PhDs in CS.
carlospapafritas , 21 Sep 2017 18:27
"..importing large numbers of skilled guest workers from other countries through the H1-B visa program..."

"Skilled" is good. H1B has long (approx. 17 years) been abused and turned into a trafficking scheme. One can buy an H1B in India. Powerful ethnic networks wheeling and dealing in the US and EU are essentially selling IT jobs to migrants.

The real IT wages haven't been stagnant but steadily falling since the 90s. It's easy to see why: $82K/year was about the average IT wage in the 90s. Comparing the prices of housing (and pretty much everything else) between then and now gives you the idea.

freeandfair -> whitehawk66 , 21 Sep 2017 18:27

> not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchbro thinks is the next iteration of sliced bread

Taking a couple of years of programming is not enough to do this as a job, don't worry. But learning to code is like learning maths: it helps to develop logical thinking, which will benefit you in every area of your life.

James Dey , 21 Sep 2017 18:25

We should stop teaching our kids to be journalists, then your wage might go up.

peter nelson -> AmyInNH , 21 Sep 2017 18:23

What does this even mean?

#### [Oct 02, 2017] Programming is a culturally important skill

##### Notable quotes:

##### "... A lot of basic entry level jobs require a good level of Excel skills. ..."

##### "... Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. ..."

##### "... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."

##### "... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."

##### "... We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money. ..."

##### "... Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labour the Libertarians would be thrilled. ..."

##### "... Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely. ..."

##### "... We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? ..."

###### www.moonofalabama.org

David McCaul -> IanMcLzzz , 21 Sep 2017 13:03

There are very few professional scribes nowadays; a good level of reading and writing is simply the default even for the lowest-paid jobs. A lot of basic entry level jobs require a good level of Excel skills. Several years from now basic coding will be necessary to manipulate basic tools for entry level jobs, especially as increasingly a lot of real code will be generated by expert systems supervised by a tiny number of supervisors. Coding jobs will go the same way that trucking jobs will go when driverless vehicles are perfected.

Offer the class, but not as mandatory. Just as I could never succeed playing football, others will not succeed at coding. The last thing the industry needs is more bad developers showing up for a paycheck.

Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. What's next, keep them off Math, because, you know...

Taylor Dotson -> freeandfair , 21 Sep 2017 13:59

That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?

PolydentateBrigand , 21 Sep 2017 12:59

The economy isn't a zero-sum game. Developing a more skilled workforce that can create more value will lead to economic growth and improvement in the general standard of living. Talented coders will start new tech businesses and create more jobs.

What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything let's just get rid of them, right? Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.

mlzarathustra , 21 Sep 2017 16:52

I agree with the basic point. We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money.
The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the bucks. And smartphone-obsessed millennials have too short an attention span to fathom how empty their lives are, devoid of aesthetic depth as they are. I can't draw a definite link, but I think algorithm fails, which are based on fanatical reliance on programmed routines as the solution to everything, are rooted in the shortage of education and cultivation in the arts.

Economics is a social science, and all this is merely a reflection of shared cultural values. The problem is, people think it's math (it's not) and therefore set in stone.

AmyInNH -> peter nelson , 21 Sep 2017 16:51

Geeze, it'd be nice if you'd make an effort. rucore.libraries.rutgers.edu/rutgers-lib/45960/PDF/1/ https://rucore.libraries.rutgers.edu/rutgers-lib/46156/ https://rucore.libraries.rutgers.edu/rutgers-lib/46207/

peter nelson -> WyntonK , 21 Sep 2017 16:45

Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labour the Libertarians would be thrilled. But it's not.

I'm in my 60's and retiring, but I've been a software engineer all my life. I've worked for many different companies and in different industries, and I've never had any trouble competing with cheap imported workers. The people I've seen fall behind were ones who did not keep their skills fresh. When I was laid off in 2009 in my mid-50's I made sure my mobile-app skills were bleeding edge (in those days ANYTHING having to do with mobile was bleeding edge) and I used to go to job interviews with mobile devices to showcase what I could do. That way they could see for themselves and not have to rely on just a CV. The older guys who fell behind did so because their skills and toolsets had become obsolete.
Now I'm trying to hire a replacement to write Android code for use in industrial production and struggling to find someone with enough experience. So where is this oversupply I keep hearing about?

Jared Hall -> RogTheDodge , 21 Sep 2017 16:42

Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely.

JayThomas , 21 Sep 2017 16:39

It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry. We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? That just seems inefficient.

FabBlondie -> RogTheDodge , 21 Sep 2017 16:39

There was never any need to give our jobs to foreigners. That is, if you are comparing the production of domestic vs. foreign workers. The sole need was, and is, to increase profits.

peter nelson -> AmyInNH , 21 Sep 2017 16:34

Link?

FabBlondie , 21 Sep 2017 16:34

Schools MAY be able to fix big social problems, but only if they teach a well-rounded curriculum that includes classical history and the humanities. Job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training. The existing social problems were not caused by a lack of programmers, and cannot be solved by Big Tech. I agree with the author that computer programming skills are not that limited in availability. Big Tech solved the problem of the well-paid professional some years ago by letting them go (these were mostly workers in their 50s) and replacing them with H1-B visa-holders from India -- who work for a fraction of their experienced American counterparts. It is all about profits. Big Tech is no different than any other "industry."

peter nelson -> Jared Hall , 21 Sep 2017 16:31

Supply of apples does not affect the demand for oranges.
Teaching coding in high school does not necessarily alter the supply of software engineers. I studied Chinese history and geology at university, but my doing so has had no effect on the job prospects of people doing those things for a living.

johnontheleft -> Taylor Dotson , 21 Sep 2017 16:30

You would be surprised just how much a little coding knowledge has transformed my ability to do my job (a job that is not directly related to IT at all).

peter nelson -> Jared Hall , 21 Sep 2017 16:29

Because teaching coding does not affect the supply of actual engineers. I've been a professional software engineer for 40 years and coding is only a small fraction of what I do.

peter nelson -> Jared Hall , 21 Sep 2017 16:28

You and the linked article don't know what you're talking about. A CS degree does not equate to a productive engineer.

A few years ago I was on the recruiting and interviewing committee trying to hire some software engineers for a scientific instrument my company was making. The entire team had about 60 people (hw, sw, mech engineers) but we needed 2 or 3 sw engineers with math and signal-processing expertise. The project was held up for SIX months because we could not find the people we needed. It would have taken a lot longer than that to train someone up to our needs. Eventually we brought in some Chinese engineers, which cost us MORE than what we would have paid for an American engineer when you factor in the agency and visa paperwork.

Modern software engineers are not just generic interchangeable parts - 21st century technology often requires specialised scientific, mathematical, production or business domain-specific knowledge, and those people are hard to find.

freeluna -> freeluna , 21 Sep 2017 16:18

...also, this article is alarmist and I disagree with it. Dear Author, Phphphphtttt! Sincerely, freeluna

AmyInNH , 21 Sep 2017 16:16

Regimentation of the many, for benefit of the few.
AmyInNH -> Whatitsaysonthetin , 21 Sep 2017 16:15

Visa jobs are part of trade agreements. To be very specific, the US government (and the EU) trade Western jobs for market access in the East. http://www.marketwatch.com/story/in-india-british-leader-theresa-may-preaches-free-trade-2016-11-07 There is no shortage. This is selling off the West's middle class. Take a look at remittances in Wikipedia and you'll get a good idea just how much it costs the US and EU economies, for the sake of record profits for Western industry.

jigen , 21 Sep 2017 16:13

And thanks to the author for not using the adjective "elegant" in describing coding.

freeluna , 21 Sep 2017 16:13

I see advantages in teaching kids to code, and for kids to make Arduino and other CPU-powered things. I don't see a lot of interest in science and tech coming from kids in school. There are too many distractions from social media and game platforms, and not much interest in developing tools for future tech and science.

jigen , 21 Sep 2017 16:13

Let the robots do the coding. Sorted.

FluffyDog -> rgilyead , 21 Sep 2017 16:13

Although coding per se is a technical skill, it isn't designing or integrating systems. It is only a small, although essential, part of the whole software engineering process. Learning to code just gets you up the first steps of a high ladder that you need to climb a fair way if you intend to use your skills to earn a decent living.

rebel7 , 21 Sep 2017 16:11

BS. A friend of mine in the SV tech industry reports that they are about 100,000 programmers short in just the internet security field. Y'all are trying to create a problem where there isn't one. Maybe we shouldn't teach them how to read either. They might want to work somewhere besides the grill at McDonalds.

AmyInNH -> WyntonK , 21 Sep 2017 16:11

To which they will respond: offshore.

AmyInNH -> MrFumoFumo , 21 Sep 2017 16:10

They're not looking for good, they're looking for cheap + visa indentured. Non-citizens.
nickGregor , 21 Sep 2017 16:09

Within the next year coding will be old news and you will simply be able to describe things in your native language in such a way that the machine will be able to execute any set of instructions you give it. Coding is going to change from its purely abstract form, which is not utilized at its peak; if you can describe what you envision in an effective, concise manner you could become a very good coder very quickly -- competence will be determined entirely by imagination, and the barriers to entry will all but be extinct.

AmyInNH -> unclestinky , 21 Sep 2017 16:09

Already there. I take it you skipped right past the employment prospects for US STEM grads - a 50% chance of finding STEM work.

AmyInNH -> User10006 , 21 Sep 2017 16:06

Apparently a whole lot of people are just making it up, eh? http://www.motherjones.com/politics/2017/09/inside-the-growing-guest-worker-program-trapping-indian-students-in-virtual-servitude/ From today: http://www.computerworld.com/article/2915904/it-outsourcing/fury-rises-at-disney-over-use-of-foreign-workers.html All the way back to 1995: https://www.youtube.com/watch?v=vW8r3LoI8M4&feature=youtu.be

JCA1507 -> whitehawk66 , 21 Sep 2017 16:04

Bravo

JCA1507 -> DirDigIns , 21 Sep 2017 16:01

Total... utter... no other way... huge... will only get worse... everyone... (not a very nuanced commentary, is it?). I'm glad pieces like this are mounting; it is relevant that we counter the mix of messianism and opportunism of Silicon Valley propaganda with convincing arguments.

RogTheDodge -> WithoutPurpose , 21 Sep 2017 16:01

That's not my experience.

AmyInNH -> TTauriStellarbody , 21 Sep 2017 16:01

It's a stall tactic by Silicon Valley: "See, we're trying to resolve the [non-existent] shortage."

AmyInNH -> WyntonK , 21 Sep 2017 16:00

They aren't immigrants. They're visa-indentured foreign workers. Why does that matter? It's part of the cheap+indentured hiring criteria.
If it were only cheap, they'd be lowballing offers to citizens and US new grads.

RogTheDodge -> Jared Hall , 21 Sep 2017 15:59

No. Because they're the ones wanting them and realizing the US education system is not producing enough.

RogTheDodge -> Jared Hall , 21 Sep 2017 15:58

Except the demand is increasing massively.

RogTheDodge -> WyntonK , 21 Sep 2017 15:57

That's why we are trying to educate American coders - so we don't need to give our jobs to foreigners.

AmyInNH , 21 Sep 2017 15:56

Correct premises: proletarianize programmers; many qualified graduates simply can't find jobs. Invalid conclusion: the problem is there aren't enough good jobs to be trained for. That conclusion only makes sense if you skip right past "... importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status." Hiring Americans doesn't "hurt" their record profits. It's incessant greed and collusion with our corrupt congress.

Oldvinyl , 21 Sep 2017 15:51

This column was really annoying. I taught my students how to program when I was given a free hand to create the computer studies curriculum for a new school I joined. (Not in the UK, thank Dog). 7th graders began with studying the history and uses of computers and communications tech. My 8th graders learned about computer logic (AND, OR, NOT, etc.) and moved on to QuickBASIC in the second part of the year. My 9th graders learned about databases and SQL and how to use HTML to make their own web sites. Last year I received a phone call from the father of one student thanking me for creating the course; his son had just received a job offer and now works in San Francisco for Google. I am so glad I taught them "coding" (UGH) as the writer puts it, rather than arty-farty subjects not worth a damn in the jobs market.
WyntonK -> DirDigIns , 21 Sep 2017 15:47

I live and work in Silicon Valley and you have no idea what you are talking about. There's no shortage of coders at all. Terrific coders are let go because of their age and the availability of much cheaper foreign coders (no, I am not opposed to immigration).

Sean May , 21 Sep 2017 15:43

Looks like you pissed off a ton of people who can't write code and are none too happy with you pointing out the reason they're slinging insurance for Geico. I think you're quite right that coding skills will eventually enter the mainstream and slowly bring down the cost of hiring programmers. The fact is that even if you don't get paid to be a programmer you can absolutely benefit from having some coding skills. There may, however, be some kind of major coding revolution with the advent of quantum computing. The way code is written now could become obsolete.

Jared Hall -> User10006 , 21 Sep 2017 15:43

Why is it a fantasy? Does supply and demand not apply to IT labor pools?

Jared Hall -> ninianpark , 21 Sep 2017 15:42

Why is it a load of crap? If you increase the supply of something with no corresponding increase in demand, the price will decrease.

pictonic , 21 Sep 2017 15:40

A well-argued article that hits the nail on the head. Amongst any group of coders, very few are truly productive, and they are self-starters; training is really needed to do the admin.

Jared Hall -> DirDigIns , 21 Sep 2017 15:39

There is not a huge skills shortage. That is why the author linked the EPI report analyzing the data to prove exactly that. This may not be what people want to believe, but it is certainly what the numbers indicate. There is no skills gap.

Axel Seaton -> Jaberwocky , 21 Sep 2017 15:34

Yeah, but the money is crap.

DirDigIns -> IanMcLzzz , 21 Sep 2017 15:32

Perfect response for the absolute crap that the article is pushing.

DirDigIns , 21 Sep 2017 15:30

Total and utter crap, no other way to put it.
There is a huge skills shortage in key tech areas that will only get worse if we don't educate and train the young effectively. Everyone wants youth to have good skills for the knowledge economy and the ability to earn a good salary and build up life chances for UK youth. So we get this verbal diarrhoea of an article. Defies belief.

Whatitsaysonthetin -> Evelita , 21 Sep 2017 15:27

Yes. China and India are indeed training youth in coding skills. In order that they take jobs in the USA and UK! It's been going on for 20 years and has resulted in many experienced IT staff struggling to get work at all and, even if they can, suffering stagnating wages.

WmBoot , 21 Sep 2017 15:23

Wow. Congratulations to the author for provoking such a torrent of vitriol! Job well done.

TTauriStellarbody , 21 Sep 2017 15:22

Has anyone's job been at risk from a 16-year-old who can cobble together a couple of lines of JavaScript since the dot-com bubble? Good luck trying to teach a big enough pool of US school kids regular expressions, let alone the kind of test-driven continuous delivery that is the norm in the industry now.

freeandfair -> youngsteveo , 21 Sep 2017 13:27

> A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job

I have exactly the same experience. There is undeniably a skills gap. It takes about a year for a skilled professional to adjust and learn enough to become productive; it takes about 3-5 years for a college grad. That's nothing new. But the issue is, as the college grad gets trained, another company steals him or her. And also keep in mind, all this time you are doing your own job and training the new employee as time permits. Many companies in the US cut the non-profit departments (such as IT) to the bone; we cannot afford to lose a person and then train another replacement for 3-5 years. The solution? Hire a skilled person.
But that means nobody is training college grads, and in 10-20 years we are looking at a skill shortage to the point where the only option is bringing in foreign labor. American cut-throat companies that care only about the bottom line cannibalized themselves.

Heh. You are not a coder, I take it. :) It's going to be a few decades before even the easiest coding jobs vanish. Given how shit most coders of my acquaintance have been - especially in matters of work ethic, logic, matching s/w to user requirements and willingness to test and correct their gormless output - most future coding work will probably be in the area of disaster recovery. Sorry, since the poor snowflakes can't face the sad facts, we have to call it "business continuation" these days, don't we?

UncommonTruthiness , 21 Sep 2017 14:10

The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name: AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal), and moved to assembler as a compiled and linked structure. Compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL and C (and all its "+'s") followed, making programming easier for the less talented. Now the script-based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

#### [Sep 24, 2017] Do Strongly Typed Languages Reduce Bugs?

###### Sep 24, 2017 | developers.slashdot.org (acolyer.org)

Posted by EditorDavid on Saturday September 23, 2017 @05:19PM from the dynamic-discussions dept.

"Static vs dynamic typing is always one of those topics that attracts passionately held positions," writes the Morning Paper -- reporting on an "encouraging" study that attempted to empirically evaluate the efficacy of statically-typed systems on mature, real-world code bases. The study was conducted by Christian Bird at Microsoft's "Research in Software Engineering" group with two researchers from University College London. Long-time Slashdot reader phantomfive writes:

This study looked at bugs found in open source JavaScript code. Looking through the commit history, they enumerated the bugs that would have been caught if a more strongly typed language (like TypeScript) had been used. They found that a strongly typed language would have reduced bugs by 15%. Does this make you want to avoid Python?

#### [Jun 28, 2017] PBS Pro Tutorial by Krishna Arutwar

###### www.nakedcapitalism.com

What is PBS Pro?

Portable Batch System (PBS) is software used in cluster computing to schedule jobs on multiple nodes. PBS was started as a contract project by NASA. PBS is available in three different versions, as below:

1) Torque: Terascale Open-source Resource and QUEue Manager (Torque) is developed from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed resource manager and can perform well when integrated with the Maui cluster scheduler to improve performance.

2) PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering.

3) OpenPBS: the open source version released in 1998, developed by NASA. It is not actively developed.

In this article we are going to concentrate on a tutorial of PBS Pro; it is similar to some extent to Torque.

PBS contains three basic units: server, MoM (execution host), and scheduler.

1.
Server: It is the heart of PBS, with an executable named "pbs_server". It uses the IP network to communicate with the MoMs. The PBS server creates batch jobs and modifies jobs requested from different MoMs. It keeps track of all resources available and assigned in the PBS complex from the different MoMs. It also monitors the PBS license for jobs; if your license expires it will throw an error.

2. Scheduler: The PBS scheduler uses various algorithms to decide when a job should be executed, and on which node or vnode, using the details of available resources from the server. Its executable is "pbs_sched".

3. MoM: The MoM is the mother of all execution jobs, with the executable "pbs_mom". When a MoM gets a job from the server it actually executes that job on the host. Each node must have a MoM running in order to participate in execution.

Installation and setting up of the environment (cluster with multiple nodes):

Extract the compressed PBS Pro package and go to the path of the extracted folder; it contains an "INSTALL" file. Make that file executable (you may use a command like "chmod +x ./INSTALL") and run it. It will ask for the "execution directory", where you want to store the executables (such as qsub, pbsnodes, qdel etc.) used for different PBS operations, and the "home directory", which contains the different configuration files. Keep both as default for simplicity. There are three kinds of installation available:

1) Server node: PBS server, scheduler, MoM and commands are installed on this node. The PBS server will keep track of all execution MoMs present in the cluster and will schedule jobs on these execution nodes. As the MoM and commands are also installed on the server node, it can be used to submit and execute jobs.

2) Execution node: This type installs the MoM and commands. These nodes are added as available nodes for execution in a cluster. They are also allowed to submit jobs on the server side, with specific permission granted by the server, as we are going to see below. They are not involved in scheduling. This kind of installation asks for the PBS server name, which is used to submit jobs, get the status of jobs, etc.

3) Client node: These are nodes which are only allowed to submit a PBS job to the server, with specific permission from the server, and to see the status of jobs. They are not involved in execution or scheduling.

Creating vnodes in PBS Pro:

We can create multiple vnodes in a single node, each containing some part of the node's resources. We can execute jobs on these vnodes with the specified allocated resources. We can create vnodes using the qmgr command, which is the command-line interface to the PBS server. The command given below creates two vnodes using qmgr:

Qmgr: create node Vnode1,Vnode2 resources_available.ncpus=8, resources_available.mem=10gb, resources_available.ngpus=1, sharing=default_excl

The command above will create two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10 GB of memory and 1 GPU, with sharing mode default_excl, which means each vnode can exclusively execute only one job at a time, independent of the number of free resources. The sharing mode can instead be default_shared, which means any number of jobs can run on that vnode until all resources are busy. All the attributes which can be used with vnode creation are listed in the PBS Pro reference guide.

You can also create a file in the "/var/spool/PBS/mom_priv/config.d/" folder, with any name you want (hostname-vnode is a reasonable choice), following the sample given below. PBS will pick up all files there, even temporary files ending with (~), and replace the configuration for the same vnode, so delete unnecessary files to get a proper configuration of vnodes.

e.g.

$configversion 2
hostname: resources_available.ncpus=0
hostname: resources_available.mem=0
hostname: resources_available.ngpus=0
hostname[0]: resources_available.ncpus=8
hostname[0]: resources_available.mem=16gb
hostname[0]: resources_available.ngpus=1
hostname[0]: sharing=default_excl
hostname[1]: resources_available.ncpus=8
hostname[1]: resources_available.mem=16gb
hostname[1]: resources_available.ngpus=1
hostname[1]: sharing=default_excl
hostname[2]: resources_available.ncpus=8
hostname[2]: resources_available.mem=16gb
hostname[2]: resources_available.ngpus=1
hostname[2]: sharing=default_excl
hostname[3]: resources_available.ncpus=8
hostname[3]: resources_available.mem=16gb
hostname[3]: resources_available.ngpus=1
hostname[3]: sharing=default_excl

(here "hostname" stands for the actual hostname of the node)

Here in this example we assigned 0 to the resources available on the default vnode, because by default PBS detects and allocates all available resources to the default vnode, with its sharing attribute set to default_shared.

That causes a problem: all jobs will by default get scheduled on that default vnode, because its sharing type is default_shared. If you want jobs to be scheduled on your customized vnodes, you should set the resources available on the default vnode to 0. This configuration is re-read every time you restart the PBS server.

Getting status from PBS:

Get the status of jobs:

qstat will give details about jobs, their states, etc.

Useful options:

To print details about all jobs which are running or in a hold state: qstat -a

To print details about subjobs in a JobArray which are running or in a hold state: qstat -ta

Get the status of PBS nodes and vnodes:

The "pbsnodes -a" command will provide a list of all nodes present in the PBS complex with their available resources, assignments, status, etc.

To get details of all the nodes and vnodes you created, use the "pbsnodes -av" command.

You can also specify a node or vnode name to get detailed information about that specific node or vnode.

e.g.

pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex, mapped to an IP address in the /etc/hosts file)

Job submission (qsub):

PBS clients (any node where the PBS commands are installed) submit jobs to the PBS server. The server maintains queues of jobs; by default all jobs are submitted to the default queue, named "workq". You may create multiple queues by using the "qmgr" command, the administrator interface mainly used to create, delete and modify queues and vnodes. The PBS server decides which job is to be scheduled on which node or vnode, based on the scheduling policy and the privileges set by the user. To schedule jobs the server continuously pings all MoMs in the PBS complex to get details of the resources available and assigned. PBS assigns a unique job identifier, called the JobID, to each and every job. For job submission PBS uses the "qsub" command, with the syntax shown below:

qsub script

Here script may be a shell (sh, csh, tcsh, ksh, bash) script. PBS by default uses /bin/sh. You may refer to the simple script given below:

#!/bin/sh

echo "This is PBS job"

When PBS completes execution of a job, it stores the job's errors in a file named JobName.e{JobID}, e.g. Job1.e1492,

and its output in a file named

JobName.o{JobID}, e.g. Job1.o1492.

By default these files are stored in the current working directory (shown by the pwd command). You can change this location by giving a path with the -o option.

You may specify the job name with the -N option while submitting the job:

qsub -N firstJob ./test.sh

If you don't specify a job name, PBS names the files after the script. E.g. qsub ./test.sh will store results in files test.sh.e1493 and test.sh.o1493 in the current working directory.

OR

qsub -N firstJob -o /home/user1/ ./test.sh will store results in files firstJob.e1493 and firstJob.o1493 in the /home/user1/ directory.

If a submitted job terminates abnormally (errors in the job itself are not abnormal; those errors get stored in the JobName.e{JobID} file), its error and output files are stored in the "/var/spool/PBS/undelivered/" folder.
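The qsub options above can also be embedded in the script itself as "#PBS" directive lines, which qsub reads at submission time while a plain shell treats them as comments. A minimal sketch (the job name, output path and resource request are illustrative, not from the tutorial):

```shell
#!/bin/sh
# Sketch of a job script with embedded PBS directives. The directive values
# (job name, output directory, resource request) are illustrative.
# To a plain shell the "#PBS" lines are just comments, so this script can
# also be run standalone for testing.
#PBS -N firstJob
#PBS -o /home/user1/
#PBS -l select=1:ncpus=2:mem=2gb

MSG="This is PBS job"
echo "$MSG"
```

Submitting it with a bare `qsub ./test.sh` then picks up the directives without repeating the options on the command line; options given on the command line override the ones in the script.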

Useful Options:

Select resources:

qsub -l select=<N>:ncpus=3:ngpus=1:mem=2gb script

Here <N> is the number of resource "chunks" requested.

e.g.

qsub -l select=2:ncpus=3:ngpus=1:mem=2gb script

This job selects 2 chunks, each with 3 CPUs, 1 GPU and 2 GB of memory, which means it selects 6 CPUs, 2 GPUs and 4 GB of RAM in total.

qsub -l nodes=megamind:ncpus=3 /home/titan/PBS/input/in.sh

This job will select one node, specified by hostname.

To select multiple nodes you may use the command given below:

qsub -l nodes=megamind+titan:ncpus=3 /home/titan/PBS/input/in.sh
Submit multiple jobs with the same script (JobArray):

qsub -J 1-20 script
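qsub -J 1-20 expands into 20 subjobs, each running the same script; inside a subjob, PBS Pro sets the PBS_ARRAY_INDEX environment variable so the script can pick its own piece of the work. A minimal sketch (the input-file naming scheme is illustrative):

```shell
#!/bin/sh
# Each subjob of "qsub -J 1-20 thisscript.sh" runs this same script; PBS Pro
# sets PBS_ARRAY_INDEX to the subjob's index. The default of 1 below is only
# so the script can be exercised outside PBS; the file naming is illustrative.
IDX=${PBS_ARRAY_INDEX:-1}
INPUT="input-${IDX}.dat"
echo "subjob ${IDX} processing ${INPUT}"
```

So with `qsub -J 1-20`, subjob 7 would process input-7.dat, and so on; qstat -ta (shown above) lists the individual subjobs.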

Submit dependent jobs:

In some cases you may require a job to run only after successful or unsuccessful completion of some specified jobs; for that PBS provides options such as:

qsub -W depend=afterok:316.megamind /home/titan/PBS/input/in.sh

This specified job will start only after successful completion of the job with job ID "316.megamind". Like afterok, PBS has other options such as beforeok, beforenotok and afternotok. You may find all the details in the man page of qsub.
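Since qsub prints the JobID of the newly created job on standard output, a dependency chain can be scripted without hard-coding IDs. A minimal sketch (pre.sh and post.sh are hypothetical job scripts, and a running PBS server is assumed):

```shell
#!/bin/sh
# Submit a job, capture its JobID from qsub's stdout, and make a second job
# depend on the first finishing successfully. pre.sh and post.sh are
# placeholder scripts; this requires a reachable PBS server.
JOBID=$(qsub pre.sh)
qsub -W depend=afterok:"$JOBID" post.sh
```

If pre.sh fails, the afterok dependency is unsatisfiable and post.sh will never run.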

Submit a job with priority:

There are two ways to set the priority of jobs awaiting execution.

1) Using a single queue with jobs of different priorities:

To change the sequence of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config" file; normally $PBS_HOME is "/var/spool/PBS/". Uncomment the line below if it is present, otherwise add it.

job_sort_key : "job_priority HIGH"

After saving this file you will need to restart the pbs_sched daemon on the head node; you may use the command below

service pbs restart

After completing this task, submit the job with the -p option to specify the priority of the job within the queue. The value may range from -1024 to 1023, where -1024 is the lowest priority and 1023 is the highest priority in the queue.

e.g.

qsub -p 100 ./X.sh

qsub -p 101 ./Y.sh

qsub -p 102 ./Z.sh 
In this case PBS will execute the jobs in priority order: Z.sh (priority 102) first, then Y.sh (101), then X.sh (100).

2) Using different queues with specified priorities: we discuss this in the PBS Queue section below.

For example, suppose jobs J1-J3 are in queue 1, J4-J6 in queue 2 and J7-J9 in queue 3, with priority of queue 2 > queue 3 > queue 1. All jobs in queue 2 then complete first, then those in queue 3, then those in queue 1, so the execution flow is:

J4 => J5 => J6 => J7 => J8 => J9 => J1 => J2 => J3

PBS Queue:

PBS Pro can manage multiple queues as per users' requirements. By default every job is queued in "workq" for execution. There are two types of queues: execution and routing. Jobs in an execution queue are taken by the PBS server for execution. Jobs in a routing queue cannot be executed; they can be redirected to an execution queue or to another routing queue with the qmove command. By default "workq" is an execution queue. The order of jobs within a queue can be changed by the priority assigned at submission time, as described in the job submission section above.

Useful qmgr commands:

First type qmgr to start the Manager interface of PBS Pro.

To create queue:


Qmgr:
create queue test2



To set type of queue you created:


Qmgr:
set queue test2 queue_type=execution



OR


Qmgr:
set queue test2 queue_type=route



To enable queue:


Qmgr:
set queue test2 enabled=True



To set priority of queue:


Qmgr:
set queue test2 priority=50



Jobs in a queue with higher priority get preference: only after all jobs in the higher-priority queue complete are jobs in lower-priority queues scheduled. There is therefore a high probability of job starvation in queues with lower priority.

To start queue:


Qmgr:
set queue test2 started = True



To make all queues (at the default server) active, so that subsequent commands apply to all of them:


Qmgr:
active queue @default



To restrict a queue to specified users: set the acl_user_enable attribute to true, which tells PBS to allow only the users listed in acl_users to submit jobs to the queue.


Qmgr:
set queue test2 acl_user_enable=True



To set users permitted (to submit job in a queue):


Qmgr:
set queue test2 acl_users="user1@..,user2@..,user3@.."

(in place of .. you must specify the hostname of a node in the PBS complex. A user name given without a hostname allows users with that name to submit jobs from any node permitted to submit jobs in the PBS complex.)

To delete queues we created:


Qmgr:
delete queue test2



To see the status of all queues:

qstat -Q



You may specify a specific queue name: qstat -Q test2

To see full details of all queues: qstat -Q -f

You may specify a specific queue name: qstat -Q -f test2
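The interactive Qmgr commands above can also be fed to qmgr non-interactively on its standard input, which is convenient for scripting queue setup. A sketch using the test2 queue from the text (the cat stub exists only so the sketch can run on machines without PBS):

```shell
#!/bin/bash
# Stub qmgr when PBS is not installed, so the sketch is runnable anywhere.
if ! command -v qmgr >/dev/null 2>&1; then
    qmgr() { cat; }
fi

# Create and configure the queue in one batch of commands on stdin.
SETUP=$(qmgr <<'EOF'
create queue test2
set queue test2 queue_type=execution
set queue test2 priority=50
set queue test2 enabled=True
set queue test2 started=True
EOF
)
echo "$SETUP"
```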

#### [May 08, 2017] Betteridge's law of headlines

###### Apr 27, 2017 | en.wikipedia.org

The maxim has been cited by other names since as early as 1991, when a published compilation of Murphy's Law variants called it "Davis's law",[5] a name that also crops up online, without any explanation of who Davis was.[6][7] It has also been called just the "journalistic principle",[8] and in 2007 was referred to in commentary as "an old truism among journalists".[9]

Ian Betteridge's name became associated with the concept after he discussed it in a February 2009 article, which examined a previous TechCrunch article that carried the headline "Did Last.fm Just Hand Over User Listening Data To the RIAA?":[10]

This story is a great demonstration of my maxim that any headline which ends in a question mark can be answered by the word "no." The reason why journalists use that style of headline is that they know the story is probably bullshit, and don't actually have the sources and facts to back it up, but still want to run it. [1]

A similar observation was made by British newspaper editor Andrew Marr in his 2004 book My Trade, among Marr's suggestions for how a reader should interpret newspaper articles:

If the headline asks a question, try answering 'no'. Is This the True Face of Britain's Young? (Sensible reader: No.) Have We Found the Cure for AIDS? (No; or you wouldn't have put the question mark in.) Does This Map Provide the Key for Peace? (Probably not.) A headline with a question mark at the end means, in the vast majority of cases, that the story is tendentious or over-sold. It is often a scare story, or an attempt to elevate some run-of-the-mill piece of reporting into a national controversy and, preferably, a national panic. To a busy journalist hunting for real information a question mark means 'don't bother reading this bit'. [11]

Outside journalism

In the field of particle physics, the concept is known as Hinchliffe's Rule,[12][13] after physicist Ian Hinchliffe,[14] who stated that if a research paper's title is in the form of a yes–no question, the answer to that question will be "no".[14] The adage was humorously led into a Liar's paradox by a pseudonymous 1988 paper which bore the title "Is Hinchliffe's Rule True?"[13][14]

However, at least one article found that the "law" does not apply in research literature. [15]

#### [Nov 08, 2015] 2013 Keynote: Dan Quinlan: C++ Use in High Performance Computing Within DOE: Past and Future

The most brilliant idea takes great execution to be worth $20,000,000. That's why I don't want to hear people's ideas. I'm not interested until I see their execution. (This post originally appeared on my O'Reilly blog on August 16, 2005. I'm re-posting it here since their site is getting filled with ads.)

#### [Oct 14, 2011] Dennis Ritchie, 70, Dies, Programming Trailblazer - by Steve Lohr

###### October 13, 2011 | NYTimes.com

Dennis M. Ritchie, who helped shape the modern digital era by creating software tools that power things as diverse as search engines like Google and smartphones, was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.

Mr. Ritchie, who lived alone, was in frail health in recent years after treatment for prostate cancer and heart disease, said his brother Bill.

In the late 1960s and early '70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator.

The C programming language, a shorthand of words, numbers and punctuation, is still widely used today, and successors like C++ and Java build on the ideas, rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly had a rich and enduring impact. Its free, open-source variant, Linux, powers many of the world's data centers, like those at Google and Amazon, and its technology serves as the foundation of operating systems, like Apple's iOS, in consumer computing devices.

"The tools that Dennis built - and their direct descendants - run pretty much everything today," said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.

Those tools were more than inventive bundles of computer code.
The C language and Unix reflected a point of view, a different philosophy of computing than what had come before. In the late '60s and early '70s, minicomputers were moving into companies and universities - smaller and at a fraction of the price of hulking mainframes. Minicomputers represented a step in the democratization of computing, and Unix and C were designed to open up computing to more people and collaborative working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were making not merely software but, as Mr. Ritchie once put it, "a system around which fellowship can form." C was designed for systems programmers who wanted to get the fastest performance from operating systems, compilers and other programs. "C is not a big language - it's clean, simple, elegant," Mr. Kernighan said. "It lets you get close to the machine, without getting tied up in the machine." Such higher-level languages had earlier been intended mainly to let people without a lot of programming skill write programs that could run on mainframes. Fortran was for scientists and engineers, while Cobol was for business managers. C, like Unix, was designed mainly to let the growing ranks of professional programmers work more productively. And it steadily gained popularity. With Mr. Kernighan, Mr. Ritchie wrote a classic text, "The C Programming Language," also known as "K. & R." after the authors' initials, whose two editions, in 1978 and 1988, have sold millions of copies and been translated into 25 languages. Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y. His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J., where Mr. Ritchie grew up and attended high school. He then went to Harvard, where he majored in applied mathematics. While a graduate student at Harvard, Mr. 
Ritchie worked at the computer center at the Massachusetts Institute of Technology, and became more interested in computing than math. He was recruited by the Sandia National Laboratories, which conducted weapons research and testing. "But it was nearly 1968," Mr. Ritchie recalled in an interview in 2001, "and somehow making A-bombs for the government didn't seem in tune with the times."

Mr. Ritchie joined Bell Labs in 1967, and soon began his fruitful collaboration with Mr. Thompson on both Unix and the C programming language. The pair represented the two different strands of the nascent discipline of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson came from electrical engineering.

"We were very complementary," said Mr. Thompson, who is now an engineer at Google. "Sometimes personalities clash, and sometimes they meld. It was just good with Dennis."

Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham, England.

Mr. Ritchie traveled widely and read voraciously, but friends and family members say his main passion was his work. He remained at Bell Labs, working on various research projects, until he retired in 2007.

Colleagues who worked with Mr. Ritchie were struck by his code - meticulous, clean and concise. His writing, according to Mr. Kernighan, was similar. "There was a remarkable precision to his writing," Mr. Kernighan said, "no extra words, elegant and spare, much like his code."

#### [Apr 24, 2011] A Short Guide To Lifestyle Design (LSD) The 7 Core Skills Of The Cyberpunk Survivalist

###### February 28, 2011 | Sublime Oblivion

Disagree that a person can become a competent computer programmer in under a year. Well, maybe the exceptional genius… For most people, it takes a minimum of 3 years to master the skills required to be a decent coder.
It's not just about learning Java (which I do agree is a good computer language to start with), there are certain prerequisites. Fortunately, not a lot of math is required, high-school algebra is sufficient, plus a grasp of "functions" (because programmers usually have to write a lot of functions). On the other hand, boolean logic is absolutely required, and that's more than just knowing the difference between logical AND and logical OR (or XOR). Also, if one gets into databases (my specialty, actually), then one also needs to master the mathematics of set theory. And a real programmer also needs to be able to write (and understand) a recursion algorithm.

For example, every time I have interviewed a potential coder, I have asked them, "Are you familiar with the 'Towers of Hanoi' algorithm?" If they don't know what that is, they still have a chance to impress me if they can describe a B-tree navigation algorithm. That's first- or second-year computer science stuff. If they can't recurse a directory tree (using whatever programming language of their choice), then they aren't a real programmer. God knows there are plenty of fakes in the business.

Sorry for the rant. Having to deal with "pretend programmers" (rookies who think they're programmers because they know how to update their Facebook page) is one of my pet peeves… Grrrrrrrr!

#### [Nov 30, 2010] Professor Sir Maurice Wilkes

###### Telegraph

The computer, known as EDSAC (Electronic Delay Storage Automatic Calculator), was a huge contraption that took up a room in what was the University's old Mathematical Library. It contained 3,000 vacuum valves arranged on 12 racks and used tubes filled with mercury for memory. Despite its impressive size, it could only carry out 650 operations per second.

Before the development of EDSAC, digital computers, such as the American Moore School's ENIAC (Electronic Numerical Integrator and Computer), were only capable of dealing with one particular type of problem.
To solve a different kind of problem, thousands of switches had to be reset and miles of cable re-routed. Reprogramming took days. In 1946, a paper by the Hungarian-born scientist John von Neumann and others suggested that the future lay in developing computers with memory which could not only store data, but also sets of instructions, or programs. Users would then be able to change programs, written in binary number format, without rewiring the whole machine. The challenge was taken up by three groups of scientists - one at the University of Manchester, an American team led by JW Mauchly and JP Eckert, and the Cambridge team led by Wilkes. Eckert and Mauchly had been working on developing a stored-program computer for two years before Wilkes became involved at Cambridge. While the University of Manchester machine, known as "Baby", was the first to store data and program, it was Wilkes who became the first to build an operational machine based on von Neumann's ideas (which form the basis for modern computers) to deliver a service. Wilkes chose to adopt mercury delay lines suggested by Eckert to serve as an internal memory store. In such a delay line, an electrical signal is converted into a sound wave travelling through a long tube of mercury at a speed of 1,450 metres per second. The signal can be transmitted back and forth along the tube, several of which were combined to form the machine's memory. This memory meant the computer could store both data and program. The main program was loaded by paper tape, but once loaded this was executed from memory, making the machine the first of its kind. After two years of development, on May 6 1949 Wilkes's EDSAC "rather suddenly" burst into life, computing a table of square numbers. From early 1950 it offered a regular computing service to the members of Cambridge University, the first of its kind in the world, with Wilkes and his group developing programs and compiling a program library. 
The world's first scientific paper to be published using computer calculations - a paper on genetics by RA Fisher – was completed with the help of EDSAC. Wilkes was probably the first computer programmer to spot the coming significance of program testing: "In 1949 as soon as we started programming", he recalled in his memoirs, "we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realised that a large part of my life from then on was going to be spent in finding mistakes in my own programs." In 1951 Wilkes (with David J Wheeler and Stanley Gill) published the world's first textbook on computer programming, Preparation of Programs for an Electronic Digital Computer. Two years later he established the world's first course in Computer Science at Cambridge. EDSAC remained in operation until 1958, but the future lay not in delay lines but in magnetic storage and, when it came to the end of its life, the machine was cannibalised and scrapped, its old program tapes used as streamers at Cambridge children's parties. Wilkes, though, remained at the forefront of computing technology and made several other breakthroughs. In 1958 he built EDSAC's replacement, EDSAC II, which not only incorporated magnetic storage but was the first computer in the world to have a micro-programmed control unit. In 1965 he published the first paper on cache memories, followed later by a book on time-sharing. In 1974 he developed the "Cambridge Ring", a digital communication system linking computers together. The network was originally designed to avoid the expense of having a printer at every computer, but the technology was soon developed commercially by others. When EDSAC was built, Wilkes sought to allay public fears by describing the stored-program computer as "a calculating machine operated by a moron who cannot think, but can be trusted to do what he is told". 
In 1964, however, predicting the world in "1984", he drew a more Orwellian picture: "How would you feel," he wrote, "if you had exceeded the speed limit on a deserted road in the dead of night, and a few days later received a demand for a fine that had been automatically printed by a computer coupled to a radar system and vehicle identification device? It might not be a demand at all, but simply a statement that your bank account had been debited automatically." Maurice Vincent Wilkes was born at Dudley, Worcestershire, on June 26 1913. His father was a switchboard operator for the Earl of Dudley whose extensive estate in south Staffordshire had its own private telephone network; he encouraged his son's interest in electronics and at King Edward VI's Grammar School, Stourbridge, Maurice built his own radio transmitter and was allowed to operate it from home. Encouraged by his headmaster, a Cambridge-educated mathematician, Wilkes went up to St John's College, Cambridge to read Mathematics, but he studied electronics in his spare time in the University Library and attended lectures at the Engineering Department. After obtaining an amateur radio licence he constructed radio equipment in his vacations with which to make contact, via the ionosphere, with radio "hams" around the world. Wilkes took a First in Mathematics and stayed on at Cambridge to do a PhD on the propagation of radio waves in the ionosphere. This led to an interest in tidal motion in the atmosphere and to the publication of his first book Oscillations of the Earth's Atmosphere (1949). In 1937 he was appointed university demonstrator at the new Mathematical Laboratory (later renamed the Computer Laboratory) housed in part of the old Anatomy School. When war broke out, Wilkes left Cambridge to work with R Watson-Watt and JD Cockroft on the development of radar. Later he became involved in designing aircraft, missile and U-boat radio tracking systems. 
In 1945 Wilkes was released from war work to take up the directorship of the Cambridge Mathematical Laboratory and given the task of constructing a computer service for the University. The following year he attended a course on "Theory and Techniques for Design of Electronic Digital Computers" at the Moore School of Electrical Engineering at the University of Pennsylvania, the home of the ENIAC. The visit inspired Wilkes to try to build a stored-program computer and on his return to Cambridge, he immediately began work on EDSAC. Wilkes was appointed Professor of Computing Technology in 1965, a post he held until his retirement in 1980. Under his guidance the Cambridge University Computer Laboratory became one of the country's leading research centres. He also played an important role as an adviser to British computer companies and was instrumental in founding the British Computer Society, serving as its first president from 1957 to 1960. After his retirement, Wilkes spent six years as a consultant to Digital Equipment in Massachusetts, and was Adjunct Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology from 1981 to 1985. Later he returned to Cambridge as a consultant researcher with a research laboratory funded variously by Olivetti, Oracle and AT&T, continuing to work until well into his 90s. Maurice Wilkes was elected a fellow of the Royal Society in 1956, a Foreign Honorary Member of the American Academy of Arts and Sciences in 1974, a Fellow of the Royal Academy of Engineering in 1976 and a Foreign Associate of the American National Academy of Engineering in 1977. He was knighted in 2000. Among other prizes he received the ACM Turing Award in 1967; the Faraday Medal of the Institute of Electrical Engineers in 1981; and the Harry Goode Memorial Award of the American Federation for Information Processing Societies in 1968. In 1985 he provided a lively account of his work in Memoirs of a Computer Pioneer. 
Maurice Wilkes married, in 1947, Nina Twyman. They had a son and two daughters.

#### [Apr 25, 2008] Interview with Donald Knuth By Donald E. Knuth, Andrew Binstock

###### Apr 25, 2008

Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren't widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code - both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?

Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn't surprised me during the past several decades. But it still hasn't reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.

For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.

Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp - for some reason, I really don't know why!
I'm quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers. Remember, though, that my opinion on economic questions is highly suspect, since I'm just an educator and scientist. I understand almost nothing about the marketplace. Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today's developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development? Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here's what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth's ALGOL W system (the predecessor of Pascal). My program didn't work the first time, but fortunately I could use Ed Satterthwaite's excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn't get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn't a fair contest. As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I'm feeling my way in a totally unknown environment and need feedback about what works and what doesn't. 
Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up." Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you've published for Volume 4 of The Art of Computer Programming (TAOCP) doesn't seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you're currently working on? Donald: The field of combinatorial algorithms is so vast that I'll be lucky to pack its sequential aspects into three or four physical volumes, and I don't think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity. Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions? Donald: I don't want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. 
To me, it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won't be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Titanium" approach that was supposed to be so terrific-until it turned out that the wished-for compilers were basically impossible to write. Let me put it this way: During the past 50 years, I've written well over a thousand programs, many of which have substantial size. I can't think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.[1] How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I'm wrong. I know that important applications for parallelism exist-rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years. Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.) The machine I use today has dual processors. I get to use them both only when I'm running two independent jobs at the same time; that's nice, but it happens only a few minutes every week. 
If I had four processors, or eight, or more, I still wouldn't be any better off, considering the kind of work I do-even though I'm using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it's a pipe dream. (No-that's the wrong metaphor! "Pipelines" actually work for me, but threads don't. Maybe the word I want is "bubble.") From the opposite point of view, I do grant that web browsing probably will get better with multicores. I've been talking about my technical work, however, not recreation. I also admit that I haven't got many bright ideas about what I wish hardware designers would provide instead of multicores, now that they've begun to hit a wall with respect to sequential computation. (But my MMIX design contains several ideas that would substantially improve the current performance of the kinds of programs that concern me most-at the cost of incompatibility with legacy x86 programs.) Andrew: One of the few projects of yours that hasn't been embraced by a widespread community is literate programming. What are your thoughts about why literate programming didn't catch on? And is there anything you'd have done differently in retrospect regarding literate programming? Donald: Literate programming is a very personal thing. I think it's terrific, but that might well be because I'm a very strange person. It has tens of thousands of fans, but not millions. In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay-I'd give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. 
Since they're understood by a vast community of programmers, most people have no big incentive to change, just as I'm not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched). Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn't taken the whole world by storm. He observed that a small percentage of the world's population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets. Yet to me, literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s-it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I've ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably. If people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming. Literate programming is what you need to rise above the ordinary level of achievement. But I don't believe in forcing ideas on anybody. If literate programming isn't your style, please forget it and do what you like. If nobody likes it but me, let it die. On a positive note, I've been pleased to discover that the conventions of CWEB are already standard equipment within preinstalled software such as Makefiles, when I get off-the-shelf Linux these days. Andrew: In Fascicle 1 of Volume 1, you reintroduced the MMIX computer, which is the 64-bit upgrade to the venerable MIX machine comp-sci students have come to know over many years. 
You previously described MMIX in great detail in MMIXware. I've read portions of both books, but can't tell whether the Fascicle updates or changes anything that appeared in MMIXware, or whether it's a pure synopsis. Could you clarify?

Donald: Volume 1 Fascicle 1 is a programmer's introduction, which includes instructive exercises and such things. The MMIXware book is a detailed reference manual, somewhat terse and dry, plus a bunch of literate programs that describe prototype software for people to build upon. Both books define the same computer (once the errata to MMIXware are incorporated from my website). For most readers of TAOCP, the first fascicle contains everything about MMIX that they'll ever need or want to know. I should point out, however, that MMIX isn't a single machine; it's an architecture with almost unlimited varieties of implementations, depending on different choices of functional units, different pipeline configurations, different approaches to multiple-instruction-issue, different ways to do branch prediction, different cache sizes, different strategies for cache replacement, different bus speeds, etc. Some instructions and/or registers can be emulated with software on "cheaper" versions of the hardware. And so on. It's a test bed, all simulatable with my meta-simulator, even though advanced versions would be impossible to build effectively until another five years go by (and then we could ask for even further advances just by advancing the meta-simulator specs another notch). Suppose you want to know if five separate multiplier units and/or three-way instruction issuing would speed up a given MMIX program. Or maybe the instruction and/or data cache could be made larger or smaller or more associative. Just fire up the meta-simulator and see what happens.
Andrew: As I suspect you don't use unit testing with MMIXAL, could you step me through how you go about making sure that your code works correctly under a wide variety of conditions and inputs? If you have a specific work routine around verification, could you describe it?

Donald: Most examples of machine language code in TAOCP appear in Volumes 1-3; by the time we get to Volume 4, such low-level detail is largely unnecessary and we can work safely at a higher level of abstraction. Thus, I've needed to write only a dozen or so MMIX programs while preparing the opening parts of Volume 4, and they're all pretty much toy programs, nothing substantial. For little things like that, I just use informal verification methods, based on the theory that I've written up for the book, together with the MMIXAL assembler and MMIX simulator that are readily available on the Net (and described in full detail in the MMIXware book). That simulator includes debugging features like the ones I found so useful in Ed Satterthwaite's system for ALGOL W, mentioned earlier. I always feel quite confident after checking a program with those tools.

Andrew: Despite its formulation many years ago, TeX is still thriving, primarily as the foundation for LaTeX. While TeX has been effectively frozen at your request, are there features that you would want to change or add to it, if you had the time and bandwidth? If so, what are the major items you would add or change?

Donald: I believe changes to TeX would cause much more harm than good. Other people who want other features are creating their own systems, and I've always encouraged further development, except that nobody should give their program the same name as mine. I want to take permanent responsibility for TeX and Metafont, and for all the nitty-gritty things that affect existing documents that rely on my work, such as the precise dimensions of characters in the Computer Modern fonts.
Andrew: One of the little-discussed aspects of software development is how to do design work on software in a completely new domain. You were faced with this issue when you undertook TeX: No prior art was available to you as source code, and it was a domain in which you weren't an expert. How did you approach the design, and how long did it take before you were comfortable entering into the coding portion?

Donald: That's another good question! I've discussed the answer in great detail in Chapter 10 of my book Literate Programming, together with Chapters 1 and 2 of my book Digital Typography. I think that anybody who is really interested in this topic will enjoy reading those chapters. (See also Digital Typography Chapters 24 and 25 for the complete first and second drafts of my initial design of TeX in 1977.)

Andrew: The books on TeX and the program itself show a clear concern for limiting memory usage, an important problem for systems of that era. Today, the concern for memory usage in programs has more to do with cache sizes. As someone who has designed a processor in software, the issues of cache-aware and cache-oblivious algorithms surely must have crossed your radar screen. Is the role of processor caches on algorithm design something that you expect to cover, even if indirectly, in your upcoming work?

Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it's a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters. In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).

Andrew: What set of tools do you use today for writing TAOCP? Do you use TeX? LaTeX? CWEB? Word processor?
And what do you use for the coding?

Donald: My general working style is to write everything first with pencil and paper, sitting beside a big wastebasket. Then I use Emacs to enter the text into my machine, using the conventions of TeX. I use tex, dvips, and gv to see the results, which appear on my screen almost instantaneously these days. I check my math with Mathematica. I program every algorithm that's discussed (so that I can thoroughly understand it) using CWEB, which works splendidly with the GDB debugger. I make the illustrations with MetaPost (or, in rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some homemade tools, like my own spell-checker for TeX and CWEB within Emacs. I designed my own bitmap font for use with Emacs, because I hate the way the ASCII apostrophe and the left open quote have morphed into independent symbols that no longer match each other visually. I have special Emacs modes to help me classify all the tens of thousands of papers and notes in my files, and special Emacs keyboard shortcuts that make bookwriting a little bit like playing an organ. I prefer rxvt to xterm for terminal input. Since last December, I've been using a file backup system called backupfs, which meets my need beautifully to archive the daily state of every file. According to the current directories on my machine, I've written 68 different CWEB programs so far this year. There were about 100 in 2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has an extremely convenient "change file" mechanism, with which I can rapidly create multiple versions and variations on a theme; so far in 2008 I've made 73 variations on those 68 themes. (Some of the variations are quite short, only a few bytes; others are 5KB or more. Some of the CWEB programs are quite substantial, like the 55-page BDD package that I completed in January.) Thus, you can see how important literate programming is in my life.
I currently use Ubuntu Linux, on a standalone laptop; it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux. Incidentally, with Linux I much prefer the keyboard focus that I can get with classic FVWM to the GNOME and KDE environments that other people seem to like better. To each his own.

Andrew: You state in the preface of Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely will comprise three volumes and possibly more. It's clear from the text that you're really enjoying writing on this topic. Given that, what is your confidence in the note posted on the TAOCP website that Volume 5 will see the light of day by 2015?

Donald: If you check the Wayback Machine for previous incarnations of that web page, you will see that the number 2015 has not been constant. You're certainly correct that I'm having a ball writing up this material, because I keep running into fascinating facts that simply can't be left out, even though more than half of my notes don't make the final cut. Precise time estimates are impossible, because I can't tell until getting deep into each section how much of the stuff in my files is going to be really fundamental and how much of it is going to be irrelevant to my book or too advanced. A lot of the recent literature is academic one-upmanship of limited interest to me; authors these days often introduce arcane methods that outperform the simpler techniques only when the problem size exceeds the number of protons in the universe. Such algorithms could never be important in a real computer application. I read hundreds of such papers to see if they might contain nuggets for programmers, but most of them wind up getting short shrift. From a scheduling standpoint, all I know at present is that I must someday digest a huge amount of material that I've been collecting and filing for 45 years.
I gain important time by working in batch mode: I don't read a paper in depth until I can deal with dozens of others on the same topic during the same week. When I finally am ready to read what has been collected about a topic, I might find out that I can zoom ahead because most of it is eminently forgettable for my purposes. On the other hand, I might discover that it's fundamental and deserves weeks of study; then I'd have to edit my website and push that number 2015 closer to infinity.

Andrew: In late 2006, you were diagnosed with prostate cancer. How is your health today?

Donald: Naturally, the cancer will be a serious concern. I have superb doctors. At the moment I feel as healthy as ever, modulo being 70 years old. Words flow freely as I write TAOCP and as I write the literate programs that precede drafts of TAOCP. I wake up in the morning with ideas that please me, and some of those ideas actually please me also later in the day when I've entered them into my computer. On the other hand, I willingly put myself in God's hands with respect to how much more I'll be able to do before cancer or heart disease or senility or whatever strikes. If I should unexpectedly die tomorrow, I'll have no reason to complain, because my life has been incredibly blessed. Conversely, as long as I'm able to write about computer science, I intend to do my best to organize and expound upon the tens of thousands of technical papers that I've collected and made notes on since 1962.

Andrew: On your website, you mention that the Peoples Archive recently made a series of videos in which you reflect on your past life. In segment 93, "Advice to Young People," you advise that people shouldn't do something simply because it's trendy. As we know all too well, software development is as subject to fads as any other discipline.
Can you give some examples that are currently in vogue, which developers shouldn't adopt simply because they're currently popular or because that's the way they're currently done? Would you care to identify important examples of this outside of software development?

Donald: Hmm. That question is almost contradictory, because I'm basically advising young people to listen to themselves rather than to others, and I'm one of the others. Almost every biography of every person whom you would like to emulate will say that he or she did many things against the "conventional wisdom" of the day. Still, I hate to duck your questions even though I also hate to offend other people's sensibilities, given that software methodology has always been akin to religion. With the caveat that there's no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I've ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go... with one exception. The exception is the idea of working in teams and reading each other's code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me. I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you're totally convinced that reusable code is wonderful, I probably won't be able to sway you anyway, but you'll never convince me that reusable code isn't mostly a menace.

Here's a question that you may well have meant to ask: Why is the new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1?
The answer is that computer programmers will understand that I wasn't ready to begin writing Volume 4 of TAOCP at its true beginning point, because we know that the initialization of a program can't be written until the program itself takes shape. So I started in 2005 with Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of Star Wars, which began with Episode 4.) Finally I was psyched up to write the early parts, but I soon realized that the introductory sections needed to include much more stuff than would fit into a single fascicle. Therefore, remembering Dijkstra's dictum that counting should begin at 0, I decided to launch Volume 4 with Fascicle 0. Look for Volume 4 Fascicle 1 later this year.

References

[1] My colleague Kunle Olukotun points out that, if the usage of TeX became a major bottleneck so that people had a dozen processors and really needed to speed up their typesetting terrifically, a super-parallel version of TeX could be developed that uses "speculation" to typeset a dozen chapters at once: Each chapter could be typeset under the assumption that the previous chapters don't do anything strange to mess up the default logic. If that assumption fails, we can fall back on the normal method of doing a chapter at a time; but in the majority of cases, when only normal typesetting was being invoked, the processing would indeed go 12 times faster. Users who cared about speed could adapt their behavior and use TeX in a disciplined way.

Andrew Binstock is the principal analyst at Pacific Data Works. He is a columnist for SD Times and senior contributing editor for InfoWorld magazine. His blog can be found at: http://binstock.blogspot.com.

#### [Feb 21, 2008] Project details for Bare Bones interpreter

###### freshmeat.net

BareBones is an interpreter for the "Bare Bones" programming language defined in Chapter 11 of "Computer Science: An Overview", 9th Edition, by J. Glenn Brookshear.
Release focus: Minor feature enhancements

Changes: Identifiers were made case-insensitive. A summary of the language was added to the README file.

Author: Eric Smith

#### Bill Joy Quotes

• You can't prove anything about a program written in C or FORTRAN. It's really just Peek and Poke with some syntactic sugar.

• There are a couple of people in the world who can really program in C or FORTRAN. They write more code in less time than it takes for other programmers. Most programmers aren't that good. The problem is that those few programmers who crank out code aren't interested in maintaining it.

• The buzzwords of the 1980s are mips and megaflops. The buzzwords of the 1990s will be verification, reliability, and understandability.

• Xerox PARC was a great environment because they had great people, enough money to build real systems, and management that protected them from management.

• The best way to do research is to make a radical assumption and then assume it's true. For me, I use the assumption that object-oriented programming is the way to go.

• At Sun, we don't believe in the Soviet model of economic planning.

• You can drive a car by looking in the rear view mirror as long as nothing is ahead of you. Not enough software professionals are engaged in forward thinking.

• The standard definition of AI is that which we don't understand.

• Questioner [whose initials are EF]: You mentioned in your talk about a catastrophic event taking place ten years from now that will depress you. Looking back, doesn't UNIX depress you? It's now 1989 and UNIX is 1968.

BJ: Standards for UNIX are coming. This will halt progress. That will provide the opportunity for something better to enter the marketplace. You see, as long as UNIX keeps slipping and makes some progress, nothing better will come along. The UNIX standards committees are therefore doing us a great service by slowing down and eventually halting the progress of UNIX.
• Generic software has absolutely no value.

• The GNU approach to software is one extreme. Of course, it violates my axiom of the top programmers demanding lots of money for their products.

#### [Jan 1, 2008] Computer Science Education: Where Are the Software Engineers of Tomorrow?

###### STSC CrossTalk

Computer Science Education: Where Are the Software Engineers of Tomorrow?

Dr. Robert B.K. Dewar, AdaCore Inc.
Dr. Edmond Schonberg, AdaCore Inc.

It is our view that Computer Science (CS) education is neglecting basic skills, in particular in the areas of programming and formal methods. We consider that the general adoption of Java as a first programming language is in part responsible for this decline. We examine briefly the set of programming skills that should be part of every software professional's repertoire.

It is all about programming! Over the last few years we have noticed worrisome trends in CS education. The following represents a summary of those trends:

1. Mathematics requirements in CS programs are shrinking.
2. The development of programming skills in several languages is giving way to cookbook approaches using large libraries and special-purpose packages.
3. The resulting set of skills is insufficient for today's software industry (in particular for safety and security purposes) and, unfortunately, matches well what the outsourcing industry can offer. We are training easily replaceable professionals.

These trends are visible in the latest curriculum recommendations from the Association for Computing Machinery (ACM). Curriculum 2005 does not mention mathematical prerequisites at all, and it mentions only one course in the theory of programming languages [1]. We have seen these developments from both sides: As faculty members at New York University for decades, we have regretted the introduction of Java as a first language of instruction for most computer science majors.
We have seen how it has weakened the formation of our students, as reflected in their performance in systems and architecture courses. As founders of a company that specializes in Ada programming tools for mission-critical systems, we find it harder to recruit qualified applicants who have the right foundational skills. We want to advocate a more rigorous formation, in which formal methods are introduced early on, and programming languages play a central role in CS education.

Formal Methods and Software Construction

Formal techniques for proving the correctness of programs were an extremely active subject of research 20 years ago. However, the methods (and the hardware) of the time prevented these techniques from becoming widespread, and as a result they are more or less ignored by most CS programs. This is unfortunate because the techniques have evolved to the point that they can be used in large-scale systems and can contribute substantially to the reliability of these systems. A case in point is the use of SPARK in the re-engineering of the ground-based air traffic control system in the United Kingdom (see a description of iFACTS – Interim Future Area Control Tools Support, at <www.nats.co.uk/article/90>). SPARK is a subset of Ada augmented with assertions that allow the designer to prove important properties of a program: termination, absence of run-time exceptions, finite memory usage, etc. [2]. It is obvious that this kind of design and analysis methodology (dubbed Correctness by Construction) will add substantially to the reliability of a system whose design has involved SPARK from the beginning. However, PRAXIS, the company that developed SPARK and which is designing iFACTS, finds it hard to recruit people with the required mathematical competence (and this is the case even in the United Kingdom, where formal methods are more widely taught and used than in the United States).
Another formal approach to which CS students need exposure is model checking and linear temporal logic for the design of concurrent systems. For a modern discussion of the topic, which is central to mission-critical software, see [3]. Another area of computer science which we find neglected is the study of floating-point computations. At New York University, a course in numerical methods and floating-point computing used to be required, but this requirement was dropped many years ago, and now very few students take this course. The topic is vital to all scientific and engineering software and is semantically delicate. One would imagine that it would be a required part of all courses in scientific computing, but these often take MatLab to be the universal programming tool and ignore the topic altogether.

The Pitfalls of Java as a First Programming Language

Because of its popularity in the context of Web applications and the ease with which beginners can produce graphical programs, Java has become the most widely used language in introductory programming courses. We consider this to be a misguided attempt to make programming more fun, perhaps in reaction to the drop in CS enrollments that followed the dot-com bust. What we observed at New York University is that the Java programming courses did not prepare our students for the first course in systems, much less for more advanced ones. Students found it hard to write programs that did not have a graphic interface, had no feeling for the relationship between the source program and what the hardware would actually do, and (most damaging) did not understand the semantics of pointers at all, which made the use of C in systems programming very challenging. Let us propose the following principle: The irresistible beauty of programming consists in the reduction of complex formal processes to a very small set of primitive operations.
Java, instead of exposing this beauty, encourages the programmer to approach problem-solving like a plumber in a hardware store: by rummaging through a multitude of drawers (i.e., packages) we will end up finding some gadget (i.e., class) that does roughly what we want. How it does it is not interesting! The result is a student who knows how to put a simple program together, but does not know how to program. A further pitfall of the early use of Java libraries and frameworks is that it is impossible for the student to develop a sense of the run-time cost of what is written because it is extremely hard to know what any method call will eventually execute. A lucid analysis of the problem is presented in [4]. We are seeing some backlash to this approach. For example, Bjarne Stroustrup reports from Texas A&M University that the industry is showing increasing unhappiness with the results of this approach. Specifically, he notes the following:

I have had a lot of complaints about that [the use of Java as a first programming language] from industry, specifically from AT&T, IBM, Intel, Bloomberg, NI, Microsoft, Lockheed-Martin, and more. [5]

He noted in a private discussion on this topic, reporting the following:

It [Texas A&M] did [teach Java as the first language]. Then I started teaching C++ to the electrical engineers and when the EE students started to out-program the CS students, the CS department switched to C++. [5]

It will be interesting to see how many departments follow this trend. At AdaCore, we are certainly aware of many universities that have adopted Ada as a first language because of similar concerns.

A Real Programmer Can Write in Any Language (C, Java, Lisp, Ada)

Software professionals of a certain age will remember the slogan of old-timers from two generations ago when structured programming became the rage: Real programmers can write Fortran in any language.
The slogan is a reminder of how the thinking habits of programmers are influenced by the first language they learn and how hard it is to shake these habits if you do all your programming in a single language. Conversely, we want to say that a competent programmer is comfortable with a number of different languages and that the programmer must be able to use the mental tools favored by one of them, even when programming in another. For example, the user of an imperative language such as Ada or C++ must be able to write in a functional style, acquired through practice with Lisp and ML1, when manipulating recursive structures. This is one indication of the importance of learning a number of different programming languages in depth. What follows summarizes what we think are the critical contributions that well-established languages make to the mental tool-set of real programmers. For example, a real programmer should be able to program inheritance and dynamic dispatching in C, information hiding in Lisp, tree manipulation libraries in Ada, and garbage collection in anything but Java. The study of a wide variety of languages is, thus, indispensable to the well-rounded programmer.

Why C Matters

C is the low-level language that everyone must know. It can be seen as a portable assembly language, and as such it exposes the underlying machine and forces the student to understand clearly the relationship between software and hardware. Performance analysis is more straightforward, because the cost of every software statement is clear. Finally, compilers (GCC for example) make it easy to examine the generated assembly code, which is an excellent tool for understanding machine language and architecture.

Why C++ Matters

C++ brings to C the fundamental concepts of modern software engineering: encapsulation with classes and namespaces, information hiding through protected and private data and operations, programming by extension through virtual methods and derived classes, etc.
C++ also pushes storage management as far as it can go without full-blown garbage collection, with constructors and destructors.

Why Lisp Matters

Every programmer must be comfortable with functional programming and with the important notion of referential transparency. Even though most programmers find imperative programming more intuitive, they must recognize that in many contexts a functional, stateless style is clear, natural, easy to understand, and efficient to boot. An additional benefit of the practice of Lisp is that the program is written in what amounts to abstract syntax, namely the internal representation that most compilers use between parsing and code generation. Knowing Lisp is thus an excellent preparation for any software work that involves language processing. Finally, Lisp (at least in its lean Scheme incarnation) is amenable to a very compact self-definition. Seeing a complete Lisp interpreter written in Lisp is an intellectual revelation that all computer scientists should experience.

Why Java Matters

Despite our comments on Java as a first or only language, we think that Java has an important role to play in CS instruction. We will mention only two aspects of the language that must be part of the real programmer's skill set:

1. An understanding of concurrent programming (for which threads provide a basic low-level model).
2. Reflection, namely the understanding that a program can be instrumented to examine its own state and to determine its own behavior in a dynamically changing environment.

Why Ada Matters

Ada is the language of software engineering par excellence. Even when it is not the language of instruction in programming courses, it is the language chosen to teach courses in software engineering. This is because the notions of strong typing, encapsulation, information hiding, concurrency, generic programming, inheritance, and so on, are embodied in specific features of the language.
From our experience and that of our customers, we can say that a real programmer writes Ada in any language. For example, an Ada programmer accustomed to Ada's package model, which strongly separates specification from implementation, will tend to write C in a style where well-commented header files act in somewhat the same way as package specs in Ada. The programmer will include bounds checking and consistency checks when passing mutable structures between subprograms to mimic the strong-typing checks that Ada mandates [6]. She will organize concurrent programs into tasks and protected objects, with well-defined synchronization and communication mechanisms. The concurrency features of Ada are particularly important in our age of multi-core architectures. We find it surprising that these architectures should be presented as a novel challenge to software design when Ada had well-designed mechanisms for writing safe, concurrent software 30 years ago.

Programming Languages Are Not the Whole Story

A well-rounded CS curriculum will include an advanced course in programming languages that covers a wide variety of languages, chosen to broaden the understanding of the programming process, rather than to build a résumé in perceived hot languages. We are somewhat dismayed to see the popularity of scripting languages in introductory programming courses. Such languages (JavaScript, PHP, Atlas) are indeed popular tools for today's Web applications, but they have all the pedagogical defects that we ascribe to Java and provide no opportunity to learn algorithms and performance analysis. Their absence of strong typing leads to a trial-and-error programming style and prevents students from acquiring the discipline of separating the design of interfaces from specifications. However, teaching the right languages alone is not enough. Students need to be exposed to the tools to construct large-scale reliable programs, as we discussed at the start of this article.
Topics of relevance are studying formal specification methods and formal proof methodologies, as well as gaining an understanding of how high-reliability code is certified in the real world. When you step into a plane, you are putting your life in the hands of software which had better be totally reliable. As a computer scientist, you should have some knowledge of how this level of reliability is achieved. In this day and age, the fear of terrorist cyber attacks has given a new urgency to the building of software that is not only bug free, but is also immune from malicious attack. Such high-security software relies even more extensively on formal methodologies, and our students need to be prepared for this new world.

References

1. Joint Taskforce for Computing Curricula. "Computing Curricula 2005: The Overview Report." ACM/AIS/IEEE, 2005 <www.acm.org/education/curric_vols/CC2005-March06Final.pdf>.
2. Barnes, John. High Integrity Ada: The SPARK Approach. Addison-Wesley, 2003.
3. Ben-Ari, M. Principles of Concurrent and Distributed Programming. 2nd ed. Addison-Wesley, 2006.
4. Mitchell, Nick, Gary Sevitsky, and Harini Srinivasan. "The Diary of a Datum: An Approach to Analyzing Runtime Complexity in Framework-Based Applications." Workshop on Library-Centric Software Design, Object-Oriented Programming, Systems, Languages, and Applications, San Diego, CA, 2005.
5. Stroustrup, Bjarne. Private communication. Aug. 2007.
6. Holzmann, Gerard J. "The Power of Ten – Rules for Developing Safety Critical Code." IEEE Computer June 2006: 93-95.

Note

1. Several programming language and system names have evolved from acronyms whose formal spellings are no longer considered applicable to the current names by which they are readily known. ML, Lisp, GCC, PHP, and SPARK fall under this category.
#### Who Killed the Software Engineer? (Hint: It Happened in College)

One of the article's main points (one that was misunderstood, Dewar tells me) is that the adoption of Java as a first programming language in college courses has led to this decline. Not exactly. Yes, Dewar believes that Java's graphic libraries allow students to cobble together software without understanding the underlying source code. But the problem with CS programs goes far beyond their focus on Java, he says. "A lot of it is, 'Let's make this all more fun.' You know, 'Math is not fun, let's reduce math requirements. Algorithms are not fun, let's get rid of them. Ewww – graphic libraries, they're fun. Let's have people mess with libraries. And [forget] all this business about the command line – we'll have people use nice visual interfaces where they can point and click and do fancy graphic stuff and have fun.'"

Dewar says his email in-box is crammed full of positive responses to his article, from students as well as employers. Many readers have thanked him for speaking up about a situation they believe needs addressing, he says. One email was from an IT staffer who is working with a junior programmer. The older worker suggested that the young engineer check the call stack to see about a problem, but unfortunately, "he'd never heard of a call stack."

#### Mama, Don't Let Your Babies Grow Up to Be Cowboys (or Computer Programmers)

At fault, in Dewar's view, are universities that are desperate to make up for lower enrollment in CS programs – even if that means gutting the programs. It's widely acknowledged that enrollments in computer science programs have declined. The chief causes: the dotcom crash made a CS career seem scary, and the never-ending headlines about outsourcing make it seem even scarier. Once seen as a reliable meal ticket, a CS degree is now viewed by some concerned parents with an anxiety usually reserved for Sociology or Philosophy degrees. Why waste your time?
College administrators are understandably alarmed by smaller student head counts. "Universities tend to be in the raw numbers mode," Dewar says. "'Oh my God, the number of computer science majors has dropped by a factor of two, how are we going to reverse that?'" They've responded, he claims, by dumbing down programs, hoping to make them more accessible and popular. Aspects of the curriculum that are too demanding, or perceived as tedious, are downplayed in favor of simplified material that attracts a larger enrollment.

This effort is counterproductive, Dewar says. "To me, raw numbers are not necessarily the first concern. The first concern is that people get a good education." Students who have been spoon-fed easy material aren't prepared to compete globally. Dewar, who also co-owns a software company and so deals with clients and programmers internationally, says of new graduates: "We see French engineers much better trained than American engineers."

#### [Mar 2, 2007] Microsoft rolls out tutorial site for new programmers

##### Microsoft has unveiled a new Web site offering lessons to new programmers on building applications using the tools in Visual Studio 2005.

#### [Sep 30, 2006] Dreamsongs: Triggers & Practice: How Extremes in Writing Relate to Creativity and Learning [pdf]

I presented this keynote at XP/Agile Universe 2002 in Chicago, Illinois. The thrust of the talk is that it is possible to teach creative activities through an MFA process and to get better by practicing, but computer science and software engineering education on one hand, and software practices on the other, do not begin to match up to the discipline the arts demonstrate. Get to work.

#### [Sep 30, 2006] Google Code - Summer of Code

Welcome to the Summer of Code 2006 site. We are no longer accepting applications from students or mentoring organizations.
Students can view previously submitted applications and respond to mentor comments via the student home page. Accepted student projects will be announced on code.google.com/soc/ on May 23, 2006. You can talk to us in the Summer-Discuss-2006 group or via IRC in #summer-discuss on SlashNET. If you're feeling nostalgic, you can still access the Summer of Code 2005 site.

Participating Mentoring Organizations: AbiSource (ideas) Apache Software Foundation (ideas) ArgoUML (ideas) Beagle (ideas) Boost (ideas) ClamAV (ideas) Codehaus (ideas) Creative Commons (ideas) CUWiN Wireless Project (ideas) Debian (ideas) Django (Lawrence Journal-World) (ideas) Drupal (ideas) Etherboot Project (ideas) FreeBSD Project (ideas) Gallery (ideas) GCC (ideas) Gentoo (ideas) GNOME (ideas) Handhelds.org (ideas) Horde (ideas) ICU (ideas) Inkscape (ideas) Internet2 (ideas) Jabber Software Foundation (ideas) JXTA (ideas) KDE (ideas) Lanka Software Foundation (LSF) (ideas) LiveJournal (ideas) MoinMoin (ideas) Monotone (ideas) MythTV (ideas) Nmap Security Scanner (ideas) OhioLINK (ideas) Open Security Foundation (OSVDB) (ideas) Open Source Cluster Application Resources (OSCAR) (ideas) OpenOffice.org (ideas) openSUSE (ideas) PHP (ideas) Plone Foundation (ideas) PostgreSQL Project (ideas) Python Software Foundation (ideas) Refractions Research (ideas) Samba (ideas) Subversion (ideas) The Free Earth Foundation (ideas) The Free Software Initiative of Japan (ideas) The LLVM Compiler Infrastructure (ideas) The Mozilla Foundation (ideas) The Shmoo Group (ideas) The Wine Project (ideas) University of Michigan Aerospace Engineering & Space Science Departments WinLibre (ideas) XenSource (ideas) XMMS2 (ideas) XWiki (ideas)

Questions? Please peruse our Student FAQ, Mentor FAQ

#### [Jun 30, 2005] Art and Computer Programming by John Littler

##### Knuth's view holds; Stallman's view does not make any sense other than in the context of his cult :-)
See also Slashdot discussion Slashdot Is Programming Art ###### ONLamp.com Art and hand-waving are two things that a lot of people consider to go very well together. Art and computer programming, less so. Donald Knuth put them together when he named his wonderful multivolume set on algorithms The Art of Computer Programming, but Knuth chose a craft-oriented definition of art (PDF) in order to do so. ... ... ... Someone I didn't attempt to contact but whose words live on is Albert Einstein. Here are a couple of relevant quotes: [W]e do science when we reconstruct in the language of logic what we have seen and experienced. We do art when we communicate through forms whose connections are not accessible to the conscious mind yet we intuitively recognise them as something meaningful. Also: After a certain level of technological skill is achieved, science and art tend to coalesce in aesthetic plasticity and form. The greater scientists are artists as well.[1] This is a lofty place to start. Here's Fred Brooks with a more direct look at the subject: The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.[2] He doesn't say it's art, but it sure sounds a lot like it. In that vein, Andy Hunt from the Pragmatic Programmers says: It is absolutely an art. No question about it. Check out this quote from the Marines: An even greater part of the conduct of war falls under the realm of art, which is the employment of creative or intuitive skills. Art includes the creative, situational application of scientific knowledge through judgment and experience, and so the art of war subsumes the science of war. 
The art of war requires the intuitive ability to grasp the essence of a unique military situation and the creative ability to devise a practical solution. Sounds like a similar situation to software development to me. There are other similarities between programming and artists, see my essay at Art In Programming (PDF). I could go on for hours about the topic... Guido van Rossum, the creator of Python, has stronger alliances to Knuth's definition: I'm with Knuth's definition (or use) of the word art. To me, it relates strongly to creativity, which is very important for my line of work. If there was no art in it, it wouldn't be any fun, and then I wouldn't still be doing it after 30 years. Bjarne Stroustrup, the creator of C++, is also more like Knuth in refining his definition of art: When done right, art and craft blends seamlessly. That's the view of several schools of design, though of course not the view of people into "art as provocation". Define "craft"; define "art". The crafts and arts that I appreciate blend seamlessly into each other so that there is no dilemma. So far, these views are very top-down. What happens when you change the viewpoint? Paul Graham, programmer and author of Hackers and Painters, responded that he'd written quite a bit on the subject and to feel free to grab something. This was my choice: I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. 
Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging. For a long time I felt bad about this, just as I once felt bad that I didn't hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do.[3] Paul goes on to talk about the implications for software design and the joys of dynamic typing, which allows you to stay looser later. Now, we're right down to the code. This is what Richard Stallman, founder of the GNU Project and the Free Software Foundation, has to say (throwing in a geek joke for good measure): I would describe programming as a craft, which is a kind of art, but not a fine art. Craft means making useful objects with perhaps decorative touches. Fine art means making things purely for their beauty. Programming in general is not fine art, but some entries in the obfuscated C contest may qualify. I saw one that could be read as a story in English or as a C program. For the English reading one had to ignore punctuation--for instance, the name Charlotte might appear as char *lotte. (Once I was eating in Legal Sea Food and ordered arctic char. When it arrived, I looked for a signature, saw none, and complained to my friends, "This is an unsigned char. I wanted a signed char!" I would have complained to the waiter if I had thought he'd get the joke.) ... ... ... 
Constraints and Art

The existence of so many restraints in the actual practice of code writing makes it tempting to dismiss programming as art, but when you think about it, people who create recognized art have constraints too. Writers, painters, and so on all have their code--writers must be comprehensible in some sort of way in their chosen language. Musicians have tools of expression in scales, harmonies, and timbres. Painters might seem to be free of this, but cultural rules exist, as they do for the other categories. An artist can break rules in an inspired way and receive the highest praise for it--but sometimes only after they've been dead for a long time. Program syntax and logic might seem to be more restrictive than these rules, which is why it is more inspiring to think as Fred Brooks did--in the heart of the machine.

Perhaps it's more useful to look at the process. If there are ways in which the concept of art could be useful, then maybe we'll find them there. If we broadly take the process as consisting of idea, design, and implementation, it's clear that even if we don't accept that implementation is art, there is plenty of scope in the first two stages, and there's certainly scope in the combination. Thinking about it a little more also highlights the reductio ad absurdum of looking at any art in this way, where sculpture becomes the mere act of chiseling stone or painting is the application of paint to a surface.

Looking at the process immediately focuses on the different situations of the lone hacker or small team as opposed to large corporate teams, who in some cases send specification documents to people they don't even know in other countries. The latter groups hope that they've specified things in such detail that they need to know nothing about the code writers other than the fact that they can deliver.
The process for the lone hacker or small team might be almost unrecognizable as a process to an outsider--a process like that described by Paul Graham, where writing the code itself alters and shapes an idea and its design. The design stage is implicit and ongoing. If there is art in idea and design, then this is kneaded through the dough of the project like a special magic ingredient--the seamless combination that Bjarne Stroustrup mentioned. In less mystical terms, the process from beginning to end has strong degrees of integrity.

The situation with larger project groups is more difficult. More people means more time constraints on communication, just because the sums are bigger. There is an immediate tendency for the existence of more rules and a concomitant tendency for thinking inside the box. You can't actually order people to be creative and brilliant. You can only make the environment where it's more likely and hope for the best. Xerox PARC and Bell Labs are two good examples of that. The real question is how to be inspired for the small team, and additionally, how not to stop inspiration for the larger team. This is a question of personal development. Creative thinking requires knowledge outside of the usual and ordinary, and the freedom and imagination to roam.

Why It Matters

What's the prize? What's the point? At the micro level, it's an idea (which might not be a Wow idea) with a brilliant execution. At the macro level, it's a Wow idea (getting away from analogues, getting away from clones--something entirely new) brilliantly executed. I realize now that I should have also asked my responders, if they were sympathetic to the idea of programming as art, to nominate some examples. I'll do that myself. Maybe you'd like to nominate some more? I think of the early computer game Elite, made by a team of two, which extended the whole idea of games both graphically and in game play.
There are the first spreadsheets, VisiCalc and Lotus 1-2-3, for the elegance of the first concept even if you didn't want to use one. Even though I don't use it anymore, the C language is artistic for the elegance of its basic building blocks, which can be assembled to do almost anything. Anyway, go make some art. Why not?!

References

• [1] Alice Calaprice, The New Quotable Einstein, Princeton University Press.
• [2] Frederick P. Brooks, Jr., The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley, Reading, MA, anniversary edition 1995.
• [3] http://www.paulgraham.com/hp.html is an essay that's part of Hackers and Painters, published by O'Reilly.
• [4] p. 50, Application Development Advisor, May/June 2005.

John Littler is chief gopher for Mstation.org.

#### Art and Computer Programming/Discussion

###### ONLamp.com

• Medium vs Message 2005-07-06 06:30:03 nurbles

The article and many of the comments seem to be trying to consider whether programming is an art by examining coding. IMHO, that is no more valid than examining typing (or handwriting) to determine if a novelist is an artist, or examining hammers and chisels to decide if sculptors are artists. The code is a tool; the way that information is accessed, manipulated, and presented is the art that a programmer produces. Combining existing ideas in new ways (and creating completely new ideas) to make some chunk of information more useful (or whatever aesthetic pleases you) is what makes programming art.

• Well of course it's art 2005-07-06 05:33:56 kbw333

In recent years it has been said that art is something produced by an artist. This goes on to imply that if you're not an artist, your products cannot be seriously considered art. It justifies an artist's messy room as art, and everyone else's as a messy room. This point of view is also used to justify the publicity awarded to particular popular artists, while keeping the lid on others.
Occasionally, as if by osmosis, a new artist will be discovered and he/she'll join the ranks of established artists. When I was younger, there was talk of Arts and Crafts. These days it's the arts that get focus and there's little talk of craft. Craft seems to be implied in art. For example, a clever photograph will often require skill with a camera and, until recently, with film processing. This is an example of arts and crafts being bound together. These huge metal sculptures I keep bumping into are clearly art, but require knowledge of metal working to achieve them; again arts and crafts. I don't think you can have one without the other; it's a matter of emphasis.

It is said that the most complex structures built by mankind are software systems. This is not generally appreciated because most people cannot see them. Maybe that's a good thing, because if we saw them as buildings, we'd deem many of them unsafe. But this obscurity leads to a general failure to recognise the beauty of some software. It is clear that software construction is a craft. But you just need to try it to realise that it's art too. The whole idea of design patterns was an attempt to elevate the art in novices. There are many ways to construct software, but it's artistic input that makes it manageable, beautiful, and reliable.

• A good overview. 2005-07-06 01:05:15 aixguru1

Constraints are found in any type of art. For instance with painters, your canvas is a set size and your brushes are the ones (random things included) that you have nearby. You have a limited set of tools and constraints, but still the world is open to you. The same is true with programming. You have constraints, but as Brooks says, you build "castles in the air, from air". You can imagine, create, and manipulate your programs to do whatever you want them to do. One thing to note on another comment I noticed was a comparison of craft and true art based on skill levels. As with artists, programmers come with various degrees of skill.
Some art is primitive and unskilled, but practice makes perfect, and a programmer that works on his skillset can improve as well. One argument is that programming is structured and follows after others. The key is that art is the same. Look up figure drawing, for example, and you will find the techniques most artists use for basic figure drawing to create what later become works of art. Art also imitates nature and other artists in many cases. Sometimes imitation is the highest form of flattery. With programming, imitation is a key to success on various levels. For instance, I personally referenced and learned many things about coding from the Richard Stevens books and his examples. Many others have gained knowledge on "best practices" in coding as a basis to help them create. Much like the elliptical shapes, ovals, circles, and other shapes that make the lightly drawn poses in the start of a figure drawing, programming has those key APIs, bits of code, libraries, classes and "shapes" if you will that help you create the final "picture" that makes up what we call our art. The art of a programmer.

• art vs. craft 2005-07-05 23:06:02 unwesen

being a programmer myself, and working closely with artists, i have found that programming and writing/painting/composing music are rather similar activities. one of my artist friends in particular was interested in what programming actually is, so we struggled to find a definition for it. as it turns out, we quickly accepted that programming must be craft. we agreed that in order to be a reasonable artist, one has to be a good artisan. art that isn't executed well is a stroke of luck, might be beautiful, but is essentially meaningless. an artisan who puts thought and experience into the piece he creates, however, creates a manifestation of his thoughts, and thereby makes them accessible to others. craft, in other words, is a carrier medium for culture. we judge long-dead cultures by the 'things' they have made.
it is no accident that we call these things 'artifacts', from the latin words 'ars' (art) and 'facere' (to make, to create). if the products of craft are carriers of culture, what, then, is art? it's something you might call _inspired_ craft. again, if you look at the latin roots for 'inspiration', 'spirare' means 'to breathe'; receiving an 'inspiration' therefore is receiving the breath of life, the spirit that god reputedly breathed into us in order to make us alive. creating life is universally seen as creating something new - in the simplest sense, children are 'new' human beings. art, therefore, must be craft that has an element of newness to it. how do you achieve something new? by breaking the boundaries of the system within which everything 'old' exists. if craft is a carrier for our culture, art by definition must break with that culture. now there are two possibilities: either your art is rejected by the majority of people, or it is accepted as beautiful. in the first case, it might be anything - meaningless, ahead of its time, etc. in the second case, however, it will quickly become part of the culture it broke with - culture expands to embrace those slight deviations from its norm. art, therefore, is craft that advances our culture. in this i differ strongly from stallman's opinion that art is 'merely beautiful' - it must have an impact on our culture in order to be considered art. that might sound rather elitist, i'm afraid... yet consider that every culture contains subcultures, and the impact i'm speaking of does not have to be earth-shattering. a street musician known in one part of a smaller city for his inspired music is an artist, even if his art reaches a few hundred people at most. as long as it's not a mere reproduction of our culture, it's art. reading through all this again, i still agree that programming is mainly craft. if you are an inspired programmer, however, you might well create art.
as with conventional artists, whether your creation is art or craft may sometimes not be recognized until long after the act of creation. i could go on about this. i know there are some aspects still not covered, but this text is too long already. in closing i would like to use this text as an example of art vs. craft. i certainly know how to write, and to some extent how to phrase my thoughts in order to achieve certain effects. in that sense, i'm an artisan (although, admittedly, not a very good one). whether this text can be considered art depends very much on the readership: either i have restated the obvious, in which case it's merely poor craft, or i have managed to blow fresh thoughts into enough of your minds, in which case it might be considered a small work of art.

• art vs. craft 2005-07-06 05:41:15 evanh

I find it quite easy to merge your two definitions together by simply reducing, like your "street musician", the physical border of "our culture" to "my culture".

• Indeed it's art 2005-07-05 20:41:12 tonywilliams

I, too, program like Paul Graham. Most of the great programmers from whom I have learnt program exactly the same way - they just produce better code on the first pass than I do (g).

A nomination for art I've seen? Unix design was an example of art. The power of "everything is a file" and the concept of a pipe are pure art. I have recently been reading the O'Reilly title "Classic Shell Scripting" and it has examples of combining those two principles to produce amazing software - such as a spell checker in a single pipe. Rael Dornfest's original version of Blosxom was art. Blog software in a very small number of lines of Perl that used simplicity, the power of Perl and the facilities of the underlying OS. Since then the refinements and improvements have been like the final polish of a sculpture. # Tony

• I program like Paul Graham!
2005-07-05 16:32:53 makeme

TFA quotes Paul Graham as saying: "For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging."

This is how I program! Maybe I'm not as crappy a programmer as I originally thought!

• art and programming 2005-07-02 06:27:44 neilhorne

For a further development of this idea see: dotAtelier

#### James Gosling on Java

##### Java is a horrible language, but people are better than institutions :-)

###### Slashdot

Page 2 and scripting languages (Score:5, Interesting) by MarkEst1973 (769601) on Thursday June 30, @09:59PM (#12956728)

The entire second page of the article talks about scripting languages, specifically Javascript (in browsers) and Groovy.

1. Kudos to the Groovy [codehaus.org] authors. They've even garnered James Gosling's attention. If you write Java code and consider yourself even a little bit of a forward thinker, look up Groovy. It's a very important JSR (JSR-241 specifically).

2. He talks about Javascript solely from the point of view of the browser. Yes, I agree that Javascript is predominantly implemented in a browser, but its reach can be felt everywhere. Javascript == ActionScript (Flash scripting language). Javascript == CFScript (ColdFusion scripting language). Javascript object notation == Python object notation. But what about Javascript and Rhino's [mozilla.org] inclusion in Java 6 [sun.com]? I've been using Rhino as a server side language for a while now because Struts is way too verbose for my taste. I just want a thin glue layer between the web interface and my java components. I'm sick and tired of endless xml configuration (that means you, too, EJB!). A Rhino script on the server (with embedded Request, Response, Application, and Session objects) is the perfect glue that does not need xml configuration. (See also Groovy's Groovlets for a thin glue layer).

3.
Javascript has been called Lisp in C's clothing. Javascript (via Rhino) will be included in Java 6. I also read that Java 6 will allow access to the parse trees created by the javac compiler (same link as Java 6 above). Java is now Lisp? Paul Graham writes about 9 features [paulgraham.com] that made Lisp unique when it debuted in the 50s. Access to the parse trees is one of the most advanced features of Lisp. He argues that when a language has all 9 features (and Java today is at about #5), you've not created a new language but a dialect of Lisp. I am a Very Big Fan of dynamic languages that can flex like a pretzel to fit my problem domain. Is Java evolving to be that pretzel?

#### [May 12, 2003] What I Hate About Your Programming Language

##### The article is pretty weak, but the discussion after it contains some interesting points

###### ONLamp.com

The Pragmatic Programmers suggest learning a new language every year. This has already paid off for me. The more different languages I learn, the more I understand about programming in general. It's a lot easier to solve problems if you have a toolbox full of good tools.

Ideal language: Delphi w/ Clarion influence 2003-05-16 12:27:29 anonymous

Sadly Delphi/Kylix (Object Pascal) is often overlooked. Perl, Ruby, etc. are all fine for scripts, but in most cases a compiled program is a better way to go. Delphi lets you program procedurally like C, or with Objects like C++, only the union is much more natural. It prevents you from making many stupid mistakes, while allowing you 99.9% of the power C has. It borrows some syntax from perhaps better languages (Oberon, Modula, etc.), but has a much bigger and more useful standard library. (Unofficially, anyway...) It has never let me down...

FOXPRO (VFP) 2003-05-15 06:41:48 anonymous

VFP is great. It has its own easy to deploy runtime. You can compile to .exe. Its IDE is excellent.
It is complete with the front-end user interface, middle-ware code and its own multi-user safe & high performance database engine (desktop). BUT: M$ (aka the Borg) assimilated back in the early 90's what was then a cross-platform development tool. Now M$'s vision of cross-platform for VFP is multiple versions of Windows. Plus M$ can not make a lot of end-user money on a product whose runtime is free.
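The Slashdot comment above singles out access to the compiler's parse trees as one of Lisp's most advanced features, arriving in Java 6 via javac's exposed trees. Python already offers the same capability in its standard library through the `ast` module; a small illustrative sketch:

```python
# Parse-tree access in a mainstream language: Python's standard `ast`
# module parses source into a tree that programs can inspect and walk,
# the capability the comment above attributes to Lisp (and to Java 6's
# exposed javac trees).
import ast

source = "total = price * quantity + tax"
tree = ast.parse(source)

# Walk the tree and collect every variable name in the statement.
names = sorted({node.id for node in ast.walk(tree)
                if isinstance(node, ast.Name)})
```

Here `names` comes back as `['price', 'quantity', 'tax', 'total']`; tools like linters, refactoring aids, and code generators are built on exactly this kind of tree inspection.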

And what of C#?

2003-05-15 03:05:28 anonymous
I've found that C# grows on me faster than any other language I've used. At first I was very disappointed, saying it was just 9% better than Java. I was dismissive of the funny ways they use the new and override keywords until I understood they had addressed an important set of problems.

Having used it a while, I'd say it's very nice. Perhaps the best single advantage that C# has over Java, however, is that when it burst onto the public scene, it was much more complete than Java was for the first several years. Including libraries and documentation. It is of course completely unfair that the C# designers had years to use and study Delphi and Java and C++ before committing to a design for C#. So what!

The single best thing about C# may be that it works just as the documentation says it does. This alone is worth the price of admission (which is steep).

anonymous

You need to look at REXX 2003-05-14 07:25:47

Some great points on languages, but REXX beats them all in so many of the points you raise.

bob hamilton

> You need to look at REXX

2003-05-14 10:59:36 anonymous

I liked some parts of AREXX -- on the Amiga -- mostly the idea of the standard interprocess communication scripting. However, I always had problems with the syntax -- figuring out what was actually being passed, or being processed. It was weird. (I think in C.)

I eventually did figure out how to do useful things -- my favorite is a script that controls 3D image rendering in Lightwave, uses an external image processing program to apply motion blur and watermarks, then loads the results into the Toaster frame buffer, and talks to a comm program that controls a SVHS single frame editing deck to write the frame out.

All possible, because these programs that didn't know anything about each other all supported an Arexx port.

I wish the same thing existed on Linux. Perl scripts and system() calls are not the same thing as interprocess communication. And don't get me started about that Script-Fu "scripting" that Gimp has.

#### [May 12, 2003] What I Hate About Your Programming Language

###### ONLamp.com

These are my preferences, based on the kind of work I've done and continue to do, the order in which I learned the languages, and just plain personal taste. In the spirit of generating new ideas, learning new techniques, and maybe understanding why things are done the way they're done, it's worth considering the different ways to do them.

The Pragmatic Programmers suggest learning a new language every year. This has already paid off for me. The more different languages I learn, the more I understand about programming in general. It's a lot easier to solve problems if you have a toolbox full of good tools.

... ... ...

Every language is sacred in the eyes of its zealots, but there's bound to be someone out there for whom the language just doesn't feel right. In the open source world, we're fortunate to be able to pick and choose from several high-quality and free and open languages to find what fits our minds the best.

#### Professional Programmers

...was this article really about programming in general, or a hyping of open-source software? Open-source programmers (I'm thinking of Python, Ruby, etc.) are really no better than, say, C++ programmers or Java programmers.

Just because they use open-source software solutions and technologies does not mean they have any better grasp of programming concepts and the tricks of the trade than those using proprietary solutions.

I consider myself to be more a teacher of programming (I am just better at that), but I don't think that someone who has been programming for years or uses open-source solutions is any more qualified a programmer than I am.

A grain of salt, posted 11 Jun 2002 by tk (Journeyer)

Though many free software programmers exhibit high quality in their work, I'll hesitate before concluding that a good way to nurture good coders is to throw them into the midst of the free community. It may well be that many people go into free software because they are already competent enough and want to contribute.

That said, I'm not sure either what's the best way to groom people into truly professional coders.

<off-topic>
An excellent (IMO) book which introduces assembly languages to complete beginners is "Peter Norton's Assembly Language Book for the IBM PC", by Peter Norton and John Socha.
</off-topic>

Kids These Days..., posted 12 Jun 2002 by goingware (Master)

I've written some stuff on this topic. Here's a sampler:

Also see the last two sections, the ones entitled "The Value of Constant Factor Optimization" and "Old School Programming" in Musings on Good C++ Style as well as the conclusion of Pointers, References and Values.

I think everyone should learn at least two architectures of assembly code (RISC and CISC), no matter what language they're programming in.

Also read University of California at Davis Professor Norman Matloff's testimony to Congress: Debunking the Myth of a Desperate Software Labor Shortage.

It happens that I have a very long resume. The reason I make it so long is that I depend on potential clients finding it via the search engines for a large portion of my business. If I just wanted to help someone understand my employability it could be considerably shorter. But in an effort to make my resume show up in a lot of searches for skills, I mention every skill keyword that I can legitimately claim to have experience in somewhere in the resume, sometimes several times. The resume is designed to appeal to buzzword hunters.

But it annoys me, I shouldn't have to do that. So my resume has an editorial statement in it, aimed squarely at the HR managers you complain about:

I strive to achieve quality, correctness, performance and maintainability in the products I write.

I believe a sound understanding and application of software engineering principles is more valuable than knowledge of APIs or toolsets. In particular, this makes one flexible enough to handle any sort of programming task.

It helps if you don't deal with headhunters or contract brokers. They're much worse than most HR managers for only attempting to place people that match a buzzword search in a database rather than understanding someone's real talent. Read my policy on recruiters and contract agencies.

It's generally easier to get smaller companies to take real depth seriously than the larger companies. One reason for this is that they are too small to employ HR managers, so the person you're talking to is likely to be another engineer. My first jobs writing retail Macintosh software, Smalltalk, and Java were gotten at small companies where the person I contacted first at the company was an engineer.

If you're looking for permanent employment, many companies post their openings on their own web pages. I give some tips on locating these job postings via search engines on this page.

If you're a consultant like me, and you're fed up with the body shops, may I suggest you read my article Market Yourself - Tips for High-Tech Consultants.

I've been consulting full-time for over four years, and I've only taken one contract through a broker. I've actually bent my own rules and tried to find other work through the body shops, but they have been useless to me. I've had far better luck finding work on my own, through the web, and through referrals from friends and former coworkers.

#### Programming Language Critiques

The first incarnation of this page was started by John W.F. McClain at MIT. He took it with him when he moved to Loral, but was unable to update and maintain it there, so I offered to take it over.

In John's original page, he said:

Computer programmers create new languages all the time (often without even realizing it.) My hope is this collection of critiques will help raise the general quality of computer language design.

#### The Future of Programming

###### DDJ

Predicting the future is easier said than done, and yet, we persist in trying to do it. As futile as it may seem to forecast the future of programming, if we're going to try, it's helpful to recognize certain fundamental characteristics of programming and programmers. We know, for example, that programming is hard. We know that the industry is driven by the desire to make programming easier. And we know, as Perl creator Larry Wall has often observed, that programmers are lazy, impatient, and excessively proud.

This first condition formed the basis of Frederick Brooks's classic text on software engineering, The Mythical Man Month (Addison-Wesley, 1995; ISBN 0201835959) first published in 1975, where he wrote:

As we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity.

Brooks's prediction was dire and, unfortunately, accurate. There was no silver bullet, and as far as we can tell, there never will be. However, programming is undoubtedly easier today than it was in the past, and the latter two principles of programming and programmers explain why. Programming became easier because the software industry was motivated to make it so, and because lazy and impatient programmers wouldn't accept anything less. And there is no reason to believe that this will change in the future.

#### FORTRANSIT -- the 650 Processor that made FORTRAN

The FORTRANSIT story is covered in the Annals of the History of Computing [4, 5], but an additional and more informal slant doesn't hurt.

#### The historical development of Fortran in Fortran 90 for the Fortran 77 Programmer by Bo Einarsson and Yurij Shokin.

The following simple program, which uses many different and common programming concepts, is based on "The Early Development of Programming Languages" by Donald E. Knuth and Luis Trabb Pardo, published in "A History of Computing in the Twentieth Century", edited by N. Metropolis, J. Howlett and Gian-Carlo Rota, Academic Press, New York, 1980, pp. 197-273. They gave an example program in Algol 60 and translated it into some very old languages, such as Zuse's Plankalkül, Goldstine's Flow diagrams, Mauchly's Short Code, Burks' Intermediate PL, Rutishauser's Klammerausdrücke, Bohm's Formules, Hopper's A-2, Laning and Zierler's Algebraic interpreter, Backus' FORTRAN 0 and Brooker's AUTOCODE.

Klammerausdrücke is a German expression; we keep it in the Russian and English versions as well. A direct English translation is "bracket expressions". FORTRAN 0 was never really called FORTRAN 0; it is just the very first version of Fortran.

The program is given here in Pascal, C, and five variants of Fortran. The purpose is to show how Fortran has developed from a cryptic, almost machine-dependent language into a modern, structured, high-level programming language.

The final example shows the program in the new programming language F.

#### [Sept 8, 2001] Lisp as an Alternative to Java

Introduction

In a recent study [1], Prechelt compared the relative performance of Java and C++ in terms of execution time and memory utilization. Unlike many benchmark studies, Prechelt compared multiple implementations of the same task by multiple programmers in order to control for the effects of differences in programmer skill. Prechelt concluded that, "as of JDK 1.2, Java programs are typically much slower than programs written in C or C++. They also consume much more memory."

We have repeated Prechelt's study using Lisp as the implementation language. Our results show that Lisp's performance is comparable to or better than C++ in terms of execution speed, with significantly lower variability, which translates into reduced project risk. Furthermore, development time is significantly lower and less variable than either C++ or Java. Memory consumption is comparable to Java. Lisp thus presents a viable alternative to Java for dynamic applications where performance is important.

Conclusions

Lisp is often considered an esoteric AI language. Our results suggest that it might be worthwhile to revisit this view. Lisp provides nearly all of the advantages that make Java attractive, including automatic memory management, dynamic object-oriented programming, and portability. Our results suggest that Lisp is superior to Java and comparable to C++ in terms of runtime, and superior to both in terms of programming effort, and variability of results. This last item is particularly significant as it translates directly into reduced risk for software development.

#### Slashdot Lisp as an Alternative to Java

There is more data available for other languages.. (Score:4, Interesting)
by crealf on Saturday September 08, @07:53AM (#2266890)
(User #414283 Info)

The article about Lisp is a follow-up of an article by Lutz Prechelt in CACM99 (a draft [ira.uka.de] is available on his page along with other articles).

However, there is more data now: Prechelt himself widened the study and published in 2000 "An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl" [ira.uka.de] (a detailed technical report is here [ira.uka.de]).

From the developer's point of view, Python and Perl work times are similar to those of Lisp, along with program sizes.
Of course, from the speed point of view, none of the scripting languages in the test could compete with Lisp.

Anyway, some other articles by Prechelt [ira.uka.de] are interesting too (as are many other research papers; found via CiteSeer [nec.com], for instance).

Smalltalk a better alternative to Java (Score:1, Interesting)
by Anonymous Coward on Saturday September 08, @08:33AM (#2266985)

In my opinion Smalltalk makes a much better alternative to Java.

Smalltalk has all the trappings--a very rich set of base classes, byte-coded, garbage collected, etc.

There are many Smalltalks out there...Smalltalk/X is quite good, and even has a Smalltalk-to-C compiler to boot. It's not totally free, but pretty cheap (and I believe for non-commercial use everything works but the S-to-C compiler).

Squeak is an even better place to start... it is highly portable (more so than Java), very extensible (thanks to VM plugins), and has a very active community that includes Alan Kay, the man who INVENTED the term "object-oriented programming". Squeak has a just-in-time compiler (JITTER), support for multiple front-ends, and can be tied to any kind of external libraries and DLLs. It's not GPL'd, but it is free under an old Apple license (I believe the only issue is with the fonts... they are still Apple fonts). It's already been ported to every platform I've ever seen, including the iPaq (both WinCE and Linux). It runs even on STOCK iPaqs (i.e., 32MB) without any expansion... Java, from what I understand, still has big problems just running on the iPaq, not to mention unexpanded iPaqs.

And of course, we can't forget about old GNU Smalltalk, which is still seeing development.

Smalltalk is quite easy to learn--you can just pick up the old "Smalltalk-80: The Language" (Goldberg) and work right from there. Squeak already has two really good books that have just come into print (go to Amazon and search for Mark Guzdial).

(this is not meant as a language flame...I'm just throwing this out on the table, since we're discussing alternatives to Java. Scheme/LISP is a cool idea as well, but I think Smalltalk deserves some mention.)

I've written 2 Lisp and 4 Java books (Score:3, Informative)
by MarkWatson on Saturday September 08, @09:56AM (#2267225)
(User #189759 Info)

First, great topic!

I have written 2 Lisp books for Springer-Verlag and 4 Java books, so you bet that I have an opinion on my two favorite languages.

First, given free choice, I would use Common LISP for most of my development work. Common LISP has a huge library and is a very stable language. Although I prefer Xanalys LispWorks, there are also good free Common LISP systems.

Java is also a great language, mainly because of the awesome class libraries and the J2EE framework (I am biased here because I am just finishing up writing a J2EE book).

Peter Norvig once made a great comment on Java and Lisp (roughly quoting him): Java is only half as good as Lisp for AI but that is good enough.

Anyway, I find that both Java and Common LISP are very efficient environments to code in. I only use Java for my work because that is what my customers want.

BTW, I have a new free web book on Java and AI on my web site - help yourself!

Best regards,

Mark

-- www.markwatson.com -- Open Source and Content

Why Java succeeded, LISP can't make headway now (Score:5, Informative)
by joneshenry on Saturday September 08, @10:44AM (#2267438)
(User #9497 Info)

Java was never marketed as the ultimate fast language for searching or for manipulating large data structures. What Java was marketed as was a language that was good enough for the programming paradigms popular at the time, such as object orientation and automatic garbage collection, while providing the most comprehensive APIs under the control of one entity who would continue to push the extension of those APIs.

In this LinuxWorld interview [linuxworld.com], look at what Stroustrup is hoping to someday have in the C++ standard for libraries. It's a joke: almost all of those features are already in Java. As Stroustrup says, a standard GUI framework is not "politically feasible".

Now go listen to what Linus Torvalds is saying [ddj.com] about what he finds to be the most exciting thing to happen to Linux in the past year. Hint: it's not the completion of the 2.4.x kernel, it's KDE. The foundation of KDE's success is the triumph of Qt as the de facto standard that a large community has embraced to build an entire reimplementation of end-user applications.

To fill the void of a standard GUI framework for C++, Microsoft has dictated a set of de facto standards for Windows, and Trolltech has successfully pushed Qt as the de facto standard for Linux.

I claim that as a whole the programming community doesn't care whether a standard is de jure or de facto, but they do care that SOME standard exists. When it comes to talking people into making the investment of time and money to learn a platform on which to base their careers, a multitude of incompatible choices is NOT the way to market.

I find talking about LISP as one language compared to Java to be a complete joke. Whose LISP? Scheme? Whose version of Scheme, GNU's Guile? Is the Elisp in Emacs the most widely distributed implementation of LISP? Can Emacs be rewritten using Guile? What is the GUI framework for all of LISP? Anyone come up with a set of LISP APIs that are the equivalent of J2EE or Jini?

I find it extremely disheartening that the same people who can grasp the argument that the value of networks lies in the communication people can do are incapable of applying the same reasoning to programming languages. Is it that hard to read Odlyzko [umn.edu] and not see that people just want to do the same thing with programming languages--talk among themselves. The modern paradigm for software where the money is being made is getting things to work with each other. Dinosaur languages that wait around for decades while slow bureaucratic committees create nonsolutions are going to get stomped by faster moving mammals such as Java pushed by single-decision vendors. And so are fragmented languages with a multitude of incompatible and incomplete implementations such as LISP.

Some hopefully useful points (Score:2, Informative)
by dlakelan (qynxryna@lnu-spam-bb.pbz) on Saturday September 08, @02:20PM (#2268461)
(User #43245 Info | http://www.endpointcomputing.com)

First off, one of the best spokespersons for Lisp is Paul Graham, author of "On Lisp" and "ANSI Common Lisp". His web site is Here [paulgraham.com].

Reading through his articles [paulgraham.com] will give you a better sense of what lisp is about. One that I'd like to see people comment on is: java's cover [paulgraham.com] ... It resonates with my experience as well. Also This response [paulgraham.com] to his java's cover article succinctly makes a good point that covers most of the bickering found here...

I personally think that the argument that Lisp is not widely known, and therefore not enough programmers exist to support corporate projects is bogus. The fact that you can hire someone who claims to know C++ does NOT in any way shape or form mean that you can hire someone who will solve your C++ programming problem! See my own web site [endpointcomputing.com] for more on that.

I personally believe that if you have a large C++ program you're working on and need to hire a new person or a replacement who already claims to know C++, the start-up cost for that person is the same as if you have a Lisp program doing the same thing and need to hire someone AND train them to use Lisp. Why? The training more than pays for itself, because it gives the new person a formal introduction to your project, and Lisp is a more productive system than C++ for most tasks. Furthermore, it's quite likely that the person who claims to know C++ doesn't know it as well as you would like, and therefore the fact that you haven't formally trained them on your project is a cost you aren't considering.

One of the points that the original article by the fellow at NASA makes is that Lisp turned out to have a very low standard deviation of run-time and development time. What this basically says is that the lisp programs were more consistent. This is a very good thing as anyone who has ever had deadlines knows.

Yes, the JVM version used in this study is old, but let's face it: that would affect the average, but wouldn't affect the standard deviation much. Java programs are more likely to be slow, as are C++ programs!
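The arithmetic behind that claim is worth making explicit: if the older JVM adds a roughly constant per-run overhead, the mean shifts but the standard deviation is untouched. A minimal sketch with made-up runtimes (illustrative numbers, not data from the study):

```python
import statistics

# Hypothetical per-program runtimes (seconds) on a modern JVM.
fast_jvm = [2.0, 3.5, 2.5, 4.0, 3.0]

# An older JVM modeled as a constant 5-second overhead per run.
old_jvm = [t + 5.0 for t in fast_jvm]

# The mean shifts by exactly the overhead...
print(statistics.mean(fast_jvm))  # 3.0
print(statistics.mean(old_jvm))   # 8.0

# ...but the spread (the "consistency" measure) is unchanged.
print(statistics.stdev(fast_jvm) == statistics.stdev(old_jvm))  # True
```

If the slowdown were multiplicative rather than additive, the standard deviation would scale too, so the claim holds only for the constant-overhead case.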

The point about lisp being a memory hog that a few people have made here is invalid as well. The NASA article states:

Memory consumption for Lisp was significantly higher than for C/C++ and roughly comparable to Java. However, this result is somewhat misleading for two reasons. First, Lisp and Java both do internal memory management using garbage collection, so it is often the case that the Lisp and Java runtimes will allocate memory from the operating system that is not actually being used by the application program.

People here have interpreted this to mean that the system is a memory hog anyway. In fact many lisp systems reserve a large chunk of their address space, which makes it look like a large amount of memory is in use. However the operating system has really just reserved it, not allocated it. When you touch one of the pages it does get allocated. So it LOOKS like you're using a LOT of memory, but in fact because of the VM system, you are NOT using very much memory at all.
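The reserve-versus-commit distinction described above can be observed directly on a Unix system. The sketch below (my own illustration, not from the article) maps a large anonymous region and shows that resident memory grows only once pages are actually touched; exact numbers vary by OS:

```python
import mmap
import resource

def peak_rss():
    # Peak resident set size: KB on Linux, bytes on macOS.
    # We only compare deltas, so the unit doesn't matter here.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

SIZE = 256 * 1024 * 1024  # ask for 256 MB of anonymous memory

before = peak_rss()
region = mmap.mmap(-1, SIZE)   # address space reserved, pages not yet committed
after_reserve = peak_rss()

# Touch half the pages: only now does the OS commit physical memory.
for offset in range(0, SIZE // 2, 4096):
    region[offset] = 1
after_touch = peak_rss()

print(after_reserve - before)  # small: reservation is nearly free
print(after_touch - before)    # large: roughly 128 MB of pages committed
region.close()
```

This is exactly why tools like top can show a huge virtual size for a Lisp or Java process whose actual resident footprint is much smaller.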

The biggest reasons people don't use Lisp are they either don't understand Lisp, or have been forced by clients or supervisors to use something else.

Interesting, but flawed? (Score:5, Insightful)
by tkrotchko on Saturday September 08, @07:41AM (#2266864)
(User #124118 Info | http://www.toad.net/~tomk)

It's interesting to see the results of a short study, even though the author admits to the flaw in his methodology (primarily that the subjects were self-chosen). Still, I don't think that's a fatal flaw, and I think his results do have some validity.

However, I think the author misses a more important issue: development involving a single programmer for a relatively small task isn't the point for most organizations. Maintainability and a large pool of potential developers (for example) are a significant factor in deciding what language to use. LISP is a fabulous language, but try to find 10 programmers at a reasonable price in the next 2 weeks. Good luck.

Also, while initial development time is important, typically testing/debug cycles are the costly part of implementation, so that's what should weigh on your mind as the area where the most gains can be made. Further, large projects are collaborative efforts, so the objects and libraries available for a particular language play a role in how quickly you can produce quality code.

As an aside, it would've been interesting to see the same development done by an experienced Visual Basic programmer. My guess is he/she would have the shortest development cycle, and yet it wouldn't be my first choice for a large-scale development project (although, at the risk of being flamed, it's not a bad language for just banging out a quick set of tools for my own use).

Some of the things I believe are more important when thinking about a programming language:

1) Amenable to use by team of programmers
2) Viability over a period of time (5-10 years).
3) Large developer base
4) Cross-platform - not because I think cross-platform is a good thing by itself; rather, I think it's important to avoid being locked in to a single hardware or operating system vendor.
5) Mature IDE, debugging tools, and compilers.
6) Wide applicability

Computer languages tend to develop in response to specific needs, and most programmers will probably end up learning 5-10 languages over the course of their careers. It would be helpful to have a discussion of the appropriate roles for certain computer languages, since I'm not sure any computer language is better than any other.

Perhaps not quite as illuminating as it appears (Score:1)
by ascholl (ascholl-at-max(dot)cs(dot)kzoo(dot)edu) on Saturday September 08, @07:53AM (#2266888)
(User #225398 Info)

The study does show an advantage of Lisp over Java/C/C++ -- but only for small problems that depend heavily on the types of tasks Lisp was designed for. The author recognizes the second problem ("It might be because the benchmark task involved search and managing a complex linked data structure, two jobs for which Lisp happens to be specifically designed and particularly well suited.") but doesn't even mention the first.

While I haven't seen the example programs, I suspect that the reason the Java versions performed poorly time-wise was probably directly related to object instantiation. Instantiating an object is a pretty expensive task in Java; typical 'by the book' methods would involve instantiating new numbers for every collection of digits, word, digit/character-set representation, etc. The performance cost of instantiation can be reduced dramatically by reusing program-wide collections of commonly used objects, but the effect would only be seen on large inputs. Since the example input was much smaller than the actual test case, it seems likely that the programmers may have neglected to include this functionality.
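The object-reuse idea the poster sketches is the classic flyweight/object-pool pattern. A hypothetical Python illustration of the technique (the Digit class is my own invention, not code from the study):

```python
class Digit:
    """A flyweight: at most ten Digit instances ever exist."""
    _pool = {}

    def __new__(cls, value):
        # Hand back the cached instance instead of allocating a new one.
        cached = cls._pool.get(value)
        if cached is None:
            cached = super().__new__(cls)
            cached.value = value
            cls._pool[value] = cached
        return cached

# Naive code would allocate a fresh object per digit of input;
# the pool returns the same instance every time.
assert Digit(7) is Digit(7)
assert len({id(Digit(d)) for d in range(10) for _ in range(1000)}) == 10
```

The benchmark-relevant point is that 10,000 "constructions" above perform only ten real allocations, which is why the effect only shows up on large inputs.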
Hypothesizing about implementation aside, the larger question is one of problem scope. If you're going to claim that language A is better than language B, you probably aren't concerned about tiny (albeit non-trivial) problems like the example. Now, I don't know whether this is true, but it seems possible that a large project implemented in Java or C/C++ might be built quicker, be easier to maintain, and be less fragile than its equivalent in Lisp. It may even perform better. It's not fair to assume blindly that the advantages of Lisp seen in this study will scale up. I'm not claiming that they don't... but still. If we're choosing a language for a task, this should be a primary consideration.

##### Here is another relevant view, which explains that advocacy of a particular language may have little in common with the desire to innovate. Most people simply hate to be wrong after they have made their (important and time-consuming) choice ;-)
###### Slashdot

Nobody wants to be obsolete (Score:2, Interesting)
by e4 on Thursday December 14, @12:27PM EST (#102)
(User #102617 Info) http://www.razorlist.com

I think one of the biggest reasons for language advocacy (/OS advocacy/DB advocacy/etc.) is that we have a vested interest in "our" language succeeding. Each of us has worked hard to learn the subtleties and intricacies of [language X], and if something else comes along that's better, we're suddenly newbies again. That hard-won expertise doesn't carry much weight if [language Y] makes it easy for "any idiot" to accomplish and/or understand what took you a week to figure out.

We start trying to come up with reasons why it's not really better: It doesn't give you enough control; it's not as efficient; it has fewer options...

PC vs. Mac. BSD vs. Mac. Mainframe vs. client-server. Command line vs. GUI. How many people were a little saddened to see MS-DOS fading into the mist, not because it was a great tool, but because they knew how to use it?

A language advocate needs [language X] to succeed, to be dominant, to be the best, because he has more status and more useful knowledge that way.

Bottom line, it's an ego thing.

#### [Sep 02, 2000] Programming Languages of Choice

##### What is it that you like about programming languages? What is it that you hate? What did you start on? What do you find yourself coding with most often today? Has your choice of programming languages affected other choices in software? (E.g., Lisp hackers tend to gravitate toward emacs, whereas others go to vi.)

It is quite interesting to me how much influence programming languages have on the way programmers think about how they do things. One example, from one perspective, is this: if you didn't know that most UNIXen were implemented in C, would you be able to tell? If so, why or why not? What are the different properties that UNIX has that make it pretty obvious that it wasn't written by somebody programming in a functional language, or in an object-oriented language (or style)?

... ... ...

One of the responses:

My favorite language is Chez Scheme for two reasons: syntactic abstraction and control abstraction.

Syntactic abstraction is macros. As opposed to other implementations of Scheme, Chez Scheme in my opinion has the best story on macros, and its macro system is among the most powerful I have seen.

Control abstraction is the power to add new control operations to your language. For example, backtracking and coroutines. More esoterically, monads in direct-style code. Control abstraction boils down to first-class continuations (call/cc). With the single exception of SML/NJ, no other language I know of has call/cc.

I know I will be using Scheme for years to come, and my company will also continue to use it in its systems. We code a lot in C++ and Delphi, but the Real Hard Stuff(tm) is done in Scheme, because macros and continuations are big hammers. Despite Scheme being over 20 years old, and despite demonstrated, efficient implementations of these "advanced" language concepts, I don't see new language designs adopting these features from Scheme. I hope this changes.
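For readers who haven't seen control abstraction in action: full call/cc doesn't map onto most mainstream languages, but Python's generators provide a restricted, one-shot form of the same idea. A sketch of backtracking search (one of the control operations mentioned above) written as a suspendable generator; this is an illustration of the concept, not Scheme-equivalent code:

```python
def place_queens(n, cols=()):
    """Backtracking n-queens: lazily yield each complete placement.

    cols[r] is the column of the queen on row r.
    """
    row = len(cols)
    if row == n:
        yield cols
        return
    for col in range(n):
        # Prune columns and diagonals already attacked.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            # The generator suspends at each yield and resumes the search
            # exactly where it left off -- coroutine-style control flow.
            yield from place_queens(n, cols + (col,))

first = next(place_queens(6))            # (1, 3, 5, 0, 2, 4)
total = sum(1 for _ in place_queens(6))  # 6-queens has exactly 4 solutions
print(first, total)
```

With call/cc the suspension point could be resumed more than once, which is what makes the Scheme version strictly more powerful; generators give you only the one-shot, caller-driven case.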

#### [Aug 2, 1999] Turbo Vision Salvador Eduardo Tropea (SET) - June 11th 1999, 05:33 EST

Turbo Vision provides a very nice user interface (comparable with the very well-known GUIs), but only for console applications. This UNIX port is based on Borland's version 2.0, with fixes, and was made to create RHIDE (a nice IDE for gcc and other GNU compilers). The library supports /dev/vcsa devices for speed-up and ncurses to run from telnet and xterm. This port, in contrast to Sigala's port, doesn't have "100% compatibility with the original library" as a goal; instead, we modified a lot of code in favor of security (especially against buffer overflows). The port is also available for the original platform (DOS).

#### Programming Languages: Design and Implementation (Third edition)

The following have made material available related to the book Programming Languages: Design and Implementation (Third edition) by Terrence W. Pratt and Marvin Zelkowitz (Prentice-Hall, 1995).

Etc.

## Classic Papers

#### The Rise of "Worse is Better"

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase "the right thing." To such a designer it is important to get all of the following characteristics right:

• Simplicity -- the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
• Correctness -- the design must be correct in all observable aspects. Incorrectness is simply not allowed.
• Consistency -- the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
• Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the "MIT approach." Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different:

• Simplicity -- the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
• Correctness -- the design must be correct in all observable aspects. It is slightly better to be simple than correct.
• Consistency -- the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
• Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the "New Jersey approach." I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

#### Worse Is Better, by Richard P. Gabriel

The concept known as "worse is better" holds that in software making (and perhaps in other arenas as well) it is better to start with a minimal creation and grow it as needed. Christopher Alexander might call this "piecemeal growth." This is the story of the evolution of that concept.

From 1984 until 1994 I had a Lisp company called "Lucid, Inc." In 1989 it was clear that the Lisp business was not going well, partly because the AI companies were floundering and partly because those AI companies were starting to blame Lisp and its implementations for the failures of AI. One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, "because, well, worse is better." We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.

A few months later, in Summer 1989, a small Lisp conference called EuroPAL (European Conference on the Practical Applications of Lisp) invited me to give a keynote, probably since Lucid was the premier Lisp company. I agreed, and while casting about for what to talk about, I gravitated toward a detailed explanation of the worse-is-better ideas we joked about as applied to Lisp. At Lucid we knew a lot about how we would do Lisp over to survive business realities as we saw them, and so the result was called "Lisp: Good News, Bad News, How to Win Big." [html] (slightly abridged version) [pdf] (has more details about the Treeshaker and delivery of Lisp applications).

I gave the talk in March, 1990 at Cambridge University. I had never been to Cambridge (nor to Oxford), and I was quite nervous about speaking at Newton's school. There were about 500-800 people in the auditorium, and before my talk they played the Notting Hillbillies over the sound system - I had never heard the group before, and indeed, the album was not yet released in the US. The music seemed appropriate because I had decided to use a very colloquial American style of writing in the talk, and the Notting Hillbillies played a style of music heavily influenced by traditional American music, though they were a British band. I gave my talk with some fear since the room was standing room only, and at the end, there was a long silence. The first person to speak up was Gerry Sussman, who largely ridiculed the talk, followed by Carl Hewitt who was similarly none too kind. I spent 30 minutes trying to justify my speech to a crowd in no way inclined to have heard such criticism - perhaps they were hoping for a cheerleader-type speech.

I survived, of course, and made my way home to California. Back then, the Internet was just starting up, so it was reasonable to expect not too many people would hear about the talk and its disastrous reception. However, the press was at the talk and wrote about it extensively in the UK. Headlines in computer rags proclaimed "Lisp Dead, Gabriel States." In one, there was a picture of Bruce Springsteen with the caption, "New Jersey Style," referring to the humorous name I gave to the worse-is-better approach to design. Nevertheless, I hid the talk away and soon was convinced nothing would come of it.

About a year later we hired a young kid from Pittsburgh named Jamie Zawinski. He was not much more than 20 years old and came highly recommended by Scott Fahlman. We called him "The Kid." He was a lot of fun to have around: not a bad hacker and definitely in a demographic we didn't have much of at Lucid. He wanted to find out about the people at the company, particularly me since I had been the one to take a risk on him, including moving him to the West Coast. His way of finding out was to look through my computer directories - none of them were protected. He found the EuroPAL paper, and found the part about worse is better. He connected these ideas to those of Richard Stallman, whom I knew fairly well since I had been a spokesman for the League for Programming Freedom for a number of years. JWZ excerpted the worse-is-better sections and sent them to his friends at CMU, who sent them to their friends at Bell Labs, who sent them to their friends everywhere.

Soon I was receiving 10 or so e-mails a day requesting the paper. Departments from several large companies requested permission to use the piece as part of their thought processes for their software strategies for the 1990s. The companies I remember were DEC, HP, and IBM. In June 1991, AI Expert magazine republished the piece to gain a larger readership in the US.

However, despite the apparent enthusiasm by the rest of the world, I was uneasy about the concept of worse is better, and especially with my association with it. In the early 1990s, I was writing a lot of essays and columns for magazines and journals, so much so that I was using a pseudonym for some of that work: Nickieben Bourbaki. The original idea for the name was that my staff at Lucid would help with the writing, and the single pseudonym would represent the collective, much as the French mathematicians in the 1930s used "Nicolas Bourbaki" as their collective name while rewriting the foundations of mathematics in their image. However, no one but I wrote anything under that name.

In the Winter of 1991-1992 I wrote an essay called "Worse Is Better Is Worse" under the name "Nickieben Bourbaki." This piece attacked worse is better. In it, the fiction was created that Nickieben was a childhood friend and colleague of Richard P. Gabriel, and as a friend and for Richard's own good, Nickieben was correcting Richard's beliefs.

In the Autumn of 1992, the Journal of Object-Oriented Programming (JOOP) published a "rebuttal" editorial I wrote to "Worse Is Better Is Worse" called "Is Worse Really Better?" The folks at Lucid were starting to get a little worried because I would bring them review drafts of papers arguing (as me) for worse is better, and later I would bring them rebuttals (as Nickieben) against myself. One fellow was seriously nervous that I might have a mental disease.

In the middle of the 1990s I was working as a management consultant (more or less), and I became interested in why worse is better really could work, so I was reading books on economics and biology to understand how evolution happened in economic systems. Most of what I learned was captured in a presentation I would give back then, typically as a keynote, called "Models of Software Acceptance: How Winners Win," and in a chapter called "Money Through Innovation Reconsidered," in my book of essays, "Patterns of Software: Tales from the Software Community."

You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled "Back to the Future: Is Worse (Still) Better?" And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called "Back to the Future: Is Worse (Still) Better?" In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called "Back to the Future: Worse (Still) is Better!" which was in favor of it. I still can't decide. Martine combined the two papers into the single position paper for the panel, and during the panel itself, run as a fishbowl, participants routinely shifted from the pro-worse-is-better side of the table to the anti-side. I sat in the audience, having lost my voice giving my Mob Software talk that morning, during which I said, "risk-taking and a willingness to open one's eyes to new possibilities and a rejection of worse-is-better make an environment where excellence is possible. Xenia invites the duende, which is battled daily because there is the possibility of failure in an aesthetic rather than merely a technical sense."

Decide for yourselves.

## Education

• Paul Hsieh's Beginners Advice Page

... In going over the list above, the only things I did that I regret, or feel had no value, are learning Fortran, corporate politics, and Prolog. So, given the relatively little wasted effort, I feel compelled to recommend that newbies learn along a path similar to mine. However, what is not shown in this list is that I have a strong mathematical background and that I thoroughly enjoy programming, which I view as much as a creative process as a mechanical one.

However, the average industry programmer is not necessarily like me, nor should they necessarily desire to be. The way computers have consumed me is not something I have seen in many other people. To many, if not most, people, computer programming is not a creative process at all; it is merely a means of getting a paycheck. In the short term, anyone bright enough to pick up a University/College degree need not do much more than that to find a job in the computer industry these days.

For those people I suggest you enter a University/College which can prepare you not only for a career in programming, but also for a career in something else, should you find a passion for something other than staring into a phosphor screen 9 to 5. If you've already done that, then go see your family, career counselor, therapist, whatever. You're grown up; you can figure out what kind of a job you want, can't you?

But for those who want to get into programming, and I mean seriously into programming, for those who feel drawn toward computer programming, I can make some recommendations. First, this industry is a young one, moving and changing very quickly. Picking up one particular language that seems popular right now may not make any sense in 5-10 years. Starting the way I started is, I feel, the best recommendation I can make.

 BASIC -- the first language to learn

This is a very contentious issue on USENET, but I do strongly suggest beginners pick up the BASIC programming language first. There are many reasons why other people advocate learning other languages, but I feel that they are generally misguided. It is impossible to re-instill into a seasoned programmer the idea that learning to program from scratch is not a trivial thing. The deeper concepts in most other languages are totally beyond anything a beginner has ever experienced, and beginners have no hope of assimilating them, given that they have no idea what the motivation behind them is.

Concepts such as scope, data types, pointers, modularity, and dynamic memory allocation have no meaning to someone who isn't already familiar with some fundamental programming issues. These are all, in a sense, meta-programming issues which the beginner could not possibly truly appreciate when they first pick up a language. What if it turns out the potential student is unsure of themselves and needs to decide whether they can or cannot hack programming? Snowing the beginner under these concepts will only serve to encourage them to give it up.

Programming is not about following rules, structure, or design; that's the job of your compiler or syntax checker (like lint). Programming is about giving instructions to your computer and making it follow them. It's about being as creative as you can possibly be with your computer. So many programmers easily forget this as they expound on wonderful things like object-oriented programming, garbage collection, portability, and all sorts of other nonsense. Trying to feed this to a beginner is going to warp how they think a computer works. And knowing how a computer works is very important, far more important than how Simonyi, Stroustrup, Ritchie, or Kernighan think you should write code.

BASIC provides a simple syntax with some simple rules. If you can't master BASIC in a very short amount of time, you can be pretty sure that programming is not for you. But just because someone can't grok templates, classes, linked lists, or whatever on their first outing with programming doesn't mean they couldn't handle those concepts with proper pre-motivation. That pre-motivation can only exist if the beginner has a good idea how to program his/her computer in the first place. Teaching them C, C++, etc., turns it into a chicken-and-egg problem, where they might know the solution but have no idea why things are the way they are, and consequently are unable to re-apply that motivated thinking to future programming problems for which they won't have a book to refer to.

BASIC also gives you a lot of room to play with. As shunned as it is in this industry, you can actually do some nifty things in it. Another important thing is that most BASICs have machine-specific extensions for doing rudimentary graphics and sound. The positive feedback people get from having, in some sense, absolute control over the fundamental user-interactive features of their computer (display and audio output) is completely unseen in languages such as C, Pascal, COBOL, FORTRAN, or other candidate beginner languages. Learning graphics in those languages is considered an "advanced concept" because it takes you away from the fundamentals of those languages, whose base syntax is oriented toward managing databases, spreadsheets, or doing complex mathematical calculations.

Finally, as one last attempt to convince you: BASIC itself has no other role than to be a programmer's first language. It is not powerful enough to be a real programmer's tool. It lets you get your feet wet, and it is a reasonable balance between high-level and low-level programming concepts. From BASIC, the beginner is meant to springboard in another direction, and should be able to no matter what second language they choose.

Once a beginner is convinced to start programming in BASIC, the learning process, for most, almost takes care of itself. Show someone some simple concepts of BASIC programming, and if they have any aptitude for it at all, they should be able to run with it on their own for a reasonable amount of time. Take me, for example: I started in the middle ages of computing, when there weren't any instructors to teach you how to program. I had a manual, some magazines, and a little bit of a push from the local guru (as well as a very inspirational TV program called Bits and Bytes). After that, I was on my own. But I was in bliss, because I had what I needed to tame the computer. I could make it do what I wanted, and it was just up to my own ingenuity to make whatever I wanted the computer to do a reality.

Of course, after convincing you of this, the question arises of what the second language to learn should be. This is a hard question. To get yourself up to a respectable level of programming expertise, I claim that you need the minimalist concepts derived from assembly language as well as the high-level data structures and meta-programming techniques you can learn from Pascal, C, C++, or Ada. I don't think I can wholeheartedly recommend learning one before the other, so I would instead suggest that you learn assembly and C as your second and third languages, though not necessarily in that order. I think the benefits of learning high-level programming structures in a language like C are not worth debating, as they are so self-evident. But it might not be completely obvious why I consider assembly language so important. I justify my position in the next section.
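For what it's worth, the machine-level intuition the author credits to assembly can be illustrated even in C: a pointer is just an address, and array indexing is the same base-plus-offset addressing an assembly programmer writes by hand. A minimal sketch (the function name is invented for this example):

```c
#include <stddef.h>

/* Assembly-level view expressed in C: an array name decays to the
   address of its first element, and arr[i] is literally *(arr + i),
   i.e. base-plus-offset addressing.  Walking a pointer from the base
   address to one-past-the-end is the loop an assembler programmer
   would write with an address register and an increment. */
int sum_with_pointer_arithmetic(const int *arr, int n)
{
    int total = 0;
    const int *p = arr;        /* p holds an address, nothing more   */
    const int *end = arr + n;  /* one past the last element          */
    while (p < end)
        total += *p++;         /* load, add, advance the address     */
    return total;
}
```

Nothing here depends on the compiler being clever; the point is only that the high-level notation and the machine model coincide, which is the bridge between C and assembly the passage above argues for.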

• Paul Hsieh's Goto Page

Paul Nettle (a programmer at Terminal Reality) recently pointed out some things written (by Microsofties, no less) about the use of goto and the debate that so commonly ensues about it. Here's an excerpt of what he posted to the rec.games.programmer newsgroup:

Here's what "Writing Solid Code" (p. xxii) has to say on the subject of the goto statement:

---[excerpt begins]---

That's not to say that you should blindly follow the guidelines in this book. They aren't rules. Too many programmers have taken the guideline "Don't use goto statements" as a commandment from God that should never be broken. When asked why they're so strongly against gotos, they say that using goto statements results in unmaintainable spaghetti code. Experienced programmers often add that goto statements can upset the compiler's code optimizer. Both points are valid. Yet there are times when the judicious use of a goto can greatly improve the clarity and efficiency of the code. In such cases, clinging to the guideline "Don't use goto statements" would result in worse code, not better.

---[excerpt ends]---

And here's what "Code Complete" (p. 349) has to say on the subject:

---[excerpt begins]---

The Phony goto Debate

A primary feature of most goto discussions is a shallow approach to the question. The arguer on the "gotos are evil" side usually presents a trivial code fragment that uses gotos and then shows how easy it is to rewrite the fragment without gotos. This proves mainly that it's easy to write trivial code without gotos.

---[excerpt ends]---

Indeed, it always surprises me how quickly people are willing to regurgitate the age-old argument against the use of goto. In my early days as a programmer, I started out writing BASIC spaghetti code that overused gotos as a matter of course. Then I went to university, where I was told never to use goto (or suffer the wrath of the TAs' grading penalties). Indeed, I trained myself to stop using gotos, and I've never come across an algorithm that I couldn't somehow implement without them.
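The "judicious use of a goto" that both books defend is, in C, most commonly the single-exit cleanup chain on error paths. A minimal sketch of that pattern (the function name and file are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read the first n bytes of a file into a freshly allocated,
   NUL-terminated buffer.  Each failure jumps to the label that
   releases exactly the resources acquired so far, so there is one
   cleanup path instead of duplicated free/fclose calls in every
   error branch. */
int copy_first_bytes(const char *path, char **out, size_t n)
{
    int rc = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        goto done;            /* nothing acquired yet               */

    buf = malloc(n + 1);
    if (buf == NULL)
        goto close_file;      /* the file must still be closed      */

    if (fread(buf, 1, n, f) != n)
        goto free_buf;        /* buffer and file both need releasing */

    buf[n] = '\0';
    *out = buf;
    buf = NULL;               /* ownership passed to the caller      */
    rc = 0;

free_buf:
    free(buf);                /* free(NULL) is a no-op on success    */
close_file:
    fclose(f);
done:
    return rc;
}
```

Rewriting this with nested ifs forces either deeply indented success code or repeated cleanup in each branch, which is the "clarity and efficiency" trade-off the excerpt from Writing Solid Code is pointing at.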

• A List of Courses in Principles and Implementation of Programming Languages (this has many links to compiler construction courses; maintained by Giorgio Ingargiola)
• Undergraduate Computer Science Education Links (maintained by Renée McCauley)
• Resources for University Teaching (maintained by Steve Beaty)
• Functional programming languages in education
• Teaching resources in the object-oriented information catalog
• OOPSLA '97 Workshop Reuse in the Classroom with information about various courses teaching OO programming.
• comp.edu newsgroup
• Debunking the Myth of a Desperate Software Labor Shortage

As explained in Sec. 5.9, an experienced programmer CANNOT get a job using a new skill by taking a course in that skill; employers demand actual work experience. So, how can one deal with this Catch-22 situation?

The answer is, sad to say, that you should engage in frequent job-hopping. Note that the timing is very delicate, with the windows of opportunity usually being very narrow, as seen below.

Suppose you are currently using programming language X, but you see that X is beginning to go out of fashion, and a new language (or OS or platform, etc.) Y is just beginning to come on the scene. The term "just beginning" is crucial here; it means that Y is so new that almost no one has work experience in it yet. At that point you should ask your current employer to assign you to a project which uses Y and let you learn Y on the job. If your employer is not willing to do this, or does not have a project using Y, then find another employer who uses both X and Y, and who will thus be willing to hire you on the basis of your experience with X alone, since very few people have experience with Y yet.

Clearly, if you wait too long to make such a move, so that there are already people with work experience in the skill, the move will be nearly impossible. As one analyst, Jay Whitehead, humorously told ZD-TV Radio: if your skill shows up as a book in the "Dummies" series, that skill is no longer marketable.

What if you do not manage to time this process quite correctly? You will then likely be in a very tough situation if you need to find a new programming job, say if you get laid off. The best strategy is to utilize your social network, including former coworkers whom you might know only slightly; anyone who knows the quality of your work. Call them and say, "You know that I'm a good programmer, someone who really gets the job done. I can learn any skill quickly. Please pass my résumé to a hiring manager."
