
Many people do not realize that one of the first major open source projects was neither GNU nor Linux. It was TeX. Knuth developed the first version of TeX in 1977-1978 in order to avoid problems with the typesetting of the second edition of his TAoCP volumes. The program proved popular and he produced a second version (in 1982) which is the basis of what we use today. The language is documented in The TeXbook (Addison-Wesley, 1984, ISBN 0-201-13447-0, paperback ISBN 0-201-13448-9), and the complete source of the program was later published as a book as well.

I think that he just has a special love for typesetting. Otherwise, working with some vendor like Adobe to improve an existing tool so that it became suitable for tasks like typesetting complex mathematical texts such as TAoCP would have been a better solution to the problem. After all, TAoCP is not an open book. And free software is only free if you don't place a monetary value on your own time. From this point of view, eight years of Donald Knuth's time is a huge investment. The only benefit that I see is that computer science development stalled after, say, 1990, so the years lost to TeX created for Knuth a new opportunity to systematize a more or less static body of knowledge.

Anyway, eight years were lost on a peripheral (from the point of view of TAoCP goals) project, and Donald Knuth emerged as one of the open source pioneers. Unlike most open source pioneers, he published not only his code but the major algorithms as well.

Here is how Donald Knuth explained his motives for writing TeX in his Advogato interview:

Advogato: The first questions that I have are about free software. TeX was one of the first big projects that was released as free software and had a major impact. These days, of course, it's a big deal. But I think when TeX came out it was just something you did, right?

Prof. Knuth: I saw that the whole business of typesetting was being held back by proprietary interests, and I didn't need any claim to fame. I had already been successful with my books and so I didn't have to stake it all on anything. So it didn't matter to me whether or not I got anything financial out of it.

There were people who saw that there was a need for such software, but each one thought that they were going to lock everyone into their system. And pretty much there would be no progress. They wouldn't explain to people what they were doing. They would have people using their thing; they couldn't switch to another, and they couldn't get another person to do the typesetting for them. The fonts would be only available for one, and so on.

But I was thinking about FORTRAN actually, the situation in programming in the '50s, when IBM didn't make FORTRAN an IBM-only thing. So it became a lingua franca. It was implemented on all different machines. And I figured this was such a new subject that whatever I came up with probably wouldn't be the best possible solution. It would be more like FORTRAN, which was the first fairly good solution [chuckle]. But it would be better if it was available to everybody than if there were all kinds of things that people were keeping only on one machine.

So that was part of the thinking. But partly that if I hadn't already been successful with my books, and this was my big thing, I probably would not have said, "well, let's give it away." But since I was doing it really for the love of it and I didn't have a stake in it where I needed it, I was much more concerned with the idea that it should be usable by everybody. It's partly also that I come out of traditional mathematics where we prove things, but we don't charge people for using what we prove.

So this idea of getting paid for something over and over again, well, in books that seems to happen. You write a book and then the more copies you sell the more you get, even though you only have to write the book once. And software was a little bit like that.

It is interesting to note that work on TeX began well after Unix, and TeX was finished long after Unix became a major player in the OS arena. Actually, one can say that Knuth wrote an open source alternative to Unix troff (see below).

TeX is the composition engine (strictly speaking, an interpreter, not a compiler). It is essentially a batch engine, although a limited amount of interactivity is possible when processing a file, to allow error recovery and diagnosis.

To produce his own books, Knuth had to support the rich set of academic publishing conventions and typesetting styles – footnotes, floating insertions (figures and tables), etc., etc. As a mathematician/computer scientist, he developed an input language that makes sense to other scientists and, for math expressions, is quite similar to how one mathematician would recite a string of notation to another over the telephone. TeX accepts mixed commands and data. The command language is very low level (skip so much space, change to font X, set this string of words in paragraph form, ...), but is amenable to being enhanced by defining macro commands that build a very high level user interface (this is the title, this is the author, use them to set a title page according to AMS specifications).
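The jump from low-level commands to a macro-defined user interface can be sketched in a few lines of plain TeX. The macro \maketitlepage below is a made-up illustration for this page, not a standard or AMS macro:

```tex
% Low level: explicit spacing and font commands.
\font\titlefont=cmbx12        % load a bold 12pt Computer Modern font
\vskip 2cm                    % "skip so much space"
{\titlefont The Art of Computer Programming\par}

% High level: a macro (hypothetical) hides those details.
\def\maketitlepage#1#2{%
  \vskip 2cm
  {\titlefont #1\par}%
  \vskip 1cm
  #2\par}

\maketitlepage{The Art of Computer Programming}{Donald E. Knuth}
\bye
```

A document author then writes only the \maketitlepage line; the spacing and font decisions stay hidden in the definition. Layering macros like this on top of the low-level engine is exactly how formats such as LaTeX and the AMS styles are built.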

The handling of footnotes and similar structures is so well behaved that "style files" have been created for TeX to process critical editions and legal tomes.

It is also (after some highly useful enhancements in about 1990) able to handle the composition of many different languages according to their own traditional rules, and is for this reason (as well as for the low cost) quite widely used in Eastern Europe.

From the start, it has been popular among theoretical computer scientists, mathematicians, physicists, astronomers, and generally among a diverse category of research scientists who were plagued by the lack of necessary symbols on typewriters and who wanted a more professional look for their preprints.

Some of the algorithms in TeX have not been bettered in any of the composition tools devised in the years since TeX appeared. The most obvious example is the paragraph breaking: text is considered a full paragraph at a time, not line-by-line; this is the basic starting algorithm used in the HZ-program by Peter Karow (and named for Hermann Zapf, who developed the special fonts this program needs to improve on the basics).
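In rough outline (this is a simplification of the published Knuth-Plass algorithm, not its full form), each candidate line is scored by a badness derived from how far its glue must stretch or shrink, lines accumulate demerits, and breakpoints are chosen for the whole paragraph to minimize the total:

```latex
% badness: r is the fraction of the available stretch/shrink used
b \approx 100\,|r|^{3}
% demerits of line i: \lambda is the line penalty, p_i the break penalty
% (sign handling of p_i and adjacent-line demerits are omitted here)
d_i = (\lambda + b_i)^{2} + p_i^{2}
% TeX picks the breakpoint sequence minimizing total demerits,
% via dynamic programming over feasible breakpoints
\min_{\text{breakpoints}} \; \sum_i d_i
```

Because the minimization runs over the whole paragraph, an awkward break near the end can be avoided by re-breaking earlier lines, which a greedy line-by-line method can never do.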

In summary, TeX is a special-purpose programming language that is the centerpiece of a typesetting system that produces publication quality mathematics (and surrounding text), available to and usable by individuals. 

TeX itself uses only the font metrics, and produces a "device independent" output file – .dvi – that must then be translated for the particular output device being used (a laser printer, inkjet printer, typesetter; in the "old days" even daisy-wheel printers were used). The DVI translator actually accesses the font shapes, either as bitmaps, Type 1 fonts, or pointers to fonts installed in a printer with the shapes not otherwise accessible. PostScript is one of the most popular "final" output forms for TeX.
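A minimal plain TeX file makes the pipeline concrete; running it through the tex program yields a .dvi file, which a driver (dvips, for instance) then converts to PostScript for a particular device:

```tex
% story.tex -- a minimal plain TeX document;
% "tex story.tex" emits story.dvi, which a DVI driver renders
Euler's identity, typeset from plain text input:
$$ e^{i\pi} + 1 = 0 $$
\bye  % end of job
```

Note that story.dvi records only box positions and font metrics; the actual glyph shapes are supplied later by the driver, which is what makes the file device independent.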

TeX can be the cause of religious wars. For those of us who need the capabilities of TeX – for production of books and journal articles in research mathematics – no other current composition tool, proprietary or otherwise, can handle the material and produce high-quality, publication-worthy output, and simultaneously be usable by the writer of the document.


Metafont – the font creation program – is separate from TeX. It generates only bitmap fonts (although internally it creates outlines on which the bitmaps are based). There is still research to be done on combining overlapping outlines into a single outline that can be used like the outline of a Type 1 font; Knuth has "frozen" Metafont, so further research and development are being done by someone else, and the result will not be called "Metafont". In fact, it's also possible to use Type 1 fonts, and quite a few TeX installations are routinely using commercial Type 1 fonts, especially if they're not producing heavily technical material; only Computer Modern (the font Knuth developed), Lucida Bright, and Times have anywhere near a comprehensive symbol complement, and none of the symbol sets are from "major" font suppliers (too much work, not enough money; the demise of Monotype's symbol catalog – metal only – is a particularly great loss).

One of the major areas where TeX will hold its own over the next few years is as a "back end" to SGML and XML systems, where no human intervention is expected between data input (structured, not WYSIWYG) and removing the output from the printer or viewing it on a screen. Granted, this isn't "creative" in the sense most often discussed, but it's still important to the readability and usefulness of such documents that care is taken with the design and typography, and the flexibility and programmability of TeX make that possible.


Donald Knuth, a professor of computer science at Stanford University and the author of numerous books on computer science and the TeX composition system, rewards the first finder of each typo or computer program bug with a check based on the source and the age of the bug. Since his books go into numerous editions, he does have a chance to correct errors. Typos and other errors in books typically yield $2.56 each once a book is in print (pre-publication "bounty-hunter" photocopy editions are priced at $.25 per), and program bugs rise by powers of 2 each year from $1.28 or so to a maximum of $327.68. Knuth's name is so valued that very few of his checks – even the largest ones – are actually cashed, but instead framed. (Barbara Beeton states that her small collection has been worth far more in bragging rights than any equivalent cash in hand. She's also somewhat biased, being Knuth's official entomologist for the TeX system, but informal surveys of past check recipients have shown that this holds overwhelmingly for nearly everyone but starving students.) This probably won't be true for just anyone, but the relatively small expense can yield a very worthwhile improvement in accuracy.

If you become known for this sort of interest in accuracy and reward for the first report, we suspect that it will attract readers to the material itself, as well as just to the finding of errors.



Old News ;-)

Interesting user letter from 65302.html.

Chris - Subject: Once upon a time... (long) (Mar 20, 2000, 21:59:21 )
... I was a Wordperfect user once, running WP5.1 on DOS. I briefly tried the Windows version on Win3.0 (under OS/2), but stayed with the DOS version.

Then, I downloaded a version of the gcc compiler for my C++ programming under OS/2 for an engineering class project. The documentation was in texinfo format, which, someone explained to me
on usenet, was related to something called TeX. So I checked Knuth's TeXbook out of the college library to see what TeX was all about. I tried to write a lab report in TeX, and found it somewhat difficult but interesting, but found the result looked far better than what I was used to. Eventually, I learned LaTeX (a macro system front-end to TeX), and started using it for my day to day work, and stopped using Wordperfect.

It seemed at first strange to write papers in a text editor, but I already was using Emacs for programming (I downloaded it and started using it because the gcc documentation said it worked with gdb). I eventually realized that Emacs was far more powerful than Wordperfect for editing text, and that editing documents as text was no more difficult than editing a document directly in a form more or less
visually related to how things would look when printed to paper. Wordperfect--at least then--required lots of function keys to operate, so Emacs didn't seem too difficult in comparison anyway.

The results also looked much better than any word-processed documents I saw around me, both my old papers and those of students and professors around me. My senior paper in philosophy was written in LaTeX, and before my defense, a couple of professors on my committee told me they were impressed with how it looked (they were used to seeing Word files). They were all Mac-heads, so I just said "thanks" and didn't try to explain how I had typeset my paper.

Because I was already using TeX and gcc and Emacs, switching from OS/2 to Linux in 1994 was really easy for me, and now I'm finishing a moderate-sized PhD thesis in LaTeX written in Emacs AucTeX mode under Linux and Irix. I would hate to have to go back to using a word processor.

It is great that Wordperfect and other word-processors are available for Linux now, but still, if you don't know what TeX and LaTeX are, and especially if you have a technical inclination, look into them.
TeX is an amazing program, it is free software, it is extremely portable, its typesetting is still unmatched (especially for math), and it is largely "finished" and believed to be free of programming bugs. Some people even buy the source code formatted as a book to read, and if you find a bug, Knuth will pay you for it. He even wrote a book about the bugs found in earlier versions.

I'm writing this because I worry that many new Linux users may never hear about TeX and LaTeX. The reason is that TeX is basically finished (what a concept!), LaTeX is developing slowly, and there are
no advertising budgets behind them.

LyX -- A very interesting solution that puts up a pseudo-WYSIWYG front end and hands the hard work off to LaTeX.

KLyX -- a port to the KDE Desktop Environment

teTeX -- TeX distribution

Version 2.0 of TtH, the TeX to HTML translator, is released.

TtH translates LaTeX or Plain TeX, including equations, into HTML, thus making possible the browsable web publication of mathematics. TtH uses no GIF images for equation rendering; instead it uses the native fonts of modern browsers and HTML tables for layout. The results generally have better visual quality, especially alignment, than approaches that use images. Moreover, documents produced by TtH are more compact and far faster to download, view and navigate than the alternatives. TtH translates TeX faster than browsers can render the output, so TtH is eminently capable of real-time document translation and serving from a single maintained TeX source. TtH is completely portable to any operating system. It is available from

Recommended Links



TeX - Wikipedia, the free encyclopedia

TeX Users Group (TUG) home page

MiKTeX Project Page -- Windows port

Random Findings

Because troff required a commercial license, it had to be cloned for free UNIX. The groff formatter suite used on most free BSD systems was cloned by James Clark in the 1980s from UNIX troff, with ideas from SoftQuad and other extended versions of troff. As many people probably know, the initial version of Unix was designed for publishing. Bell Labs was historically among the first to experiment with typesetting tools. J. E. Saltzer had written "runoff" for CTSS. Bob Morris moved it to the 635, and called it "roff". Ritchie rewrote that as "rf" for the PDP-7, before there was UNIX. At the same time, the summer of 1969, Doug McIlroy rewrote roff in BCPL (...), extending and simplifying it. The first version of UNIX was developed on a PDP-7 which was sitting around Bell Labs. In 1971 the developers wanted to get a PDP-11 for further work on the operating system. In order to justify the cost of this system, they proposed that they would implement a document formatting system for the AT&T patents division. It was McIlroy's version that first Joseph Ossanna and, after his death, Brian Kernighan turned into troff... Nearly a decade later, Ted Dolotta created the memorandum (-mm) macros, with a lot of input from John Mashey. Thereafter, Eric Allman wrote the BSD -me macros. There is a book by Salus, "A Quarter Century of UNIX", that might provide additional details, but the main facts are freely available on the Internet, for example in Groff History:

`troff' can trace its origins back to a formatting program called `runoff', written by J. E. Saltzer, which ran on MIT's CTSS operating system in the mid-sixties. This name came from the common phrase of the time "I'll run off a document." Bob Morris ported it to the 635 architecture and called the program `roff' (an abbreviation of `runoff'). It was rewritten as `rf' for the PDP-7 (before having UNIX), and at the same time (1969), Doug McIlroy rewrote an extended and simplified version of `roff' in the BCPL programming language.

The first version of UNIX was developed on a PDP-7 which was sitting around Bell Labs. In 1971 the developers wanted to get a PDP-11 for further work on the operating system. In order to justify the cost for this system, they proposed that they would implement a document formatting system for the AT&T patents division. This first formatting program was a reimplementation of McIlroy's `roff', written by J. F. Ossanna.

When they needed a more flexible language, a new version of `roff' called `nroff' ("Newer `roff'") was written. It had a much more complicated syntax, but provided the basis for all future versions. When they got a Graphic Systems CAT Phototypesetter, Ossanna wrote a version of `nroff' that would drive it. It was dubbed `troff', for "typesetter `roff'", although many people have speculated that it actually means "Times `roff'" because of the use of the Times font family in `troff' by default. As such, the name `troff' is pronounced `t-roff' rather than `trough'.

With `troff' came `nroff' (they were actually the same program except for some `#ifdef's), which was for producing output for line printers and character terminals. It understood everything `troff' did, and ignored the commands which were not applicable (e.g. font changes).

Since there are several things which cannot be done easily in `troff', work on several preprocessors began. These programs would transform certain parts of a document into `troff', which made a very natural use of pipes in UNIX.

The `eqn' preprocessor allowed mathematical formulae to be specified in a much simpler and more intuitive manner. `tbl' is a preprocessor for formatting tables. The `refer' preprocessor (and the similar program, `bib') processes citations in a document according to a bibliographic database.

Unfortunately, Ossanna's `troff' was written in PDP-11 assembly language and produced output specifically for the CAT phototypesetter. He rewrote it in C, although it was now 7000 lines of uncommented code and still dependent on the CAT. As the CAT became less common, and was no longer supported by the manufacturer, the need to make it support
other devices became a priority. However, before this could be done, Ossanna was killed in an auto accident.

So, Brian Kernighan took on the task of rewriting `troff'. The newly rewritten version produced a device independent code which was very easy for postprocessors to read and translate to the appropriate printer codes. Also, this new version of `troff' (called `ditroff' for "device independent `troff'") had several extensions, which included drawing functions.

Due to the additional abilities of the new version of `troff', several new preprocessors appeared. The `pic' preprocessor provides a wide range of drawing functions. Likewise the `ideal' preprocessor did the same, although via a much different paradigm. The `grap' preprocessor took specifications for graphs, but, unlike other preprocessors, produced `pic' code.

James Clark began work on a GNU implementation of `ditroff' in early 1989. The first version, `groff' 0.3.1, was released June 1990. `groff' included:

Development of GNU `troff' progressed rapidly, and saw the additions of a replacement for `refer', an implementation of the `ms' and `mm' macros, and a program to deduce how to format a document (`grog').

It was declared a stable (i.e. non-beta) package with the release of version 1.04 around November 1991.

Beginning in 1999, `groff' has new maintainers (the package was an orphan for a few years). As a result, new features and programs like `grn', a preprocessor for gremlin images, and an output device to produce HTML output have been added.

Another nice summary of Bell Labs work was found in Daemon News 1999/03 - A History of UNIX before Berkeley UNIX:

3.2 roff

Having a good text editor is only half the text processing battle. Having entered your text, you still must format it neatly for presentation. That's the function of a text-formatting program. The earliest UNIX formatter known to man is roff, a line-command formatter. Like ed, roff is part of a large and diverse family, one that includes the runoff package found on Digital Equipment computers (the latest release is called DSR, for DEC Standard Runoff). The earliest Runoff program is attributed by Kernighan and Plauger to J. Saltzer, who wrote it for CTSS. Runoff also is an ancestor of the Script programs available on IBM mainframe systems; that descent would be equally interesting for IBMers to trace (no doubt we'll get letters from those with information to SHARE with us).

Roff was written by M. D. McIlroy, at Research. Like ed, roff was well in place by the First Edition of UNIX. It was considered static by the time of the Sixth Edition, regarded as obsolescent by the time of the Seventh, and dropped altogether by the time of System III.

3.3 nroff and troff - The assembler of text

Computerists are never satisfied. So after roff came ``New Roff'', or nroff, written by the late Joseph Ossanna, who throughout his career was concerned with improving the way text was handled. Ossanna's nroff, as Kernighan and Pike relate,

``was much more ambitious [than roff]. Rather than trying to provide every style of document that users might ever want, Ossanna made nroff programmable, so that many formatting tasks were handled by programming in the nroff language.

``When a small typesetter was acquired in 1973, nroff was extended to handle the multiple sizes and fonts and the richer character set that the typesetter provided. The new program was called troff (which by analogy to ``en-roff'' is pronounced ``tee-roff''). nroff and troff are basically the same program...'' with divergent processing appropriate to the differences in output device. [UNIX Programming Environment, page 289]

They point out that troff is tremendously flexible, and indeed many computer books have been typeset using it. But it can be complex to use. As a result, most everyone uses one or another ``macro package'' - a series of pre-programmed commands - and optionally one of the preprocessors (such as eqn, tbl, refer, and more recently pic, grap and ideal). Troff was originally written in assembler, but was redone in C in 1975. Joseph Ossanna wrote both versions and maintained it until his death in 1977.

3.4 Macro Packages

The earliest macro package to come into wide use was ``ms'', for ``manuscript''[Lesk1977a]. Written by Mike Lesk, the ms macros provide a powerful but easy-to-learn (by comparison with bare nroff) approach to document formatting. The ms macros were distributed with the Sixth and Seventh Edition UNIX and most subsequent releases. The package was picked up and extended at Berkeley.

The USG versions of UNIX include a macro package called `mm', for ``memorandum macros''[Dolo1978a]. These do most of the same things as ms, in slightly different ways, with the addition of numbered lists and a few other bells and whistles, but are about half again as big as ms. The startup time is such with mm that USG in 1979 had to resort to a compacted form of the macro packages; this made it into System III.

There are two versions of the `man' macro package used to format the manual pages in Volume 1 of the UNIX Manual Set. One was used on V6, and the other from V7 on. If you see a manual page beginning with `.th' instead of `.TH', it's from V6. System III has an (undocumented) command mancvt, and 4.1BSD had trman, to convert files from the old to the new format.

There is also the `mv' macros for producing viewgraph or slide presentations. This is a USG product, and versions of the USG manuals from PWB 1 up to just before System III carried the now-famous line:

The PWB/UNIX document entitled: PWB/UNIX View Graph and Slide Macros is not yet available.

System III manuals appeared with scarcely a mention of mv, and (finally) it was documented with the System V manuals.

3.5 tbl, eqn, refer

One view of troff is as an assembler language for text processing. If this be true, then eqn, tbl, refer and later preprocessors are the high-level compilers that go with it.

Mathematics has always been an inconvenience to traditional typesetting. This observation led Brian Kernighan and Lorinda Cherry to develop eqn for UNIX, and would later lead Donald Knuth to write his TeX typesetting package with math capabilities built in.

The eqn program reads an entire nroff/troff input file and passes it unchanged except for ``equation specifications'' delimited by .EQ and .EN requests (or pairs of delimiter characters for in-line equations). Material inside these requests is used to construct equations of considerable complexity from simple input. English words such as `sum', `x sub i', and 'infinity' produce the expected results (a large Sigma, x with a subscript i, and the infinity symbol respectively). In most cases no typesetter wizardry is required. A list of some forty extra character definitions lives in the file /usr/pub/eqnchar; these can be copied, extended, or altered by the knowledgeable user.
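For illustration, here is a small eqn fragment in the spirit of the description above (the formula itself is just an example chosen for this page):

```
.EQ
sum from {i=1} to inf  1 over {i sup 2}  =  pi sup 2 over 6
.EN
```

Run through the pipeline (e.g. `eqn file | troff -ms`), this sets a large Sigma with its limits, a built-up fraction, and the Greek letter pi, with no typesetter wizardry required of the author.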

Eqn was written by Brian Kernighan and Lorinda Cherry. The ``new graphic symbols'' in /usr/pub/eqnchar are the work of Carmela L'Hommedieu (formerly Scrocca) at Bell Labs. The first public write-up of eqn appears to be a paper by Kernighan and Cherry in the CACM, March 1975[Kern1975a]. The software was included in the Sixth Edition UNIX.

Like eqn, tbl is a preprocessor that passes over a formatter input file looking for special requests (here .TS and .TE)[Lesk1976a]. The material between these requests is expected to be a series of special commands to tbl and some tabular data. To greatly over-simplify how this program works, tbl replaces tab characters with explicit horizontal and vertical moves to make the rows and columns in the table align exactly under control of the table specification. It is invaluable for putting tabular material of any kind into documents.
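A tiny made-up example shows the shape of tbl input: an options line ending in a semicolon, format lines with the last one ending in a period, and data rows between the .TS and .TE requests:

```
.TS
center box;
c c
l n.
Item	Count
apples	12
pears	7
.TE
```

The columns in the data rows are separated by tab characters, which tbl replaces with the explicit horizontal moves described above; the `c`, `l`, and `n` letters in the format lines request centered, left-aligned, and numerically aligned columns respectively.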

Mike Lesk wrote tbl at Research; the idea for it came from an earlier table formatting program by J. F. Gimpel. Tbl first appeared outside the Labs with the V6 release of UNIX. It appeared in its present form on the ``Phototypesetter Version 7'' (interim V7) and PWB tapes, in Seventh Edition UNIX distributions, and in all systems since then.

Lesk also wrote refer[Lesk1978a], a bibliographic citation and reference package, which first appeared in V7. Regrettably, not all UNIX distributions picked it up (like Lesk's tbl, refer contained many bugs, some of which manifested as core dumps on various architectures; that probably didn't help, though there have also been strong rumors that AT&T didn't want to distribute any form of software incorporating indices, notably the V7 dbm library, in order to leave a market for third-party data base companies). It incorporates knowledge of how to format citations and can produce from an approximate citation a footnote (or reference) mark and a formal citation as a footnote or reference at the end. Once you build up (or copy) a large collection of references, this is very handy. If your system doesn't come with refer, grefer can be found in the groff distribution.

3.6 Typesetter Independent troff

New! Improved! Yet again! That's right. troff is infinitely perfectible. In 1979, Brian Kernighan at Research set out to re-write troff. Rather than rewrite it completely and be incompatible with the tens or hundreds of thousands of documents in existence, he chose to ``clean up'' troff. It turned out to be rather more akin to cleaning the Augean Stables than he had imagined, but resolve did not desert Kernighan. Finally he emerged with a tape for the Typesetter-Independent (later dubbed ``Device-Independent'', to avoid a sexually-charged acronym) troff[Kern1982a], along with revised tbl/eqn/refer and two new preprocessors, pic and Chris Van Wyk's ideal[Kern1981a]. The new ditroff produced output that is readable ASCII containing a low-level description of the commands needed to drive a typesetter; this `intermediate language' is fed to a particular post-processor to generate the particular commands needed to drive a typesetter. Pic (as the name implies) does pictures. It is useful for drawing ``flow-chart''-like drawings, but there is much more to it than that. Ideal also draws pictures, but is somewhat more mathematical in usage than is pic.

This set of products formed the basis of the commercialized ``Documentor's Workbench'' package from AT&T. And work continues, of course. Kernighan has been working on cleaning up the appearance of eqn output. Recently, Kernighan and John Bentley have written grap, a graph plotting preprocessor for pic. The program was released with Release 2 of Documentor's WorkBench, released in early 1986. A report on grap is also available from Bell Laboratories as a CSTR[Bent1984a].

3.7 Troff in the outside world

Several Universities have further developed troff. Since these are changing all the time, it is not feasible to mention them all here. A future issue of the USENIX journal ;login: will contain a summary of this work.

Many companies picked up the commercial (binary sublicense) release of Documentor's WorkBench. As a result one can now purchase this product in binary-only form for most any UNIX computer. Most of these ``troff houses'' have added post-processors for specific output devices; few have made substantial change to the main program. One of the authors (Darwin) was employed at SoftQuad in the mid-1980's, then believed to be the only company doing serious development on troff internals (AT&T had no developers assigned to the product). SoftQuad had drastically changed the intermediate language devised by Kernighan. It is now more readable and more amenable to additional processing with the UNIX software tools than is the standard ditroff. Most troff houses support the common output devices (in 1986 this means the Apple LaserWriter® and other PostScript® printers, Imagen imPRESS® printers, the Hewlett-Packard LaserJet®, and other devices; by the mid 1990's it means PostScript and HP).

A few attempts have been made to port Documentor's WorkBench to non-UNIX environments. Both SoftQuad and Elan Systems have an MS-DOS port of complete DWB. Other companies (including Image Network) have ported the product to Digital Equipment's VMS system. But the main usage of troff continues to be on UNIX.

3.8 Of Mice and Blits

Tired of typing at a dull, boring 24×80 screen, Rob Pike and Bart Locanthi had a better idea. Integrating the ideas of the Alto project and related work at Xerox PARC (Palo Alto Research Center) with the UNIX approach to things, they built a special terminal called the Blit[Pike1985a, Pike1984a, Carg1984a] (not an acronym) with a Motorola 68000 processor, high-resolution screen, good keyboard, a fabulous mouse, and software split between the host and the terminal. Their particular combination of these ingredients makes possible a form of interaction with the computer that was years ahead of its time. Pike, in addition to being a radical advocate of the UNIX approach to software development, is quite visionary. Some of his work was described at recent (1980's) USENIX conferences. Sadly, large parts of the audience didn't seem to grasp the essentials of what he was saying. Some of the same ideas are of course found in other bit-mapped screen environments, such as the SUN workstation and The X Window System, in somewhat different forms.

Blits were available only inside AT&T, mostly in Bell Labs, although a few were released to selected universities. The Blit was commercialized by AT&T/Teletype, and was sold (with a different microprocessor) as the AT&T DMD 5620 terminal.

(More recently, much of this work has been consolidated into an operating system called Plan 9 From Bell Laboratories.)[Pike1990a, Pike1995a]

3.9 Style, Diction, Writer's Workbench

One of the authors (Darwin) has long been interested in the computerized processing of text, a term I take to mean more than is commonly included as ``word processing.'' So I was quite interested to read a paper by L. E. McMahon, Lorinda L. Cherry and R. Morris entitled ``Statistical Text Processing'' in the 1978 special BSTJ issue on UNIX[McMa1978a]. I would later use several of the techniques mentioned in the paper.

At the end of the paper, Ms. Cherry describes a program, parts, for finding parts of speech in English text. It was written to be the first pass of a system to add inflection to the speak program written by Doug McIlroy, but the person doing the stress part left the company. Ms. Cherry wasn't interested in the stress assignment, so she documented the work done so far and went on to other things.

In the spring of 1979, W. Vesterman of Rutgers approached Doug McIlroy at Research about computerizing one of the techniques Vesterman used in teaching writing. The students had to count surface features in their text and in a sample of text written by a professional writer. That summer, Ms. Cherry expanded parts considerably, and added the code that turned it into style, a program to analyse the readability and other characteristics of a textual document. She also developed diction to check for awkward word uses, over-used words, and other problems facing everyone who composes text for others to read. She also modified deroff to find the real text in a document. Vesterman consulted on this work.

And when the 4.1BSD release of the system came out, I was pleasantly surprised to see that style and diction were present[Cher1981a]. Bell Labs has a policy of sometimes releasing software to educational institutions; this probably explains the release at Berkeley.

While this was going on, the Human Factors group at Piscataway (now at Summit) was getting interested in automating document review, and Nina Macdonald of that group called Ms. Cherry about using parts. She had worked at Murray Hill in a Linguistics group and was familiar with the program. Ms. Macdonald took style and diction, and WWB evolved from there. Writer's Workbench (WWB) consists of style, diction and a dozen or so related programs for finding problems in written work. The ``chattiness'' level of the programs is set for the beginning user, but can easily be adjusted by the advanced user. The ideas for this work came from the Piscataway group, the Murray Hill group, and from Colorado State University, where extensive use of the Writer's Workbench (described at USENIX, Toronto, July 1983) currently puts several thousand undergraduates on WWB each year. The use of WWB is perceived to significantly improve the students' writing skills.

Many writers will be thankful to all who contributed, because these programs have proven themselves useful many times over. If buying a 4.1 or 4.2BSD system, insist on style and diction. If you get a System V UNIX, consider getting the WWB add-on if you'll be doing any document preparation. Writer's Workbench is one product that should survive and prosper as UNIX continues to evolve. The next major release of WWB (3.0) was scheduled for the spring of 1985.


About two-thirds of this book concerns Knuth's experiences writing a book entitled "3:16." Knuth decided to study the Bible through a sort of stratified random sample: taking Chapter 3, verse 16 from each book of the Bible and studying it in depth. These discussions do have their interest, but I do not feel that "Things a Computer Scientist Rarely Talks About" really stands on its own.

A better title for this book would have been something like "3:16: The Story Behind The Book." See

TechNetCast - MIT God and Computers Lecture Series Donald Knuth

Consider chapter 4. This is mostly a series of stories about how Knuth and Hermann Zapf asked many of the world's leading calligraphers to illuminate the verses he used. He talks about the subtle colors used to print them, and the delicacy of the originals (not fully picked up by a 600 dpi scanner and requiring pixel-level editing to prepare them for reproduction). It therefore can only be described as irritating to be presented with these pictures, about fifty of them, in the form of little 2x3" halftone reproductions.

I do not feel the book delivers much on the promise that Knuth will reveal "the many insights that [he] gained" from the work. He talks a good deal about the reasoning and the process behind the project and it is quite interesting, perhaps even inspiring in the sense of making one wish to do likewise. But his presentation of his own beliefs is rather muted and low-key. I perceive this as modesty, not evasion. Still, it is not what I expected. Consider chapter 4, again: we learn a great deal about how he feels about the esthetics of the calligraphy, how he edited them, how he dealt with issues like calligraphers who inadvertently made mistakes in the text. But are there really any religious insights here? Well, subtle ones, perhaps.

I think he is sincere when he says, referring to "3:16," "I am not here today to sell copies of the book." In a discussion, someone asks "What would you recommend for computer science students who have never read the Bible?" and I believe Knuth is joking when he says "The number one recommendation is that they should certainly read my book. You know, it makes a wonderful Christmas gift. More seriously..."

Chapters 5 ("Glimpses of God") and 6 ("God And Computer Science") are fascinating, and come close to delivering on the promise of the book. I would gladly read a book-length expansion of the material in these two chapters.

Still, on finishing this book, I am aware of two feelings: a) an interest in reading "3:16," and b) an irritation with myself for having purchased this one.

5 of 5 stars A unique and intimate portrait of the Bible., January 13, 2002
Reviewer: A reader from New York, New York

From his idiosyncratic perspective as a computer scientist, Knuth presents an aesthetically pleasing and intellectually inviting commentary on the 3:16's.

In this day and age of technological sophistication, it is so courageous that a scientist and scholar of Knuth's stature can say "it's tragic that scientific advances have caused many people to imagine that they know it all, and that God is irrelevant or nonexistent. The fact is that everything we learn reveals more things that we do not understand... Reverence for God comes naturally if we are honest about how little we know." [1]

Knuth is candid about what he knows as well as what he doesn't know and he presents his views in a non-judgemental, introspective manner. For example, Knuth is surely including himself when he states "God sees the rottenness, deceit, and hypocrisy in every one of us..." [2] Furthermore, there are rare glimpses into Knuth the man as he unabashedly says what he feels. To illustrate, Knuth describes his thoughts about his own mortality and how he felt when his father died. [3]

Ultimately, this book is Knuth's solemn and joyous celebration of his relationship with God. But don't let the elegance of the artwork or the relative brevity of the commentary fool you into thinking this book is merely easy on the eyes. The Christian will find this an uplifting and spiritually challenging study, while the unbeliever will discover the richness of the 3:16's and why Knuth finds the Bible is relevant to everyday life.

Knuth is a consummate craftsman and this is a towering work of biblical scholarship, an enduring exegetical legacy for the ages.

Quotes and references from book:
[1] Proverbs 3:16 study
[2] Romans 3:16 study
[3] Job 3:16 study


5 of 5 stars We Are Not Worthy!, August 17, 2000
Reviewer: A reader from Sim City, CA (Somewhere in the South Bay)

In the well-rounded, whimsical, yet diligent style that is so characteristic for all of his work, Knuth has given us a small stochastico-theologico-typographical masterpiece.

This work shows Knuth's authorship to an extent that even medieval monks could only dream of: He not only wrote the exegeses, but he translated each of the 59 verses of the Bible he picked from the original Hebrew or Greek; he picked 59 illustrators to illuminate the verses; he wrote the software that controls the placement of each character on the pages of the book, and he wrote the software that controls the bits for each character of the book.

This book is by no means Knuth's most important contribution to humankind, but if it were the only one of his works to reach posterity, it would still suffice to show him as the compassionate and inspired genius that he is.

5 of 5 stars closest thing to a miracle in our times, December 20, 1997
Reviewer: A reader from Pasadena, California

Zapf + Knuth. Two of the best in their fields came together and made a book about the best of books. Knuth really dedicated himself in writing this book, and the result is clearly superb: I was compelled to say Amen at the turning of every few pages. He also presented the material in a jargon-free way, keeping it relevant to the modern world. If you miss the days when scientists and artists unabashedly manifest their great faith in the Lord, this is the book to get.


The book 3:16, Bible Texts Illuminated combines Don Knuth's life-long interests in Bible study, typography, and art. Each of the 59 books of the Bible whose third chapter contains at least 16 verses is given its own chapter. Each chapter has four pages. The first page summarizes the book and its historical context. The second page contains an illustration of the sampled verse. The third page gives the text of that verse and discusses it on that page and the fourth, relating it to other Bible references, other translations, and the original languages from which the translations have been made. In some cases his own translations are used.
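The selection rule behind 3:16 is simple enough to sketch in a few lines. The sketch below is illustrative only: the function name is my own invention, and the verse counts are a tiny hand-picked sample rather than a complete dataset.

```python
# Knuth's "stratified sampling" rule for 3:16: keep every book of the
# Bible whose chapter 3 contains at least 16 verses, then study
# chapter 3, verse 16 of each qualifying book.

def select_316_books(verse_counts):
    """verse_counts maps book name -> {chapter number: verse count}."""
    return [book for book, chapters in verse_counts.items()
            if chapters.get(3, 0) >= 16]

# A tiny illustrative sample (not the full canon):
sample = {
    "Genesis":  {3: 24},   # chapter 3 has 24 verses -> qualifies
    "Obadiah":  {},        # a single-chapter book -> no chapter 3
    "Proverbs": {3: 35},   # qualifies
}

print(select_316_books(sample))  # -> ['Genesis', 'Proverbs']
```

Applied to the whole canon, this rule is what yields the 59 books that became the 59 chapters of 3:16.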

Each of the illustrations is done by a different one of the world's leading calligraphers.

The book, and also a poster containing those illustrations, is available from A-R Editions, Inc.

The book 3:16, Bible Texts Illuminated is a marriage of profound Biblical insights, exquisite calligraphy, and balanced page typography - "a treat for the mind, the eyes, and the spirit."



Copyright © 1996-2018 by Dr. Nikolai Bezroukov. This site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is a compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of the original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.


Last modified: October 03, 2017