Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting the "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

Software development realism vs Software development idealism.
Two flavors of open source software development idealism: Stallmanism vs Raymondism

Something is rotten in the state of Denmark

Shakespeare, Hamlet
 

How essential to the open exchange of knowledge is the notion that none of the participants are getting rich off the exchange?

"Open Source Software Development
as a Special Type of Academic Research"

The main text of this page was recently converted into the article Software realism vs. software idealism. I consider open source to be a special type of academic research and call this approach Software Realism, as opposed to Software Idealism, which is represented by two major schools: Stallmanism and Raymondism. Stallmanism justifies this software development ideology through the discourse of ethics and freedom (it goes beyond technical issues of efficiency and raises moral questions of justice, equality and the public good). Stallman explicitly stated: "I consider [Free Software] a human right, and thus a moral norm." His introduction of the ideas of good and evil makes his position a kind of secular religion. Raymondism is based on neoliberal ideology (or, more correctly, on Neoliberal rationality) and postulates the higher technical efficiency of open source software development. As we know, neoliberalism is also a special variety of secular religion. Here is the abstract:

In the vision of Software Realists, programmers, like all humans, have weaknesses, are guided primarily by self-interest, and as such can benefit from formal organization and incentives (direct or indirect, via professional norms) to act outside the limits defined by their own self-interest. Software Realists see the evils of the software world as given, derived from the limited and unhappy choices available given the inherent moral and intellectual limitations of human beings and of existing hardware. Some of them try to create better software, like the programmers involved in the creation of all the BSD flavors, but they consider commercial programmers as equals (actually many of them wear two hats) and do not object to the reuse of the code that they developed pro bono in proprietary software.

In this slightly tragic worldview, software development is a hard and often underappreciated craft that requires a special, pretty rare, talent. People with this talent, like talented sport stars (for example tennis players or ice ballet dancers), risk their health while they are young creating short-lived but tremendously complex artifacts (large software systems are probably the most complex systems ever created by humans) and for this reason should be remunerated accordingly. It is important to understand that, like artists creating paintings on a sand beach, programmers produce short-lived creations, and a new wave of hardware often wipes them clean. For example, who now remembers the creators of the PL/1 optimizing and debugging compilers, compilers that were a real breakthrough in many areas of the compiler-writing art and in comparison with which some modern compilers still look like junk, despite the fact that they were written 30 years ago?

In other words, they see that due to the immense complexity of those artifacts, all software sucks. It is just that some software sucks less. It can be proprietary software, it can be free software -- everything depends on the talent of the creators, not so much on a particular ideology (which, by the way, can be completely wrong: the Soviets invented quite a few new technologies and managed to launch the first man into space). Thus, in the view of the Software Realists school, the perfection of software is unachievable, and good old Unix with its classic codebase might sometimes be preferable to the young Turks, be it Windows or Linux. That does not mean that the Linux or Windows codebase is all crap. It just means that they are just other OSes in a long historical line of such systems. In some areas they might be better than the competition, in some worse, but none has a monopoly on innovation (actually Linux is a pretty conservative kernel despite the radical ideology behind it).

Software Realists are unconvinced by claims of Linux superiority and want to see hard facts. Furthermore, history taught them that the proper tradeoffs between different subsystems of an OS can be ironed out only via a long, expensive and painful process that requires strong, highly paid managers, programmers and testers who are ready to sacrifice their health for the success of their creation. Real art requires sacrifices, and it is better when such sacrifices are properly remunerated, although stories of talented artists who died in poverty are nothing new.

Because real talents are rare, good software is a very expensive thing. The Software Realism school presupposes that modern software is almost always a compromise between the demands of the talent and the demands of the marketplace.

Software Realists contribute to open source projects not because they see them as superior to commercially developed software, but because they consider them an important part of the software ecosystem, a part that requires support and nourishment and that helps to keep closed source software developers honest. It is an extremely important countervailing force that has value of its own, independently of whether a particular open source program is superior to a similar closed source program or not; the availability of an open source codebase makes a program more suitable for some purposes, for example in education, and often compensates for real or perceived shortcomings. They view open source development as a special type of academic research that has a similar set of motivations, similar risks and a similar reward structure.

The Software Idealist school (in both its Stallmanism and Raymondism incarnations) holds that mankind in general, and programmers in particular, have not yet achieved their full moral potential, and that they are (at least in principle) perfectible if guided by a wonderful new software development ideology. The foolish and immoral choices inherent in the creation of proprietary software explain all the evils of the existing software world; a revolution was needed, and it actually already came, in the form of either the free software movement or its less pure form called the open source software movement. Both major Software Idealism schools rely on the volunteer labor of programmers connected via the Internet to produce immortal gems of software wisdom that will crush proprietary software developers like cockroaches.

In the Software Idealism worldview, whether purely moral incentives actually work in the long run, and what will happen when Linus Torvalds becomes yet another retired dot-com multimillionaire with his own yacht, is irrelevant to the achievement of true software justice, justice for all. This utopian view holds that volunteer programmers' potential is immense and can accomplish everything, including improving human nature: the new man should get rid of those outdated economic rewards for software development and be satisfied working part time at McDonald's in order to be able to participate in the movement. So the Software Idealism vision promotes the pursuit of the high moral ground of software freedom, which somehow guarantees the best software solutions. At the end of the day, the newly liberated men should all storm the evil Bastille of software oppression, which is of course Microsoft, and dance on its ruins. Sometimes in their enthusiasm they attack other sinful old-fashioned proprietary software vendors instead of Microsoft. Until the opening of Solaris (and even after that), their target was sometimes Sun.

And if the unwashed masses who corrupt their souls by using Windows are slow in catching on, then it is the role of the intellectual vanguard (the keepers of the programming craft, who in Eastern Europe might be called the "programming intelligentsia") to lead them -- even if, in the short run, the masses may be unhappy with the results because they might not be able to use the full capabilities of their laptops, cameras or scanners. In general, Software Idealists think that higher moral considerations should guide us and that those considerations somehow guarantee the creation of better software, software that is not only better but as perfect as it is free.

Fairy tales

Like any ideology, Stallmanism and Raymondism rewrite history, producing myths. We will note just one aspect of this mythological history: gift economies are much more intricate systems than ideologically blind open source proponents like ESR assume (David Graeber, On the Invention of Money – Notes on Sex, Adventure, Monomaniacal Sociopathy and the True Function of Economics, naked capitalism):

What anthropologists have in fact observed where money is not used is not a system of explicit lending and borrowing (aka gift economies -- NNB), but a very broad system of non-enumerated credits and debts. In most such societies, if a neighbor wants some possession of yours, it usually suffices simply to praise it (“what a magnificent pig!”); the response is to immediately hand it over, accompanied by much insistence that this is a gift and the donor certainly would never want anything in return.

In fact, the recipient now owes him a favor. Now, he might well just sit on the favor, since it’s nice to have others beholden to you, or he might demand something of an explicitly non-material kind (“you know, my son is in love with your daughter…”) He might ask for another pig, or something he considers roughly equivalent in kind.

But it’s almost impossible to see how any of this would lead to a system whereby it’s possible to measure proportional values. After all, even if, as sometimes happens, the party owing one favor heads you off by presenting you with some unwanted present, and one considers it inadequate—a few chickens, for example—one might mock him as a cheapskate, but one is unlikely to feel the need to come up with a mathematical formula to measure just how cheap you consider him to be.

As a result, as Chris Gregory observed, what you ordinarily find in such ‘gift economies’ is a broad ranking of different types of goods—canoes are roughly the same as heirloom necklaces, both are superior to pigs and whale teeth, which are superior to chickens, etc—but no system whereby you can measure how many pigs equal one canoe. [3]

... For instance, Sumerians, though they had the technological means to do so, never produced scales accurate enough to weigh out the tiny amounts of silver that would have been required to buy a single cask of beer, or a woolen tunic, or a hammer—the clearest indication that even once money did exist, it was not used as a medium of exchange for minor transactions, but rather as a means of keeping track of transactions made on credit.

In many times and places, one sees a similar arrangement: two sorts of money, one, a common long-distance trade item, the other, a common subsistence item—cattle, grain—that’s stockpiled, but never traded. Still, Temple bureaucracies and their ilk are something of a rarity. In their absence, how else might a system of pricing, of proportional equivalents between the values of any and all objects, potentially arise? Here again, anthropology and history both provide one compelling answer, one that again, falls off the radar of just about all economists who have ever written on the subject. That is: legal systems.

If someone makes an inadequate return you will merely mock him as a cheapskate. If you do so when he is drunk and he responds by poking your eye out, you are much more likely to demand exact compensation. And that is, again, exactly what we find. Anthropology is full of examples of societies without markets or money, but with elaborate systems of penalties for various forms of injuries or slights. And it is when someone has killed your brother, or severed your finger, that one is most likely to stickle, and say, “The law says 27 heifers of the finest quality and if they’re not of the finest quality, this means war!” It’s also the situation where there is most likely to be a need to establish proportional values: if the culprit does not have heifers, but wishes to substitute silver plates, the victim is very likely to insist that the equivalent be exact. (There is a reason the word ‘pay’ comes from a root that means ‘to pacify’.)

Again, unlike the economists’ version, this is not hypothetical. This is a description of what actually happens—and not only in the ethnographic record, but the historical one as well. The numismatist Phillip Grierson long ago pointed to the existence of such elaborate systems of equivalents in the Barbarian Law Codes of early Medieval Europe. [7]

For example, Welsh and Irish codes contain extremely detailed price schedules where in the Welsh case, the exact value of every object likely to be found in someone’s house were worked out in painstaking detail, from cooking utensils to floorboards—despite the fact that there appear to have been, at the time, no markets where any such items could be bought and sold. The pricing system existed solely for the payment of damages and compensation—partly material, but particularly for insults to people’s honor, since the precise value of each man’s personal dignity could also be precisely quantified in monetary terms. One can’t help but wonder how classical economic theory would account for such a situation. Did the ancient Welsh and Irish invent money through barter at some point in the distant past, and then, having invented it, kept the money, but stopped buying and selling things to one another entirely?

... Even when strangers met and barter did ensue, people often had a lot more on their minds than getting the largest possible number of arrowheads in exchange for the smallest number of shells. Let me end, then, by giving a couple examples from the book, of actual, documented cases of ‘primitive barter’—one of the occasional, one of the more established fixed-equivalent type.

The first example is from the Amazonian Nambikwara, as described in an early essay by the famous French anthropologist Claude Levi-Strauss. This was a simple society without much in the way of division of labor, organized into small bands that traditionally numbered at best a hundred people each. Occasionally if one band spots the cooking fires of another in their vicinity, they will send emissaries to negotiate a meeting for purposes of trade. If the offer is accepted, they will first hide their women and children in the forest, then invite the men of the other band to visit camp. Each band has a chief and once everyone has been assembled, each chief gives a formal speech praising the other party and belittling his own; everyone puts aside their weapons to sing and dance together—though the dance is one that mimics military confrontation. Then, individuals from each side approach each other to trade:

If an individual wants an object he extols it by saying how fine it is. If a man values an object and wants much in exchange for it, instead of saying that it is very valuable he says that it is worthless, thus showing his desire to keep it. ‘This axe is no good, it is very old, it is very dull’, he will say… [8]

In the end, each “snatches the object out of the other’s hand”—and if one side does so too early, fights may ensue.

The whole business concludes with a great feast at which the women reappear, but this too can lead to problems, since amidst the music and good cheer, there is ample opportunity for seductions (remember, these are people who normally live in groups that contain only perhaps a dozen members of the opposite sex of around the same age of themselves. The chance to meet others is pretty thrilling.) This sometimes led to jealous quarrels. Occasionally, men would get killed, and to head off this descending into outright warfare, the usual solution was to have the killer adopt the name of the victim, which would also give him the responsibility for caring for his wife and children.

The second example is the Gunwinngu of West Arnhem land in Australia, famous for entertaining neighbors in rituals of ceremonial barter called the dzamalag. Here the threat of actual violence seems much more distant. The region is also united by both a complex marriage system and local specialization, each group producing their own trade product that they barter with the others.

In the 1940s, an anthropologist, Ronald Berndt, described one dzamalag ritual, where one group in possession of imported cloth swapped their wares with another, noted for the manufacture of serrated spears. Here too it begins as strangers, after initial negotiations, are invited to the hosts’ camp, and the men begin singing and dancing, in this case accompanied by a didjeridu. Women from the hosts’ side then come, pick out one of the men, give him a piece of cloth, and then start punching him and pulling off his clothes, finally dragging him off to the surrounding bush to have sex, while he feigns reluctance, whereon the man gives her a small gift of beads or tobacco. Gradually, all the women select partners, their husbands urging them on, whereupon the women from the other side start the process in reverse, re-obtaining many of the beads and tobacco obtained by their own husbands. The entire ceremony culminates as the visitors’ men-folk perform a coordinated dance, pretending to threaten their hosts with the spears, but finally, instead, handing the spears over to the hosts’ womenfolk, declaring: “We do not need to spear you, since we already have!” [9]

In other words, the Gunwinngu manage to take all the most thrilling elements in the Nambikwara encounters—the threat of violence, the opportunity for sexual intrigue—and turn it into an entertaining game (one that, the ethnographer remarks, is considered enormous fun for everyone involved). In such a situation, one would have to assume obtaining the optimal cloth-for-spears ratio is the last thing on most participants’ minds. (And anyway, they seem to operate on traditional fixed equivalences.)

... As it appears, a kind of faith, a revealed Truth embodied in the words of great prophets who must, by definition be correct, and whose theories must be defended whatever empirical reality throws at them—even to the extent of generating imaginary unknown periods of history where something like gift economics 'must have' taken place...

Read more

All in all, I tried to communicate a more objective message, one that can mobilize developers by giving them a clear sense of what OSS is about, what the major pitfalls and difficulties are, and how to avoid them or at least lessen their influence on the project. Choices are difficult, and for a talented programmer, jumping into open source in order to prove oneself is just one possibility, and not always the best one. The level of contribution can also vary, and some level of contribution is advocated by the author as a pretty effective medicine against the dull, bureaucratized, innovation-stifling atmosphere of IT departments in many large corporations with clueless bosses and complete technological dysfunction :-). Here, not in fairy tales about bazaar models or illusive freedom via the GPL, the author sees the future of open source and free software development. Open source emerged first as a counterculture, and now, after commercial success, it can move back from the Linux gold rush at least slightly closer to its roots. Again, here the example of Free/Open/NetBSD is really inspirational, as those projects survived and have shown impressive results (in some areas they beat Linux kernel developers, as was the case with FreeBSD jails, or Linux application developers, as was the case with the OpenBSD OpenSSH project) without an injection of doping from heavyweights like IBM into the community :-). But in general I see the future of open source more in areas connected or adjacent to scripting languages (LAMP is one example; the usage of Perl in bioinformatics is another), where it really opened new horizons and beat commercial developers such as IBM and Microsoft. Yes, Microsoft.

I wrote five papers and one ebook analyzing this problem from various angles:

  1. Open Source Software Development as a Special Type of Academic Research
  2. A second look at The Cathedral and The Bazaar
  3. Bad Linux Advocacy FAQ (Raymondism FAQ)
  4. Solaris vs Linux
  5. Software realism vs software idealism

Ebook

Labyrinth of Software Freedom (BSD vs GPL and social aspects of free licensing debate)


NEWS CONTENTS

Old News ;-)


[Jul 10, 2017] Crowdsourcing, Open Data and Precarious Labour, by Allana Mayer (Model View Culture)

Jul 10, 2017 | modelviewculture.com
Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour.

by Allana Mayer, February 24th, 2016

The cultural heritage industries (libraries, archives, museums, and galleries, often collectively called GLAMs) like to consider themselves the tech industry's little siblings. We're working to develop things like Linked Open Data, a decentralized network of collaboratively-improved descriptive metadata; we're building our own open-source tech to make our catalogues and collections more useful; we're pushing scholarly publishing out from behind paywalls and into open-access platforms; we're driving innovations in accessible tech.

We're only different in a few ways. One, we're a distinctly feminized set of professions, which comes with a large set of internally- and externally-imposed assumptions. Two, we rely very heavily on volunteer labour, and not just in the internship-and-exposure vein: often retirees and non-primary wage-earners are the people we "couldn't do without." Three, the underlying narrative of a "helping" profession (essentially a social service) can push us to ignore the first two distinctions, while driving ourselves to perform more and expect less.

I suppose the major way we're different is that tech doesn't acknowledge us, treat us with respect, build things for us, or partner with us, unless they need a philanthropic opportunity. Although, when some ingenue autodidact bootstraps himself up to a billion-dollar IPO, there's a good chance he's been educating himself using our free resources. Regardless, I imagine a few of the issues true in GLAMs are also true in tech culture, especially in regards to labour and how it's compensated.

Crowdsourcing

Notecards in a filing drawer: old-fashioned means of recording metadata.

Photo CC-BY Mace Ojala.

Here's an example. One of the latest trends is crowdsourcing: admitting we don't have all the answers, and letting users suggest some metadata for our records. (Not to be confused with crowdfunding.) The biggest example of this is Flickr Commons: the Library of Congress partnered with Yahoo! to publish thousands of images that had somehow ended up in the LOC's collection without identifying information. Flickr users were invited to tag pictures with their own keywords or suggest descriptions using comments.

Many orphaned works (content whose copyright status is unclear) found their way conclusively out into the public domain (or back into copyright) this way. Other popular crowdsourcing models include gamification, transcription of handwritten documents (which can't be done with Optical Character Recognition), or proofreading OCR output on digitized texts. The most-discussed side benefits of such projects include the PR campaign that raises general awareness about the organization, and a "lifting of the curtain" on our descriptive mechanisms.

The problem with crowdsourcing is that it's been conclusively proven not to function in the way we imagine it does: a handful of users end up contributing massive amounts of labour, while the majority of those signed up might do a few tasks and then disappear. Seven users in the "Transcribe Bentham" project contributed to 70% of the manuscripts completed; 10 "power-taggers" did the lion's share of the Flickr Commons' image-identification work. The function of the distributed digital model of volunteerism is that those users won't be compensated, even though many came to regard their accomplishments as full-time jobs.

It's not what you're thinking: many of these contributors already had full-time jobs, likely ones that allowed them time to mess around on the Internet during working hours. Many were subject-matter experts, such as the vintage-machinery hobbyist who created entire datasets of machine-specific terminology in the form of image tags. (By the way, we have a cute name for this: "folksonomy," a user-built taxonomy. Nothing like reducing unpaid labour to a deeply colonial ascription of communalism.) In this way, we don't have precisely the free-labour-for-exposure/project-experience problem the tech industry has; it's not our internships that are the problem. We've moved past that, treating even our volunteer labour as a series of microtransactions. Nobody's getting even the dubious benefit of job-shadowing, first-hand looks at business practices, or networking. We've completely obfuscated our own means of production. People who submit metadata or transcriptions don't even have a means of seeing how the institution reviews and ingests their work, or, often, how their work ultimately benefits the public.

All this really says to me is: we could've hired subject experts to consult, and given them a living wage to do so, instead of building platforms to dehumanize labour. It also means our systems rely on privilege, and will undoubtedly contain and promote content with a privileged bias, as Wikipedia does. (And hey, even Wikipedia contributions can sometimes result in paid Wikipedian-in-Residence jobs.)

For example, the Library of Congress's classification and subject headings have long collected books about the genocide of First Nations peoples during the colonization of North America under terms such as "first contact," "discovery and exploration," "race relations," and "government relations." No "subjugation," "cultural genocide," "extermination," "abuse," or even "racism" in sight. Also, the term "homosexuality" redirected people to "sexual perversion" up until the 1970s. Our patrons are disrespected and marginalized in the very organization of our knowledge.

If libraries continue on with their veneer of passive and objective authorities that offer free access to all knowledge, this underlying bias will continue to propagate subconsciously. As in Mechanical Turk, being "slightly more diverse than we used to be" doesn't get us any points, nor does it assure anyone that our labour isn't coming from countries with long-exploited workers.

Labor and Compensation

Rows and rows of books in a library, on vast curving shelves.

Photo CC-BY Samantha Marx.

I also want to draw parallels between the free labour of crowdsourcing and the free labour offered in civic hackathons or open-data contests. Specifically, I'd argue that open-data projects are less (but still definitely) abusive to their volunteers, because at least those volunteers have a portfolio object or other deliverable to show for their work. They often work in groups and get to network, whereas heritage crowdsourcers work in isolation.

There's also the potential for converting open-data projects to something monetizable: for example, a Toronto-specific bike-route app can easily be reconfigured for other cities and sold; while the Toronto version stays free under the terms of the civic initiative, freemium options can be added. The volunteers who supply thousands of transcriptions or tags can't usually download their own datasets and convert them into something portfolio-worthy, let alone sellable. Those data are useless without their digital objects, and those digital objects still belong to the museum or library.

Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour, and they both enable misuse and abuse of people who increasingly find themselves with few alternatives. If we're not offering these people jobs, reference letters, training, performance reviews, a "foot in the door" (cronyist as that is), or even acknowledgement by name, what impetus do they have to contribute? As with Wikipedia, I think the intrinsic motivation for many people to supply us with free labour is one of two things: either they love being right, or they've been convinced by the feel-good rhetoric that they're adding to the net good of the world. Of course, trained librarians, archivists, and museum workers have fallen sway to the conflation of labour and identity, too, but we expect to be paid for it.

As in tech, stereotypes and PR obfuscate labour in cultural heritage. For tech, an entrepreneurial spirit and a tendency to buck traditional thinking; for GLAMs, a passion for public service and opening up access to treasures ancient and modern. Of course, tech celebrates the autodidactic dropout; in GLAMs, you need a masters. Period. Maybe two. And entry-level jobs in GLAMs require one or more years of experience, across the board.

When library and archives students go into massive student debt, they're rarely apprised of the constant shortfall of funding for government-agency positions, nor do they get told how much work is done by volunteers (and, consequently, how much of the job is monitoring and babysitting said volunteers). And they're not trained with enough technological competency to sysadmin anything, let alone build a platform that pulls crowdsourced data into an authoritative record. The costs of commissioning these platforms aren't yet being made public, but I bet paying subject experts for their hourly labour would be cheaper.

Solutions

I've tried my hand at many of the crowdsourcing and gamifying interfaces I'm here to critique. I've never been caught up in the "passion" ascribed to those super-volunteers who deliver huge amounts of work. But I can tally up other ways I contribute to this problem: I volunteer for scholarly tasks such as peer-reviewing, committee work, and travelling on my own dime to present. I did an unpaid internship without receiving class credit. I've put my research behind a paywall. I'm complicit in the established practices of the industry, which sits uneasily between academic and social work: neither of those spheres have ever been profit-generators, and have always used their codified altruism as ways to finagle more labour for less money.

It's easy to suggest that we outlaw crowdsourced volunteer work, and outlaw microtransactions on Fiverr and MTurk, just as the easy answer would be to outlaw Uber and Lyft for divorcing administration from labour standards. Ideally, we'd make it illegal for technology to wade between workers and fair compensation.

But that's not going to happen, so we need alternatives. Just as unpaid internships are being eliminated ad-hoc through corporate pledges, rather than being prohibited region-by-region, we need pledges from cultural-heritage institutions that they will pay for labour where possible, and offer concrete incentives to volunteer or intern otherwise. Budgets may be shrinking, but that's no reason not to compensate people at least through resume and portfolio entries. The best template we've got so far is the Society of American Archivists' volunteer best practices, which includes "adequate training and supervision" provisions, which I interpret to mean outlawing microtransactions entirely. The Citizen Science Alliance, similarly, insists on "concrete outcomes" for its crowdsourcing projects, to "never waste the time of volunteers." It's vague, but it's something.

We can boycott and publicly shame those organizations that promote these projects as fun ways to volunteer, and lobby them to instead seek out subject experts for more significant collaboration. We've seen a few efforts to shame job-posters for unicorn requirements and pathetic salaries, but they've flagged without productive alternatives to blind rage.

There are plenty more band-aid solutions. Groups like Shatter The Ceiling offer cash to women of colour who take unpaid internships. GLAM-specific internship awards are relatively common, but could: be bigger, focus on diverse applicants who need extra support, and have eligibility requirements that don't exclude people who most need them (such as part-time students, who are often working full-time to put themselves through school). Better yet, we can build a tech platform that enables paid work, or at least meaningful volunteer projects. We need nationalized or non-profit recruiting systems (a digital "volunteer bureau") that match subject experts with the institutions that need their help. One that doesn't take a cut from every transaction, or reinforce power imbalances, the way Uber does. GLAMs might even find ways to combine projects, so that one person's work can benefit multiple institutions.

GLAMs could use plenty of other help, too: feedback from UX designers on our catalogue interfaces, helpful tools, customization of our vendor platforms, even turning libraries into Tor relays or exits. The open-source community seems to be looking for ways to contribute meaningful volunteer labour to grateful non-profits; this would be a good start.

What's most important is that cultural heritage preserves the ostensible benefits of crowdsourcing – opening our collections and processes up for scrutiny, and admitting the limits of our knowledge – without the exploitative labour practices. Just like in tech, a few more glimpses behind the curtain wouldn't go astray. But it would require deeper cultural shifts, not least in the self-perceptions of GLAM workers: away from overprotective stewards of information, constantly threatened by dwindling budgets and unfamiliar technologies, and towards facilitators, participants in the communities whose histories we hold.


[Jan 22, 2013] Ulrich Drepper about Stallman

I knew that Ulrich and Stallman didn't get along, but this email is a must read: http://sources.redhat.com/ml/libc-announce/2001/msg00000.html "The morale of this is that people will hopefully realize what a control freak and raging maniac Stallman is. Don't trust him... Read the licenses carefully and rip out parts which give Stallman any possibility to influence your future"
Aug 15, 2001 | Sourceware.com

... And now for some not so nice things.

Stallman recently tried what I would call a hostile takeover of the glibc development. He tried to conspire behind my back and persuade the other main developers to take control so that in the end he is in control and can dictate whatever pleases him. This attempt failed but he kept on pressuring people everywhere and it got really ugly. In the end I agreed to the creation of a so-called "steering committee" (SC). The SC is different from the SC in projects like gcc in that it does not make decisions. On this front nothing changed. The only difference is that Stallman now has no right to complain anymore since the SC he wanted acknowledged the status quo. I hope he will now shut up forever.

The morale of this is that people will hopefully realize what a control freak and raging maniac Stallman is. Don't trust him. As soon as something isn't in line with his view he'll stab you in the back. *NEVER* voluntarily put a project you work on under the GNU umbrella since this means in Stallman's opinion that he has the right to make decisions for the project.

The glibc situation is even more frightening if one realizes the story behind it. When I started porting glibc 1.09 to Linux (which eventually became glibc 2.0) Stallman threatened me and tried to force me to contribute rather to the work on the Hurd. Work on Linux would be counter-productive to the Free Software course. Then came, what would be called embrace-and-extend if performed by the Evil of the North-West, and his claim for everything which lead to Linux's success.

Which brings us to the second point. One change the SC forced to happen against my will was to use LGPL 2.1 instead of LGPL 2. The argument was that the poor lawyers cannot see that LGPL 2 is sufficient. Guess who were the driving forces behind this. The most remarkable thing is that Stallman was all for this despite the clear motivation of commercialization. The reason: he finally got the provocative changes he made to the license through. In case you forgot or haven't heard, here's an excerpt: [...] For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system. This $&%$& demands everything to be labeled in a way which credits him and he does not stop before making completely wrong statements like "its variant". I find this completely unacceptable and can assure everybody that I consider none of the code I contributed to glibc (which is quite a lot) to be as part of the GNU project and so a major part of what Stallman claims credit for is simply going away.

This part has a morale, too, and it is almost the same: don't trust this person. Read the licenses carefully and rip out parts which give Stallman any possibility to influence your future. Phrases like

[...] GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

just invites him to screw you when it pleases him. Rip out the "any later version" part and make your own decisions when to use a different license since otherwise he can potentially do you or your work harm.

In case you are interested why the SC could make this decision I'll give a bit more background. When this SC idea came up I wanted to fork glibc (out of Stallman's control) or resign from any work. The former was not welcome this it was feared to cause fragmentation. I didn't agree but if nobody would use a fork it's of no use. There also wasn't much interest in me resigning so we ended up with the SC arrangement where the SC does nothing except the things I am not doing myself at all: handling political issues. All technical discussions happens as before on the mailing list of the core developers and I reserve the right of the final decision.

The LGPL 2.1 issue was declared political and therefore in scope of the SC. I didn't feel this was reason enough to leave the project for good so I tolerated the changes. Especially since I didn't realize the mistake with the wording of the copyright statements which allow applying later license versions before.

I cannot see this repeating, though. Despite what Stallman believes, maintaining a GNU project is *NOT* a privilege. It's a burden, and the bigger the project the bigger the burden. I have no interest to allow somebody else to tell me what to do and not to do if this is part of my free time. There are plenty of others interesting things to do and I'll immediately walk away from glibc if I see a situation like this coming up again. I will always be able to fix my own system (and if the company I work for wants it, their systems).
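To make Drepper's warning concrete: the clause he objects to is part of the standard FSF license header found at the top of many C source files. Below is a minimal sketch of the edit he recommends; the pinned wording is an illustration of the idea, not a quote from the email:

    /*
     * The standard FSF boilerplate grants "either version 2.1 of the
     * License, or (at your option) any later version."  Following
     * Drepper's advice means pinning the version instead, e.g.:
     *
     *   This library is free software; you can redistribute it and/or
     *   modify it under the terms of the GNU Lesser General Public
     *   License, version 2.1 only, as published by the Free Software
     *   Foundation.
     *
     * With the "any later version" clause ripped out, no future
     * revision of the license can be applied to this code without the
     * author's explicit consent.
     */

This is the same approach the Linux kernel takes with its "GPLv2 only" licensing.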

[Sep 10, 2011] Michael Hart (1947 - 2011): Prophet of Abundance, by Glyn Moody

September 08, 2011 | Open Enterprise

I've never written an obituary before in these pages. Happily, that's because the people who are driving the new wave of openness are relatively young, and still very much alive. Sadly, one of the earliest pioneers, Michael Hart, was somewhat older, and died on Tuesday at the age of just 64.

What makes his death particularly tragic is that his name is probably only vaguely known, even to people familiar with the areas he devoted his life to: free etexts and the public domain. In part, that was because he was modest, content with only the barest recognition of his huge achievements. It was also because he was so far ahead of his times that there was an unfortunate disconnect between him and the later generation that built on his trailblazing early work.

To give an idea of how visionary Hart was, it's worth bearing in mind that he began what turned into the free etext library Project Gutenberg in 1971 - fully 12 years before Richard Stallman began to formulate his equivalent ideas for free software. Here's how I described the rather extraordinary beginnings of Hart's work in a feature I wrote in 2006:

In 1971, the year Richard Stallman joined the MIT AI Lab, Michael Hart was given an operator's account on a Xerox Sigma V mainframe at the University of Illinois. Since he estimated this computer time had a nominal worth of $100 million, he felt he had an obligation to repay this generosity by using it to create something of comparable and lasting value.

His solution was to type in the US Declaration of Independence, roughly 5K of ASCII, and to attempt to send it to everyone on ARPANET (fortunately, this trailblazing attempt at spam failed). His insight was that once turned from analogue to digital form, a book could be reproduced endlessly for almost zero additional cost - what Hart termed "Replicator Technology". By converting printed texts into etexts, he was able to create something whose potential aggregate value far exceeded even the heady figure he put on the computing time he used to generate it.

Hart chose the name "Project Gutenberg" for this body of etexts, making a bold claim that they represented the start of something as epoch-making as the original Gutenberg revolution.

Naturally, in preparing to write that feature for LWN.net, I wanted to interview Hart to find out more about him and his project, but he was very reluctant to answer my questions directly - I think because he was uncomfortable with being placed in the spotlight in this way. Instead, he put me on his mailing list, which turned out to be an incredible cornucopia of major essays, quick thoughts, jokes and links that he found interesting.

In one of those messages, he gave a good explanation of what he believed his Project Gutenberg would ultimately make possible:

Today we have terabyte drives for under $100 that are just about the same size as the average book.

10 years ago, in 1999, most people were using gigabytes in their systems rather than terabytes.

10 years before that, in 1989, most people used megabytes.

10 years before that, in 1979, most people used kilobytes.

My predictions run up to about 2021, which would be around the 50th anniversary of that first eBook from 1971.

I predict there will be affordable petabytes in 2021.

If there are a billion eBooks by 2021, they should fit the new petabytes just fine, as follows:

Premise #1:

The average eBook in the plainest format takes a megabyte.

Premise #2:

There will be a billion eBooks in 2021 or shortly after.

Therefore:

A billion eBooks at a megabyte each takes one petabyte.

You will be able to carry all billion eBooks in one hand.
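Hart's premises multiply out exactly (a back-of-the-envelope check, using decimal units, where a petabyte is taken as 10^15 bytes):

    10^9 eBooks x 10^6 bytes/eBook = 10^15 bytes = 1 petabyte

So a single affordable petabyte drive, by his extrapolation, would indeed hold the billion plain-text eBooks he predicted.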

As this makes clear, Hart was the original prophet of digital abundance, a theme that I and others are now starting to explore. But his interest in that abundance was not merely theoretical - he was absolutely clear about its technological, economic and social implications:

I am hoping that with a library this size that the average middle class person can afford, that the result will be an even greater overthrow of the previous literacy, education and other power structures than happened as direct results of The Gutenberg Press around 500 years ago.

Here are just a few of the highlights that may repeat:

1. Book prices plummet.

2. Literacy rates soar.

3. Education rates soar.

4. Old power structures crumbles, as did The Church.

5. Scientific Revolution.

6. Industrial Revolution.

7. Humanitarian Revolution.

Part of those revolutions was what Hart called the "Post-Industrial Revolution", where the digital abundance he had created with Project Gutenberg would be translated into the analogue world thanks to more "replicators" - 3D printers such as the open source RepRap:

If we ... presume the world at large sees its first replicator around 2010, which is probably too early, given how long it took most other inventions to become visible to the world at large [usually 30 years according to thesis by Madelle Becker], we can presume that there will be replicators capable of using all the common materials some 34.5 years into the future from whatever time that may actually be.

Hence the date of 2050 for the possibility of some replicators to actually follow somebody home: if that hasn't already been made illegal by the fears of the more conservative.

Somewhere along the line there will also be demarcations of an assortment of boundaries between replicators who can only make certain products and those who can make new replicators, and a replicator that could actually walk around and follow someone, perhaps all the way home to ask if it could help.

The fact that it was ~30 years from the introduction of eBooks to those early Internet pioneers to the time Google made their big splashy billion dollar media blitz to announce their eBook project without any mention of the fact that eBooks existed in any previous incarnation, simply is additional evidence for an educated thesis mentioned above, that had previously predicted about a 30 year gap between the first public introductions and awareness by the public in general.

So, when you first start to see replicators out there set your alarm clocks for ~30 years, to remind you when you should see, if they haven't been made illegal already, replicators out for a walk in at least some neighborhoods.

Notice the comment "if that hasn't already been made illegal". This was another major theme in Hart's thinking and writings - that copyright laws have always been passed to stop successive waves of new technologies creating abundance:

We keep hearing about how we are in "The Information Age," but rarely is any reference made to any of four previously created Information Ages created by technology change that was as powerful in the day as the Internet is today.

The First Information Age, 1450-1710, The Gutenberg Press, reduced the price of the average books four hundred times. Stifled by the first copyright laws that reduced the books in print in Great Britain from 6,000 to 600, overnight.

The Second Information Age, 1830-1831, Shortest By Far The High Speed Steam Powered Printing Press Patented in 1830, Stifled By Copyright Extension in 1831.

The Third Information Age, ~1900, Electric Printing Press Exemplified by The Sears Catalog, the first book owned by millions of Americans. Reprint houses using such presses were stifled by the U.S. Copyright Act of 1909.

The Fourth Information Age, ~1970, The Xerox Machine made it possible for anyone to reprint anything. Responded to by the U.S. Copyright Act of 1976.

The Fifth Information Age, Today, The Internet and Web. Hundreds of thousands, perhaps even a million, books from A to Z are available either free of charge or at pricing, "Too Cheap To Meter" for download or via CD and DVD. Responded to by the "Mickey Mouse Copyright Act of 1998," The Digital Millennium Copyright Act, The Patriot Act and any number of other attempted restrictions/restructures.

Hart didn't just write about the baleful effect of copyright extensions, he also fought against them. The famous "Eldred v Ashcroft" case in the US that sought to have such unlimited copyright extensions declared unconstitutional originally involved Hart. As he later wrote:

Eldred v Ashcroft was previously labeled as in "Hart v Reno" before I saw that Larry Lessig, Esquire, had no intention of doing what I thought necessary to win. At that point I fired him and he picked up Eric Eldred as his current scapegoat du jour.

As this indicates, Hart was as uncompromising in his defense of the public domain as Stallman is of free software.

Most of his best writings are to be found in the emails that were sent out to his mailing list from time to time, although there is a Web page with links to a couple of dozen essays that are all well-worth reading to get a feeling for the man and his mind. There are also more of his writings on the Project Gutenberg site, as well as a useful history of the project.

However, it's hugely regrettable that Hart never published his many and wide-ranging insights as a coherent set of essays, since this has led to a general under-appreciation of the depth of his thinking and the crucial importance of his achievements. Arguably he did more for literature (and literacy) than any Nobel Prize laureate for that subject ever will.

Fortunately, Project Gutenberg, which continues to grow and broaden its collection of freely-available texts in many languages, stands as a fitting and imperishable monument to a remarkable human being who not only gave the world great literature in abundance, but opened our eyes to the transformative power of abundance itself.


[May 02, 2011] The Institutions of Open Source Software

A tale about the bazaar

This paper presents a cautionary tale about the prevailing bazaar metaphor, both when applied to the analysis of F/LOS activities and when implemented in them.

We have focussed on a community where the conjugation of openness and growth has taken its toll in output delivery schedules and social stability. As we have shown, these problems have been tackled through institutional strategies that alter the balance between openness and stability, but that also lead to unintended consequences, such as development stagnation or decreases in participation. These observations suggest caution in employing an idealised model of the bazaar as an analytical metaphor: norms, values, membership processes and structures for governance appear to be essential in order to understand the productive and social dynamics of a project. To the extent that open source ways of working constitute an organisational innovation, it is an innovation with structural implications and contents.

Having established the relevance of focussing on F/LOS institutions, we have illustrated how the epistemic community and legitimate peripheral concepts, and their extension to include the emergence of hierarchies and processes governing authority distribution, constitute useful tools for analysis of the evolution of the Debian community. In the case of Debian, this latter process, the delegation of power based on trust, is constrained by the decentralisation of the structures through which development takes place. This explains the difficulties faced by the Debian project leader when trying to exercise any sort of meaningful authority.

[May 02, 2011] Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution Context and Personal Values, by Oreg and Nov

Specifically, we suggest that people's primary motivation to contribute to open source initiatives of one kind (e.g., software) may be different than their motivation to contribute to another (e.g., content). Second, we argue that, based on their personal value system, some people are more likely than others to express certain contribution motivations. Intrinsic motivations, on the other hand, tend to be terminal in that they emphasize inherent satisfactions rather than their separable consequences (Ryan & Deci, 2000). They include motivations such as altruism (Zeityln, 2003), fun (Torvalds & Diamond, 2001), reciprocity (McLure-Wasko & Faraj, 2005), intellectual stimulation, and a sense of obligation to contribute (Bryant et al., 2005; Lakhani & Wolf, 2005).

Peer to Peer Production: Revolution or Subjectivation? by Dr Phoebe V. Moore, Lecturer in International Relations and International Political Economy

Sept 11, 2010 | University of Salford, Greater Manchester

Abstract

Scholars of media ecology, and what Fuller has called 'media ecologies' (2005) are interested in how real social change is possible, and in fact necessary, in the era of neoliberal digital media. By questioning hierarchical formations in arrangements of lives and production and the lines of force for interactivity, which have historically been uni-directional and fundamentally restricting, the peer to peer production movement is an example of the type of media ecology that promises 'deep' social, as well as mental, and environmental (in the sense of places of work and production), change. Do the networked commons emerging from peer to peer production founded in the free software and hardware arena, and the ecologies of productive and artistic cooperation therein, pose a resilient threat to capitalism, from within capitalism?

====

Free Software may be viewed as a social movement while Open Source is perhaps a development methodology, but it is not always necessary to isolate analysis to one or the other, firstly due to the extensive overlap in software communities, and also because their rhizomatic roots emerge from a shared intellectual and moral response to the exploitation of markets by powerful firms (see Elliot and Scacchi 2004). This piece queries whether the behaviours of collaborative software producers, as well as the activities in the hardware production communities found in FabLabs that release playbots and other blueprints for machine replication, as well as agricultural and construction initiatives, can indeed be perceived as revolutionary.

A growing population of over-qualified, highly skilled individuals now work in the 'internal margins', or the internal ghettos that line the sidestreets of the market for knowledge workers, resulting from a growing lack of stable employment. As a result of the emerging impermanence of work, and as knowledge becomes increasingly commodified, several contradictions have emerged. This structural violence is accompanied by a still-hegemonic idea that full time, permanent work is desirable, an idea which persists despite the rhetoric otherwise, particularly within the 'employability' discourse in policy. Assumptions extend into the realm of people's abilities and skill, despite the difficulties that knowledge work poses for traditional distinctions between the objective or technical skill needed for task-related work and the subjective, social capabilities that are now increasingly measured by employers in a 'war for talent' (Brown and Hesketh 2004, 65-88).

Bauwens (2009) separates the terms peer production, peer governance, and peer property to give a 'beginner's guide' to the political economy of peer to peer production:

1) peer production: wherever a group of peers decided to engage in the production of a common resource

2) peer governance: the means they choose to govern themselves while they engage in such pursuit

3) peer property: the institutional and legal framework they choose to guard against the private appropriation of this common work; this usually takes the form of non-exclusionary forms of universal common property, as defined through the General Public License, some forms of the Creative Commons licenses, or similar derivatives.

[May 02, 2011] Between IPRs and Public-Funded Research: Is a Community just a "Fancy" Science? by Francesco Rullani

May, 2009 | Copenhagen Business School
My findings show that knowledge-intensive communities and science have the same nature, but a different cost/benefit trade-off. ... The finding is that it is extremely difficult for a science-like community to be able to attract enough researchers to survive. Since we observe the exponential growth of the FLOSS community and the birth and success of other knowledge-intensive communities such as user-innovator online communities (Jeppesen and Frederiksen, 2006), the conclusion is that the gap between reality and theory can be closed only if we admit that the FLOSS community, like every other knowledge-based community, must be something more than a simple science-like institution.

... ... ... It was, in fact, in this literature that the question was asked whether, and to what extent, the community model simply resembles the academic world. Bezroukov (1999a; 1999b) was among the first authors to identify a possible homomorphism between the two institutions in terms of the produced outcome, the involved incentives, the typology of teamwork and organization of collaboration, and the way in which the activity is financially supported. In particular, Bezroukov stresses the similar role of financial institutions, such as research institutes, universities, or private research labs, in providing individuals with the funds to undertake their activities in the directions they desire; and the similarity between the rules upon which science is based and the practices typical of the FLOSS community, which are also based on a public debate where priority over solutions and peer review are the crucial mechanisms used to regulate and direct individuals' activities (Dasgupta and David, 1987; Lee and Cole, 2003). Kelty (2001) stresses the same similarities.

On the one hand, he states that "[…T]he funding that supports many projects (in most cases indirectly) comes from those well-known scientific institutions" (Kelty, 2001, online). On the other hand, he also argues that the structure of incentives and the organization of the collaborative effort of developers and scientists are very close to one another, both based on rules connecting the openness of the results to the individual pursuit of recognition and reputation. Mustonen (2003) shares the same point of view: "The essential property of the copyleft licensing scheme [i.e. GPL] is that it creates a particular incentive structure… [that] has properties that are equivalent to the incentive structures of scientific communities" (Mustonen, 2003, p. 104).

Following a similar path, Bonaccorsi and Rossi (2003b) recall the origins of FLOSS inside the university labs to claim that "Emerging as it does from the university and research environment, the movement adopts the motivations of scientific research" (Bonaccorsi and Rossi, 2003b, p. 1245). Dalle and David (2003) also share a similar point of view, stressing the parallelism between the FLOSS institutional setting and the rules of "open science," where "the norm of openness is incentive compatible with a collegiate reputational reward system based upon accepted claims to priority" (Dalle and David, 2003, pp. 3, 4). A similar point is made by Raymond (1998c), who suggests that the correspondence between the two phenomena is just the outcome of the fact that the scientific and the FLOSS enterprises had simply given the same answer to the same problem of collective knowledge production. However, similarities between science and FLOSS do not imply that the two systems simply coincide. The history of the FLOSS community can be useful to understand why differences are more relevant than the previous picture may seem to suggest. (For an overview of several aspects of the FLOSS scene and history see Giuri et al. (2002); for an idea of the cultural environment in which this community developed see Himanen et al. (2001) and Raymond (1998a).)

FLOSS began during the eighties, when Richard Stallman founded the Free Software Foundation (FSF). Stallman was a researcher at the Artificial Intelligence Lab at MIT, but during those years software development in the scientific environment began to be influenced by the enforcement of IPRs (Williams, 2002). Hardware and software were provided to the Lab, but with temporary non-disclosure clauses. In opposition to this practice, Stallman decided to create a new operating system, GNU, which had to be and remain totally open and free. To do that, he decided to leave MIT and create an organization outside that scientific institution. The GNU community grew fast, producing software and creating ideas, values and programming principles. Nowadays the instruments created to guarantee software openness and freedom, such as the General Public License (GPL), are widespread in the scientific environment as well, but the community is still something different from the academic structure. Had the FLOSS experience been just a new scientific enterprise, there would have been no need for the GPL and for an external organization such as the FSF. In fact, in the FLOSS-EU survey (Ghosh et al., 2002) only a bit more than 30% of the surveyed developers are students or university employees, and only 6% of the projects hosted on SourceForge (www.sourceforge.net, January 2003) are intended for "Science/Research" or "Education" audiences. It is then true that, as Kelty (2001) argues, the FLOSS community was born in the scientific environment and culture; but the perspective offered here suggests that it moved out of it and grew autonomously, relying only partly on the academic structure.

Given this uncertainty about the nature of FLOSS, and by extension of knowledge-intensive communities, we can ask: is the nature of the FLOSS community the same as that of science? A similar question is raised by David et al. (2001), who state: "This analogy with open science research networks […] calls [for the] understanding [of] the conditions under which voluntary, open source software development can co-exist in productive balance with proprietary software development" (David et al., 2001, online). Also Giuri et al. (2002) apply a similar perspective, arguing that we need to understand "whether the [FL]OSS is different from proprietary software because it is closer to open science" (Giuri et al., 2002, p. 82).

[May 02, 2011] Cowboy coding

Wikipedia

Cowboy coding is a term used to describe software development where programmers have autonomy over the development process. This includes control of the project's schedule, languages, algorithms, tools, frameworks and coding style.

[May 01, 2011] Reshaping Narrow Law and Art: The Expert's Dilemma

While the review is naive, and it is sad that its author lumps me together with economists, "undisciplined Preobrazhenskyism" is an interesting touch :-). Something new so many years after the publication of the paper...
A common problem faced by experts on a particular subject is hostility for ideological reasons. I've paid a lot of attention to this problem, and I think it's especially severe in economics. Economics, after all, professes to explain the whole of the social sciences using ideas that are basically pure deduction. The only other field of study I can think of that does this is theology. Economics requires a set of basic premises that are assumed to be immutably true, and while these premises are few in number, a vast body of assumptions is derived from them. These include the proposition that for-profit, privately-owned enterprises tend to allocate resources correctly, and that consumers tend to make rational and free choices about how many hours they work per year, how much they will spend on their home, whether they will take public transit to work, or any other consumption decision. Economics, because of the deductive foundation of its judgment, is of all the branches of study the most ideological.

Computer science is another field of study that tends to be very ideologically bound, since critiques of its decisions suffer the same problems as in economics: the web of human motives and abilities is so complex that it relies mainly on deduction from basic principles. A common defense is, "In technology, something either works or it doesn't"; because of this, IT is supposed to be liberated from dependence on induction. In my experience, there is almost no non-trivial technical decision that is so bad that it cannot be made to "work" to some decision-maker's satisfaction.

Of course I do not want to imply that this proves economics or computer science are bad disciplines, or that their practitioners are lying quacks. I am just pointing out a difficulty that confronts both fields. I think it is important for practitioners to acknowledge this (which is why, when I was writing about Unix, I was so impressed by Eric Raymond's books and essays). In fact, ideology is a common tool that allows people to form orderly and structured judgments. It is very frequently used as a substitute for thinking, but it is so helpful to public thinking and problem-solving that it is useful nonetheless. Therefore, I cannot bemoan the presence of ideology, either. Even if I thought it was an unmitigated bad, I should still have to concede that it is a part of life and shall remain so.

At the same time, however, we often see occasions when an expert discovers facts that challenge the foundational beliefs of an ideology. The expert is a loyal supporter of the ideology, but he cannot deny the evidence. The example that comes to mind is Eric Raymond's essay, "The Cathedral and the Bazaar" (CatB; discussed here). I read the essay, then several responses that Mr. Raymond had graciously linked to at his essay page. One response to CatB provoked this aggravated rebuttal from Raymond:
Nikolai Bezroukov's article in First Monday [critiquing CatB], unfortunately, adds almost nothing useful to the debate. Instead, Mr. Bezroukov has constructed a straw man he calls vulgar Raymondism which bears so little resemblance to the actual content of my writings and talks that I have to question whether he has actually studied the work he is attacking. If vulgar Raymondism existed, I would be its harshest critic.

I wanted to like this paper. I wanted to learn from it. But I began to realize this was unlikely when, three paragraphs in, I tripped over the following: "he promoted an overoptimistic and simplistic view of open source, as a variant of socialist (or, to be more exact, vulgar Marxist) interpretation of software development."

There are many sins of which I can reasonably be accused, but the imputation of vulgar Marxism won't stand up to even a casual reading of my papers. In CatB, I analogize open-source development to a free market in Adam Smith's sense and use the terminology of classical (capitalist) economics to describe it. In HtN I advance an argument for the biological groundedness of property rights and cite Ayn Rand approvingly on the dangers of altruism.
The first point I want to make here is that I would think long and hard before I made a facial challenge of anything Mr. Raymond said about computer software development. He has qualifications that are hard to match, let alone exceed. His knowledge of computer science is huge, he's devoted a lot of time to pondering its organizational and cultural implications, and he has a fair understanding of many other fields besides that one. Also, as it happens, he's right: even a casual reading of his work doesn't allow anyone to imagine that he's a socialist.

So I would say he's an expert, and also that he's ideologically compatible with the prevailing economic system and its ideological proclivities. If a capitalist party membership book existed, his would be in good order. And yet, his observations might be carelessly construed to negate the ideal intellectual property regime:

Nikolai Bezroukov: In a really Marxist fashion, Eric Raymond wrote in Homesteading the Noosphere "ultimately, the industrial-capitalist mode of software production was doomed to be out competed from the moment capitalism began to create enough of a wealth surplus for many programmers to live in a post-scarcity gift culture." I used to live in one society that claimed to "outcompete" capitalism long enough to be skeptical.
I have familiarity with the practice of Marxist party congress criticisms, having read much of E.H. Carr's history of the Bolshevik Revolution; and I have to say that Bezroukov's article really does sound as if he imagines he's criticizing Raymond for taking the "line" of (say) "undisciplined Preobrazhenskyism" or something. The fact that Raymond actually has a huge volume of objective, reliable experience with the matter he's writing about means nothing to Bezroukov: Raymond's somehow gone pink.

Bezroukov is not a dummy, and he has his own considerable credentials. My own suspicion was that he needed to "prove" his own ideological reliability by attacking someone who had been insufficiently guarded in his corporation-unfriendly observations. As a minor functionary in the actual institutional apparatus of the capitalist state-corporation nexus, he had to attack an attacker of Microsoft - and make him menacing. (Raymond never wrote anything like "Microsoft must be destroyed.") That attended to, he could discuss open source software as a sociological phenomenon. But by attacking Raymond as an ideologically unsafe line-wobbler, he illustrated that absolutely no one is safe. One must toe the official line, regardless of what one has seen, or face the consequences.

This is the Expert's Dilemma.


[Mar 08, 2011] Conway's Law - Wikipedia, the free encyclopedia

Conway's Law is an adage named after computer programmer Melvin Conway, who introduced the idea in 1968:

...organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations.

Although sometimes construed as humorous, Conway's Law was intended as a valid sociological observation. It is based on the reasoning that in order for two separate software modules to interface correctly, the designers and implementers of each module must communicate with each other. Therefore, the interface structure of a software system will reflect the social structure of the organization(s) that produced it.

[Mar 08, 2011] Brooks's law - Wikipedia, the free encyclopedia

While open source projects rarely have schedules, nonetheless they can reach a state in which they are called "late" by their sponsors, participants, and users. In such a case, Brooks's law ("adding manpower to a late software project makes it later") surely applies, for exactly the reasons that Brooks enumerates: time for the new developers to become productive and increased communication overhead. In addition, unless there are strict controls, newcomers may reduce the productivity of experienced developers by checking in buggy or inappropriate changes, which then have to be backed out.

[Jun 21, 2010] Why Linux owes (part of) its success to Microsoft ZDNet By Paul Murphy

December 16, 2008 | http://www.zdnet.com

One of the things that characterizes humanity is our ability to adapt quickly to external change - it's the key reason, for example, that humans aren't confined to one climatic zone on the planet.

On the other hand, we're individually often really bad at adapting to external change - and I suspect that everyone who's worked in IT for more than few months has been personally guilty of refusing to learn to use a new tool for a job simply because you felt comfortable doing it with the older, and far less efficient or effective, tool.

When IBM needed a basic OS that could run effectively on Intel's 8088 processor, the answer Paterson came up with for QDOS was to minimize CP/M's overhead by stripping the kernel/shell layering - effectively eliminating Kildall's commitment to portability and multi-processing to produce something tailored specifically to the limitations of one particular piece of hardware.

Similarly, when Linus Torvalds decided to build a "free Unix for the 386", he stripped away Tanenbaum's layered micro-kernel and hardwired Intel's approach to interrupt management directly into what has since become the largest monolithic kernel still in use.

Here's part of what Tanenbaum had to say about this in 1992:

…I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:

  1. MICROKERNEL VS MONOLITHIC SYSTEM

    Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.

    The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.

    While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

    MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.

  2. PORTABILITY

    Once upon a time there was the 4004 CPU. When it grew up it became an 8008. Then it underwent plastic surgery and became the 8080. It begat the 8086, which begat the 8088, which begat the 80286, which begat the 80386, which begat the 80486, and so on unto the N-th generation. In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.

    MINIX was designed to be reasonably portable, and has been ported from the Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016. LINUX is tied fairly closely to the 80x86. Not the way to go.

It's hard to believe that Tanenbaum was wrong when he said that tying Linux so closely to x86 was a mistake - and yet, x86 survives as the volume market leader and Linux has grown both with it and because of it.

Today, however, and despite having been ported to lots of other architectures, the kernel is still as closely tied to x86 as ever: try running Linux on the T52XX, for example, and you'd swear the SPARC system was hopelessly obsolete and somewhat slower than frozen molasses - because changing the Linux kernel to make effective use of a CoolThreads processor would essentially require a full rewrite.

So how did this come about? Simple: Microsoft's near monopoly created the conditions needed for Torvalds' strategy to succeed - because Microsoft's success depended on convincing customers that frequent adaptation to small changes reduces total risk relative to significant but infrequent change, and it resulted in, among many other things, a trailing cloud of low-cost, Linux-compatible hardware and the resentments needed to motivate use of it.

Torvalds gets a lot of credit both for being among the first to exploit open source ideas and for his effective management of the Linux kernel development process - but I think he should also be credited for being among the first to see that a second best kernel design optimized for a third rate processor would offer the right combination for market success.


[May 20, 2008] Linux - Governance - P2P Foundation

Nicholas Carr on the Cathedral that governs the Bazaar

"the open source model - when it works effectively - is not as egalitarian or democratic as it is often made out to be. Linux has been successful not just because so many people have been involved, but because the crowd's work has been filtered through a central authority who holds supreme power as a synthesizer and decision maker. As the Linux project has grown, Torvalds has gathered a hierarchy of talented software programmers around him to help manage the crowd and its contributions. It's not a stretch to say that the Linux bureaucracy forms a cathedral that coordinates the work of the bazaar and molds it into a unified product." (http://www.strategy-business.com/press/article/07204?pg=1)

[May 12, 2008] A Second Look at the Cathedral and Bazaar by Nikolai Bezroukov, discussed by Pat Farrell (Meaningless Drivel forum at JavaRanch)

Interesting paper. It's now "old" in that it was published in 1999. A couple of quick comments:

On communication:

I have taught Software Management using Brooks's The Mythical Man-Month as a text. I believe in it. It may be wrong, but it's held up for 25 years, and it still rings true to me.

Nothing about using the Internet for communications, as claimed by the Cathedral and the Bazaar (CatB), really changes the essence of Brooks's observation. Nine women cannot deliver a baby in one month. Some things just don't scale. What we know from thirty or forty years of software engineering is that technology is not the problem that makes large projects "hard"; it's the human-to-human communication.

I believe that fewer good people make projects run faster, not more people. And I believe that if you are going to have many people, you need large amounts of eyeball-to-eyeball time. With perhaps two or three percent of that time over beer.

On "Given enough eyeballs, all bugs are shallow"

I don't buy this one at all. For many, perhaps even most, bugs, more eyeballs can help. But complex interactions, race conditions, inverted interrupts and other stuff that happens in real-world systems as you push for performance and scalability are hard. They are essentially hard. It takes a lot of time for an experienced artisan to get into the zone and start to understand them. The world simply does not have 'enough eyeballs' attached to expert artisans, so some bugs are hard.

[Sep 27, 2005] The dotCommunist Manifesto by Eben Moglen

A Spectre is haunting multinational capitalism--the spectre of free information. All the powers of "globalism" have entered into an unholy alliance to exorcize this spectre: Microsoft and Disney, the World Trade Organization, the United States Congress and the European Commission.

Where are the advocates of freedom in the new digital society who have not been decried as pirates, anarchists, communists? Have we not seen that many of those hurling the epithets were merely thieves in power, whose talk of "intellectual property" was nothing more than an attempt to retain unjustifiable privileges in a society irrevocably changing? But it is acknowledged by all the Powers of Globalism that the movement for freedom is itself a Power, and it is high time that we should publish our views in the face of the whole world, to meet this nursery tale of the Spectre of Free Information with a Manifesto of our own.

[Sep 27, 2005] [PDF] Info-communism? A Critique of the Emerging Discourse on Property Rights by Milton Mueller (Syracuse University School of Information Studies).

A very interesting paper that goes more deeply into the connection of "software commonists" to communists. The author's discussion of the GPL and Raymondism is much weaker than this part of the article.

Intellectual property has emerged as the central communication-information policy issue of our time. This paper analyzes the legal and political discourse on property rights in information that has followed the success of several related movements: free software, opposition to software patents, copyright resistance, and various other methods of promoting "information commons," including the debate over spectrum rights. By challenging the concept of exclusive property rights over information, this intellectual and social movement proposes to transform some basic economic institutions of the information society. (For a summary, see Kranich, 2004.) As the movement has gained momentum, the word "communist" has re-entered political discourse. The rhetorical exchanges around "info-communism" are dogged by metaphors and parallels drawn from industrial-era communism. Both "leftists" and "rightists" in the ICT policy space are getting carried away with the metaphor. Free software advocate Eben Moglen pens a "dotCommunist Manifesto."[1]

A Forbes columnist accuses Lessig of being a "radical" who advocates "stealing intellectual property."[2]

Bill Gates accuses free software adherents of being "modern-day communists."

Dan Hunter, a writer who is supportive of the movement, nevertheless argues that "the culture war is a Marxist war" and repeatedly refers to the copyright resistance as "Marxism-Lessigism."[3]

This is not an inconsequential debate. Labels and frames are important in shaping social movements. A movement that is self-defined, or allows itself to be defined by, labels as redolent as "communist" will follow a different social and political trajectory than one that is defined differently.

A number of powerful symbolic and political factors conspire to fuel this turn in the discourse. Since industrial-era manifestations of communism have been thoroughly discredited, info-communism offers hope and revitalization to those on the left who could never quite bring themselves to accept the economic failure of social systems that attempted to abolish markets based on the exchange of private property.

The concept is also useful to their rightist counterparts, who hope to hang the albatross of old communism around the neck of emerging social movements of information.

Thus, a "new information right," capitalizing on rights-holding businesses willing to put their money where their mouthpieces are, is responding withknee-jerk defenses of intellectual property. (Liebowitz and Margolis, 2004; DeLong, 2004)1Moglen, http://emoglen.law.columbia.edu/publications/dcm.html January 2003.2Stephen Manes, "The Trouble with Larry," Forbes Magazine 29 March 2004.3Hunter, 2004. see also Legal Affairs, Nov-Dec.2004. http://www.legalaffairs.org/issues/November-December-2004/feature_hunter_novdec04.html

An obvious danger of this dialectic is a polarization of the debate over property rights in the information economy into a caricature of the 19th century debate over communism.

If Marxism is to be invoked, we should recall the opening sentences of Marx's The Eighteenth Brumaire of Louis Bonaparte (1852).

"Hegel remarks somewhere that all great, world-historical facts and personages occur, as it were, twice. He has forgotten to add: the first time as tragedy, the second as farce."

This paper has three goals. The first is to salvage the rationality of the debate over the nature of property institutions in information and communications, by critically examining the metaphors and parallels to old-line communism.

The second goal is to identify and call attention to a deep-seated tension within the information left that contributes mightily to this framing. In some cases the movement pushes shared or nonexclusive rights on practical, voluntaristic grounds, and presents them to the public as a choice they can exercise freely, based on their own desires or self-interest.

In other instances non-exclusivity is urged upon people as an ethical or moral imperative, and considerable effort is expended to use various forms of leverage to push people into that alternative. The information left needs to reflect more deeply on the implications of relying on either type of appeal. In making this argument, the paper notes that the need to motivate and sustain collective action plays an important role in shaping responses to this problem. Ethical and spiritual appeals to communal property provide stronger glue for holding together a social movement. But in this case they are also more prone to legitimate charges that the movement is "communist" and hostile to private property in any form. My third goal is to take sides regarding the dichotomy described above.

I will argue that it is impossible for moral/ethical arguments alone to justify either common or private property in all cases and that a free, contractually based economy offers the best hope of finding the right mix. Drawing on property rights theory, I argue that there is an important place for both models (commons and private property) in the present and future economy, and that there is often a dynamic interaction between the two that is superior to any attempt to push the economy into one of the two extremes.

The movement should view information commons as a vital and constructive part of a free and open market economy, not as its enemy. As Merges (2004) has argued, contractual arrangements to build commons and nonexclusive access to informational resources can be seen as a rational market response to the legal and political overreaching of rent-seeking copyright and patent holders.

Red-Baiting the Commonists: Frame or be Framed

It didn't start with Bill Gates. Back in 1998, a Slashdot feature article was already noting that "in the debate about open source software the term 'communism' comes up quite frequently."[4]

About three and a half years later (in 2001), Eben Moglen was using Marx's language of class struggle and of revolutionary changes in the "means of cultural production" to produce what he called a "dotCommunist Manifesto," which he eventually turned into the turgid piece that sits on the web.

What happened on January 6, 2005 merely raised the media profile of a discussion that had been going on for years. On that date, CNET news published the following exchange, in which Bill Gates was asked about movements to reform intellectual property laws:

REPORTER: In recent years, there's been a lot of people clamoring to reform and restrict intellectual-property rights. It started out with just a few people, but now there are a bunch of advocates saying, "We've got to look at patents, we've got to look at copyrights." What's driving this, and do you think intellectual-property laws need to be reformed?

GATES: No, I'd say that of the world's economies, there's more that believe in intellectual property today than ever. There are fewer communists in the world today than there were. There are some new modern-day sort of communists who want to get rid of the incentive for musicians and moviemakers and software makers under various guises. They don't think that those incentives should exist.

[Mar 28, 2005] Source Access Direct Benefits? by Jason Matusow, program manager of the Shared Source Initiative at Microsoft Corp.

There is a Russian proverb: "An objective view of a person is more common among people who do not like this person" :-) In this sense Microsoft's views on open source are very valuable. The key idea of this note is that benefits are indirect, as "transparency increases trust." It also contains an interesting argument that "The availability of source code does not deliver direct benefit to the end user," which is true if and only if the user is a non-programmer and/or we are talking about large and complex systems (Linux, gcc, Gnome, etc.). This statement is probably wrong for other cases.
Here is the full text (Jason Matusow blog contains several other interesting notes):

On Thursday last week I met with a delegation representing Eastern European members of the press (about 40 of them) at our facilities in Redmond. We talked through issues of source licensing and how they are perceived in their respective countries.

One of the questions asked struck me as a particularly simple, and good, question: How does this help end users?

I would like to say that source licensing (open, shared, whatever) has great direct benefit for the end user – but in truth it doesn't. For the person sitting in front of a machine trying to get work done, who has no development experience, there is no direct benefit from source licensing. All value is indirect in nature.

The argument that source availability fundamentally improves the quality of a given piece of software is a specious argument. It can help, but it certainly isn't a compulsory result.

In the same vein of thought is the assumption that the software would be more secure because of source availability. Both arguments have the same flaw.

Just because the code is there does not mean people are looking at it. More importantly, it does not mean that the right people are looking at it. And, as the code base matures and evolves, there is no guaranteed rigor in testing or ongoing compatibility resulting exclusively from source access.

Where does this leave us? The availability of source code does not deliver direct benefit to the end user. Direct benefits are reserved for the development community (individual and organizational) and business strategy.

This posting is provided "AS IS" with no warranties, and confers no rights.

O'Reilly: Whence the Source: Untangling the Open Source/Free Software Debate

This circumstantial evidence makes it pretty easy to perceive Stallman's generous, virtuoso effort as the technical foundation of the movement. Throngs of Free Software Foundation enthusiasts do, and thus seem to implicitly accept his radically socialist ideology as the One True Philosophy of source code liberation.

But there's another problem: Stallman wasn't the first.

Years before he or Eric Raymond ever hit on the idea of liberating source code, the UNIX operating system was being developed at AT&T Bell Laboratories. As a government-regulated monopoly, AT&T was barred from competition in the computer industry. Though UNIX source code was not then "free" in either the FSF or OSI sense, it could be licensed at nominal cost.

Universities were among the first to take advantage. As a result, UNIX ended up in the hands of hundreds of collaborating academic programmers. In particular, the UNIX effort at the University of California at Berkeley spawned a West Coast hacker culture to rival Stallman's MIT cohort. Ultimately, the student programmers at Berkeley created their own variation of the operating system so potent that it became a major fork in the UNIX lineage -- the Berkeley Software Distribution, or BSD.

It is difficult to overestimate the role of BSD UNIX in modern computing. Not only did it beget many key features of all future versions of UNIX, but it was also under the BSD flag that UNIX met the Internet (though at the time it went by its more ancient name, Arpanet). Much of the most common system software surrounding the TCP/IP protocol was developed at UC Berkeley, and was introduced to the world as part of BSD.

In the years since, BSD has enjoyed not only a substantial commercial run, but has also found its way into a commerce-free distribution of its own, one to rival Linux. Though not as popular or mediagenic as Linux, FreeBSD can nevertheless be widely found on the machines of hobbyists, ISPs, and major corporations alike.

So the shared source collaboration concept had received significant validation long before Raymond or Stallman showed up on the field. That would make AT&T the unlikely mother of the movement, having quietly accomplished the feat with neither Stallman's righteous rhetoric nor Raymond's theoretical grandstanding.


[Dec 11, 2003] ONLamp.com: Myths Open Source Developers Tell Ourselves, by chromatic

One persistent misfeature of open source development is thoughtless mimicry, copying the behaviors of other projects without considering if they work or if there are better options under the current circumstances. At best, these practices are conventional wisdom, things that everybody believes even if nobody really remembers why. At worst, they're lies we tell ourselves.

Perhaps "lies" is too strong a word. "Myths" is better; these ideas may not be true, but we don't intend to deceive ourselves. We may not even be dogmatic about them, either. Ask any experienced open source developer if his users really want to track the latest CVS sources. Chances are, he doesn't really believe that.

In practice, though, what we do is more important than what we say. Here's the problem. Many developers act as if these myths are true. Maybe it's time to reconsider our ideas about open source development. Are they true today? Were they ever true? Can we do better?

Some of these myths also apply to proprietary software development. Both proprietary and open models have much room to improve in reliability, accessibility of the codebase, and maturity of the development process. Other myths are specific to open source development, though most stem from treating code as the primary artifact of development (not binaries), not from any relative immaturity in its participants or practices.

Not every open source developer believes every one of these ideas, either. Many experienced coders already have good discipline and well-reasoned habits. The rest of us should learn from their example, understanding when and why certain practices work and don't work.

Publishing your Code Will Attract Many Skilled and Frequent Contributors

Myth: Publicly releasing open source code will attract flurries of patches and new contributors.

Reality: You'll be lucky to hear from people merely using your code, much less those interested in modifying it.

While user (and developer) feedback is an advantage of open source software, it's not required by most licenses, nor is it guaranteed by any social or technical means. When was the last time you reported a bug? When was the last time you tried to fix a bug? When was the last time you produced a patch? When was the last time you told a developer how her work solved your problem?


Some projects grow large and attract many developers. Many more projects have only a few developers. Most of the code in a given project comes from one or a few developers. That's not bad - most projects don't need to be huge to be successful - but it's worth keeping in mind.

The problem may be the definition of success. If your goal is to become famous, open source development probably isn't for you. If your goal is to become influential, open source development probably isn't for you. Those may both happen, but it's far more important to write and to share good software. Success is also hard to judge by other traditional means. It's difficult to count the number of people using your code, for example.

It's far more important to write and to share good software. Be content to produce a useful program of sufficiently high quality. Be happy to get a couple of patches now and then. Be proud if one or two developers join your project. There's your success.

This isn't a myth because it never happens. It's a myth because it doesn't happen as often as we'd like.

Feature Freezes Help Stability

Myth: Stopping new development for weeks or months to fix bugs is the best way to produce stable, polished software.

Reality: Stopping new development for a while to find and fix unknown bugs is fine. That's only a part of writing good software.

The best way to write good software is not to write bugs in the first place. Several techniques can help, including test-driven development, code reviews, and working in small steps. All three ideas address the concept of managing technical debt: entropy increases, so take care of small problems before they grow large.
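To make the "small steps" idea concrete, here is a minimal sketch of the test-first rhythm in Python, using the standard unittest module (the slugify function and its behavior are invented for illustration, not taken from the article):

    import unittest

    def slugify(title):
        # Written immediately after the first test below failed: just enough
        # code to turn a post title into a URL slug.
        return "-".join(title.lower().split())

    class SlugifyTest(unittest.TestCase):
        # Each test was written before the code it exercises.
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Feature Freezes Help"), "feature-freezes-help")

        def test_already_clean_title_is_unchanged(self):
            self.assertEqual(slugify("release"), "release")

    if __name__ == "__main__":
        unittest.main()

The point is the rhythm rather than the function: each small, tested step leaves the codebase releasable, which is exactly the debt-management discipline described above.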

Think of your project as your home. If you put things back when you're done with them, take out the trash every few days, and generally keep things in order, it's easy to tidy up before having friends over. If you rush around in the hours before your party, you'll end up stuffing things in drawers and closets. That may work in the short term, but eventually you'll need something you stashed away. Good luck.

By avoiding bugs where possible, keeping the project clean and working as well as possible, and fixing things as you go, you'll make it easier for users to test your project. They'll probably find smaller bugs, as the big ones just won't be there. If you're lucky (the kind of luck attracted by clear-thinking and hard work), you'll pick up ideas for avoiding those bugs next time.

Another goal of feature freezes is to solicit feedback from a wider range of users, especially those who use the code in their own projects. This is a good practice. At best, only a portion of the intended users will participate. The only way to get feedback from your entire audience is to release your code so that it reaches as many of them as possible.

Many of the users you most want to test your code before an official release won't. The phrase "stable release" has special magic that "alpha," "beta," and "prerelease" lack. The best way to get user feedback is to release your code in a stable form.

Make it easy to keep your software clean, stable, and releasable. Make it a priority to fix bugs as you find them. Seek feedback during development, but don't lose momentum for weeks on end as you try to convince everyone to switch gears from writing new code to finding and closing old bugs.

This isn't a myth because it's bad advice. It's only a myth because there's better advice.

The Best Way to Learn a Project is to Fix its Bugs and Read its Code

Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code.

Reality: Reading code is difficult. Fixing bugs is difficult and probably something you don't want to do anyway. While giving someone unglamorous work is a good way to test his dedication, it relies on unstructured learning by osmosis.

Learning a new project can be difficult. Given a huge archive of source code, where do you start? Do you just pick a corner and start reading? Do you fire up the debugger and step through? Do you search for strings seen in the user interface?

While there's no substitute for reading the code, using the code as your only guide to the project is like mapping the California coast one pebble at a time. Sure, you'll get a sense of all the details, but how will you tell one pebble from the next? It's possible to understand a project by working your way up from the details, but it's easier to understand how the individual pieces fit together if you've already seen them from ten thousand feet.

Writing any project over a few hundred lines of code means creating a vocabulary. Usually this is expressed through function and variable names. (Think of "interrupts," "pages," and "faults" in a kernel, for example.) Sometimes it takes the form of a larger metaphor. (Python's Twisted framework uses a sandwich metaphor.)

Your project needs an overview. This should describe your goals and offer enough of a roadmap so people know where development is headed. You may not be able to predict volunteer contributions (or even if you'll receive any), but you should have a rough idea of the features you've implemented, the features you want to implement, and the problems you've encountered along the way.

If you're writing automated tests as you go along (and you certainly should be), these tests can help make sense of the code. Customer tests, named appropriately, can provide conceptual links from working code to required features.
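For instance, a few intention-revealing test names can provide exactly that link from required features to working code (a sketch; the wiki feature names are invented):

    import unittest

    class WikiPageTest(unittest.TestCase):
        # The test names state required features in user-level vocabulary,
        # so a newcomer can read the suite as a map of what the code must do.
        def test_saving_a_page_records_the_author(self):
            pass  # body elided in this sketch

        def test_renaming_a_page_updates_links_to_it(self):
            pass  # body elided in this sketch

    if __name__ == "__main__":
        unittest.main()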

Keep your overview and your tests up-to-date, though. Outdated documentation can be better than no documentation, but misleading documentation is, at best, annoying and unpleasant.

This isn't a myth because reading the code and fixing bugs won't help people understand the project. It's a myth because the code is only an artifact of the project.

Packaging Doesn't Matter

Myth: Installation and configuration aren't as important as making the source available.

Reality: If it takes too much work just to get the software working, many people will silently quit.

Potential users become actual users through several steps. They hear about the project. Next, they find and download the software. Then they must brave the installation process. The easier it is to install your software, the sooner people can play with it. Conversely, the more difficult the installation, the more people will give up, often without giving you any feedback.

Granted, you may find people who struggle through the installation, report bugs, and even send in patches, but they're relatively few in number. (I once wrote an installation guide for a piece of open source software and then took a job working on the code several months later. Sometimes it's worth persisting.)

Difficulties often arise in two areas: managing dependencies and creating the initial configuration. For a good example of installation and customization, see Brian Ingerson's Kwiki. The amount of time he put into making installation easier has paid off by saving many users hours of customization. Those savings, in turn, have increased the number of people willing to continue using his code. It's so easy to use, why not set up a Kwiki for every good idea that comes along?

It's OK to expect that mostly programmers will use development tools and libraries. It's also OK to assume that people should skim the details in the README and INSTALL files before trying to build the code. If you can't easily build, test, and install your code on another machine, though, you have no business releasing it to other people.

It's not always possible, nor advisable, to avoid dependencies. Complex web applications likely require a database, a web server with special configurations (mod_perl, mod_php, mod_python, or a Java stack). Meta-distributions can help. Apache Toolbox can take out much of the pain of Apache configuration. Perl bundles can make it easier to install several CPAN modules. OS packages (RPMs, debs, ebuilds, ports, and packages) can help.

It takes time to make these bundles and you might not have the hardware, software, or time to write and test them on all possible combinations. That's understandable; source code is the real compatibility layer on the free Unix platforms anyway.

At a minimum, however, you should make your dependencies clear. Your configuration process should detect as many dependencies as possible without user input. It's OK to require more customization for more advanced features. However, users should be able to build and to install your software without having to dig through a manual or suck down the latest and greatest code from CVS for a dozen other projects.
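As one illustration of unattended dependency detection, a configure step written in Python might probe for required modules and report everything missing at once (the module names here are placeholders for a real project's dependencies):

    import importlib.util
    import sys

    REQUIRED = ["sqlite3", "json"]  # stand-ins for your project's real dependencies

    # Probe without importing, and collect every gap instead of dying on the first.
    missing = [name for name in REQUIRED
               if importlib.util.find_spec(name) is None]

    if missing:
        sys.exit("missing required modules: " + ", ".join(missing))
    print("all required modules found")

Reporting the whole list in one pass spares users the install-fail-install cycle that makes people silently quit.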

This isn't a myth because people really believe software should be difficult to install. It's a myth because many projects don't make it easier to install.

It's Better to Start from Scratch

Myth: Bad or unappealing code or projects should be thrown away completely.

Reality: Solving the same simple problems again and again wastes time that could be applied to solving new, larger problems.

Writing maintainable code is important. Perhaps it's the most important practice of software development. It's secondary, though, to solving a problem. While you should strive for clean, well-tested, and well-designed code that's reasonably easy to modify as you add features, it's even more important that your code actually works.

Throwing away working code is usually a mistake. This applies to functions and libraries as well as entire programs. Sometimes it seems as if most of the effort in writing open source software goes to creating simple text editors, weblogs, and IRC clients that will never attract more than a handful of users.

Many codebases are hard to read. It's hard to justify throwing away the things the code does well, though. Software isn't physical - it's relatively easy to change, even at the design level. It's not a building, where deciding to build four stories instead of two means digging up the entire foundation and starting over. Chances are, you've already solved several problems that you'd need to rediscover, reconsider, re-code, and re-debug if you threw everything away.

Every new line of code you write has potential bugs. You will spend time debugging them. Though discipline (such as test-driven development, continual code review, and working in small steps) mitigates the effects, they don't compare in effectiveness to working on already-debugged, already-tested, and already-reviewed code.

Too much effort is spent rewriting the simple things and not enough effort is spent reusing existing code. That doesn't mean you have to put up with bad (or simply different) ideas in the existing code. Clean them up as you go along. It's usually faster to refine code into something great than to wait for it to spring fully formed and perfect from your mind.

This isn't a myth because rewriting bad code is wrong. It's a myth because it can be much easier to reuse and to refactor code than to replace it wholesale.

Programs Suck; Frameworks Rule!

Myth: It's better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.

Reality: It's really hard to write a good framework unless you're already using it to solve at least one real problem.

Which is better, writing a library for one specific project or writing a library that lots of projects can use?

Software developers have long pursued abstraction and reuse. These twin goals have driven the adoption of structured programming, object orientation, and modern aspects and traits, though not exactly to roaring success. Whether because of proprietary code, patent encumbrances, or not-invented-here stubbornness, there may be more people producing "reusable" code than actually reusing code.

Part of the problem is that it's more glamorous (in the delusive sense of the word) to solve a huge problem. Compare "Wouldn't it be nice if people had a fast, cross-platform engine that could handle any kind of 3D game, from shooter to multiplayer RPG to adventure?" to "Wouldn't it be nice to have a simple but fun open source shooter?"

Big ambitions, while laudable, have at least two drawbacks. First, big goals make for big projects - projects that need more resources than you may have. Can you draw in enough people to spend dozens of man-years on a project, especially as that project only makes it possible to spend more time making the actual game? Can you keep the whole project in your head?

Second, it's exceedingly difficult to know what is useful and good in a framework unless you're actually using it. Is one particular function call awkward? Does it take more setup work than you need? Have you optimized for the wrong ideas?

Curiously, some of the most portable and flexible open source projects today started out deliberately small. The Linux kernel originally ran only on x86 processors. It's now impressively portable, from embedded processors to mainframes and super-computer clusters. The architecture-dependent portions of the code tend to be small. Code reuse in the kernel grew out of refining the design over time.

Solve your real problem first. Generalize after you have working code. Repeat. This kind of reuse is opportunistic.
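A toy Python illustration of that sequence (both functions are invented examples): first solve the single problem in front of you, then extract the general version only once a second concrete use has appeared.

    # Step 1: the code that solved the actual, immediate problem.
    def report_failed_logins(events):
        return [e for e in events if e["type"] == "login" and e["failed"]]

    # Step 2: generalized only after a second real need (failed uploads) showed up.
    def report_failed(events, kind):
        return [e for e in events if e["type"] == kind and e["failed"]]

    events = [{"type": "login", "failed": True}, {"type": "upload", "failed": True}]
    print(report_failed(events, "login"))
    print(report_failed(events, "upload"))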

This isn't a myth because frameworks are bad. This is a myth because it's amazingly difficult to know what every project of a type will need until you have at least one working project of that type.

I'll Do it Right *This* Time

Myth: Even though your previous code was buggy, undocumented, hard to maintain, or slow, your next attempt will be perfect.

Reality: If you weren't disciplined then, why would you be disciplined now?

Widespread Internet connectivity and adoption of Free and Open programming languages and tools make it easy to distribute code. On one hand, this lowers the barriers for people to contribute to open source software. On the other hand, the ease of distribution makes finding errors less crucial. This article has been copyedited, but not to the same degree as a print book; it's very easy to make corrections on the Web.

It's very easy to put out code that works, though it's buggy, undocumented, slow, or hard to maintain. Of course, imperfect code that solves a problem is much better than perfect code that doesn't exist. It's OK (and even commendable) to release code with limitations, as long as you're honest about its limitations - though you should remove the ones that don't make sense.

The problem is putting out bad code knowingly, expecting that you'll fix it later. You probably won't. Don't keep bad code around. Fix it or throw it away.

This may seem to contradict the idea of not rewriting code from scratch. In conjunction, though, both ideas summarize to the rule of "Know what's worth keeping." It's OK to write quick and dirty code to figure out a problem. Just don't distribute it. Clean it up first.

Develop good coding habits. Training yourself to write clean, sensible, and well-tested code takes time. Practice on all code you write. Getting out of the habit is, unfortunately, very easy.

If you find yourself needing to rewrite code before you publish it, take notes on what you improve. If a maintainer rejects a patch over cleanliness issues, ask the project for suggestions to improve your next attempt. (If you're the maintainer, set some guidelines and spend some time coaching people along as an investment. If it doesn't immediately pay off to your project, it may help along other projects.) The opportunity for code review is a prime benefit of participating in open source development. Take advantage of it.

This isn't a myth because it's impossible to improve your coding habits. This is a myth because too few developers actually have concrete, sensible plans to improve.

Warnings Are OK

Myth: Warnings are just warnings. They're not errors and no one really cares about them.

Reality: Warnings can hide real problems, especially if you get used to them.

It's difficult to design a truly generic language, compiler, or library partially because it's impossible to imagine all of its potential uses. The same rule applies to reporting warnings. While you can detect some dangerous or nonsensical conditions, it's possible that users who really know what they are doing should be able to bypass those warnings. In effect, it's sometimes very useful to be able to say, "I realize this is a nasty hack, but I'm willing to put up with the consequences in this one situation."

Other times, what you consider a warnable or exceptional condition may not be worth mentioning in another context. Of course, the developer using the tool could just ignore the warnings, especially if they're nonfatal and are easily shunted off elsewhere (even if it is /dev/null). This is a problem.

When the "low oil pressure" or "low battery" light comes on in a car, the proper response is to make sure that everything is running well. It's possible that the light or a sensor is malfunctioning, but ignoring the real problem - whether bad light or oil leak - may exacerbate further problems. If you assume that the light has malfunctioned but never replace it, how will you know if you're really out of oil?

Similarly, an error log filled with trivial, fixable warnings may hide serious problems. Any well-designed tool generates warnings for a good reason: you're doing something suspicious.

When possible, purge all warnings from your code. If you expect a warning to occur - and if you have a good reason for it - disable it in the narrowest possible scope. If it's generated by something the user does and if the user is privy to the warning, make it clear how to avoid that condition.
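
For instance (a minimal sketch, assuming a C codebase built with gcc; the file names are hypothetical), you can treat warnings as errors across the whole build while narrowing one understood, accepted warning to the single file that needs it instead of silencing it everywhere:

# Build everything with warnings enabled and treated as errors:
$ gcc -Wall -Wextra -Werror -c main.c parser.c report.c

# legacy.c knowingly calls a deprecated API; disable only that
# diagnostic, and only for that one file:
$ gcc -Wall -Wextra -Werror -Wno-deprecated-declarations -c legacy.c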

Running a program that spews crazy font configuration questions and null widget access messages to the console is noisy and incredibly useless to anyone who'd rather run your software than fix your mess. Besides that, it's much easier to dig through error logs that only track real bugs and failures. Anything that makes it easier to find and fix bugs is nice.

This isn't a myth because people really ignore warnings. It's a myth because too few people make the effort to clean them up.

End Users Love Tracking CVS

Myth: Users don't mind upgrading to the latest version from CVS for a bugfix or a long-awaited feature.

Reality: If it's difficult for you to provide important bugfixes for previous releases, your CVS tree probably isn't very stable.

It's tricky to stay abreast of a project's latest development sources. Not only do you have to keep track of the latest check-ins, you may have to guess when things are likely to spend more time working than crashing and build binaries right then. You can waste a lot of time watching compiles fail. That's not much fun for a developer. It's even less exciting for someone who just wants to use the software.

Building software from CVS also likely means bypassing your distribution's usual package manager. That can get tangled very quickly. Try keeping the required libraries up to date for just two applications you compiled on your own, and do it for a while. You'll gain a new appreciation for people who make and test packages.

There are two main solutions to this trouble.

First, keep your main development sources stable and releasable. It should be possible for a dedicated user (or, at least, a package maintainer for a distribution) to check out the current development sources and build a working program with reasonable effort. This is also in your best interests as a developer: the easier the build and the fewer compile, build, and installation errors you allow to persist, the easier it is for existing developers to continue their work and for new developers to start their work.

Second, release your code regularly. Backport fixes if you have to fix really important bugs between releases; that's why tags and branches exist in CVS. This is much easier if you keep your code stable and releasable. Though there's no guarantee users will update every release, working on a couple of features per release tends to be easier anyway.
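
For example (a sketch of the usual CVS idiom; the module and tag names are invented), tag each release, branch from that tag when an important fix needs backporting, and commit the fix on both the branch and the trunk:

# At release time, tag the checked-out tree:
$ cvs tag release-1-2

# Later, create a branch rooted at that release for backported fixes:
$ cvs rtag -b -r release-1-2 release-1-2-fixes myproject

# Check out the branch, apply and test the fix, then commit it:
$ cvs checkout -r release-1-2-fixes myproject
$ cd myproject
$ cvs commit -m "backport: fix crash in config parser"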

This isn't a myth because developers believe that development moves too fast for snapshots. It's a myth because developers aren't putting out smaller, more stable, more frequent releases.

Common Sense Conclusions

Again, these aren't myths because they're never true. There are good reasons to have a feature freeze. There are good reasons to invite new developers to get to know a project by looking through small or simple bug reports. Sometimes, it does make sense to write a framework. They're just not always true.

It's always worth examining why you do what you do. What prevents you from releasing a new stable version every month or two? Can you solve that problem? Solve it. Would building up a good test suite help you cut your bug rates? Build it. Can you refactor a scary piece of code into something saner in a series of small steps? Refactor it.

Making your source code available to the world doesn't make all of the problems of software development go away. You still need discipline, intelligence, and sometimes, creative solutions to weird problems. Fortunately, open source developers have more options. Not only can we work with smart people from all over the world, we have the opportunity to watch other projects solve problems well (and, occasionally, poorly).

Learn from their examples, not just their code.

chromatic is the technical editor of the O'Reilly Network and the co-author of Perl Testing: A Developer's Notebook.

First Monday Papers

I wrote two papers for First Monday in 1999: Open Source Software Development as a Special Type of Academic Research and A second look at The Cathedral and The Bazaar

Comments about this page

[ Nov 27, 2002] Google Search raymondism -- pretty amusing views from some Linux enthusiasts ;-)
From: Angerthas.Daeron ([email protected])
Subject: Max Burke spreads Microsoft fud

Newsgroups: comp.os.linux.advocacy
Date: 2002-11-27 08:28:39 PST

In a previous post Max Burke (mlvburke@%$%#@.nz) wrote the following:
<snip>
".. # Before I get flamed for this, please understand that a holy war,
"Linux uber alles" of sorts,  is a self-defeating strategy. I hope
that  there is a healthy "silent majority" of the open source
community (that why I actually am writing this FAQ) who are just
writing code as best they can, and/or submitting patches bug reports.
But that does not mean that we can just ignore the ranting and raving
of the zealots. But the public tend to define the open source
community in terms of its most outspoken members which in this
particular case means zealots...
http://www.softpanorama.org/OSS/Bla_faq/raymondism.shtml  .."
He accuses us on COLA of promoting a holy war and of being zealots.
He points to a web page to back up his arguments. It's written by one
"Nikolai Bezroukov" of the Kiev University of Commerce and Economics,
Ukraine, which makes him eminently qualified to comment on the issue.

I make this point as Mr Bezroukov himself doesn't think Linus Torvalds
is qualified to give opinions. He purports to be an unbiased
commentator, but from the tone and content I for one suspect his
motives. I had not been aware of its existence, so here is a belated
response.

Firstly, I am not a zealot. I am just a user of the technology. I come
here to discuss Linux with like-minded individuals. The same cannot
be said for Mr Burke and others with the same hidden agenda as
himself. I suspect that they are in fact disguised proponents for the
Microsoft Corporation. I guess you knew that already.

I don't use the Microsoft product and can go for ages without
mentioning it. I have no axe to grind either way. What I do object to
is these WinTrolls coming over here pretending to engage in dialogue
while secretly trying to undermine the Open Source Community in general.

It cannot be a coincidence that the language, deceptions and
misinformation they use are strikingly similar to the product coming
out of Redmond. You might suspect that it has been written by the same
people. It's also strange how most of the FUD mentioned here at one
time or another bears a striking resemblance to that web page. It's
remarkable how they are all so ON MESSAGE.

There has been a change of tack recently. Rather than the direct
assault, they are going for a more subtle approach.
In the web page referred to above, the author, one "Nikolai Bezroukov",
resorted to the personal attack, referring to something called
"Raymondism". This I assume is a gratuitous personal insult aimed at
"Eric S. Raymond", author of "The Cathedral and the Bazaar" amongst
other things. "Raymondism" can also be referred to as "childish
diseases" and "bad advocacy", he informs us.

He defines the affliction as "naive .. blind fold Linux chauvinism
("Linux uber alles")". The quotes are himself quoting himself. Linux
advocates he doesn't agree with are just like Nazis - (it's the German
quotation - get it!).

What he does like is "a credible OSS advocacy" - what this is he
doesn't say. O.S.S ".. can play a positive role in developing
countries .." he says. Get that, people? Some colonials might have a
use for it. Don't even think of going head to head against W2K. He
attacks ESR for ".. primitive anti Microsoft rhetoric ..".

http://www.zdnet.com/sr/stories/issue/0,4537,384326,00.html
".. Since Jan. 1998 Eric Raymond successfully promoted "open source"
as a distinct and slightly anti-Stallman movement. See for example his
interview with Smart Reseller Straight From The Source where Eric was
called a Godfather of Linux ;-) .."

There was no reference to a "Godfather of Linux" in the quoted
article. Is this a case of someone making up their own quotes again?
I've tried to find the quote on Google, but to no avail.

He also quotes ESR and attacks him because some of the same arguments
could be applied to Microsoft. Here he betrays his true position, not
as an "objective" reviewer of OSS but as an apologist for the
Microsoft corporation. This me-too-ism runs through the article(s).
I'm curious as to his motives for this hatchet job on OSS. He even
quotes Pope Boniface VII at one point to bolster his arguments. OSS,
you see - it's the devil's work!

".. At the same time the movement is still in its early stages (and
not last days, as some predict) .." Who ever said OSS was on its last
days ? Nobody. You print a falsehood only to retract it in the same
sentence. This bares similarities to an "Ericism". Are they by any
chance related ?

".. What ESR and Co failed to realize is that people who are
developing and using Solaris, Novell and Microsoft products are also
professionals and many of them are of a caliber far superior to the
author of low to middle-range open source products like EMACS editor
macros, a mail utility, and like ;-). For any intelligent professional
an open demonstration of arrogance naturally creates a strong negative
reaction, a backlash that is damaging to the movement credibility and
future .."

Why is such a superior company desperately trying to gather kudos by
comparing themselves to a bunch of sandal-wearing OSS advocates? The
words Inferiority Complex come to mind. See how he has to rope in
Solaris and Novell to bolster his argument. I suspect neither of them
would be quick to defend the beast. Again he tacks a negative
signifier onto the end of the sentence, possibly in the hope that no
one would notice.

This again reminds me of a typical FUD posting here. See how he gets
'author', 'low' and 'open source products' together in the same
sentence. For such an expert on OSS the only apps he can think of are
"EMACS", "editor macros" and "a mail utility".

A Microsoft defender accuses the OSS movement of "arrogance". Has he
ever heard the expression Pot Kettle Black?

The only war is the one being prosecuted from One Microsoft Way. OSS
people have neither the time nor the inclination to mount "WARS". He
uses the word on more than one occasion. Bill Gates may be at war with
the rest of the world, but that's his own paranoia.

".. The same problems exist with primitive anti Microsoft rhetoric .."
er the TRUTH ! This arrogant bastard then goes on to abuse Linus
Torvalds ".. technical judgments are very suspect .. things about
which he actually has very little real knowledge due to the specifics
of his career .."

Could Mr Bezroukov please enlighten us as to his own qualifications,
as he is not impressed with Richard Stallman, Eric Raymond or Linus
Torvalds. There is an old football expression around here - if you
can't go for the ball then go for the man. We can be sure which
philosophy fires up Bezroukov - go for the man.

"..  We should suspect any OSS advocacy that includes the following
features .."

Is that the royal we, or do you have an invisible friend sitting on
your shoulder as you type?

".. open source software .. is called economism or Vulgar Marxism .."
Get that folks OSS is communist. YOUR ALL A BUNCH OF NO GOOD COMMIES !

Sorry, I lost it there for a minute - to continue. We can take it that
Nikolai has embraced the one true Church of the Almighty Dollar and
as such is displaying all the zeal of the convert. The Good Lord loves
a believer.

".. See Is "Vulgar Marxism" a legitimate scientific term .." - Answer
NO it's just more of the same abuse from a vulgar troll!

"..  concealment of the facts about the true economic origin of ..
(OSS) .. products .. 'taxpayer-funded'  (university-funded) .."

Do Microsoft see the universities as a threat now? 'My god, there are
people actually thinking there - without a licence.'

Didn't his BillNess use an unauthorized terminal to bash out code in
his early years - all paid for by his college?

".. Linus Torvalds was financed .. remunerated him quite nicely ..
most highly paid developers in the Unix word .."

What is the point - are we supposed to feel jealous? Yes, he makes
money. He probably lives in a house, eats food and sleeps in his own
bed. Bezroukov cannot support his position on its merits, so he
trashes the personal reputation of OSS advocates instead. Do these
people have no integrity?

".. disrespect of other developers .. especially Microsoft .." -
Finally we come to the kernal of the matter.

We've hurt their feelings. Sitting hunched out there in Redmond
hacking out "Dog Food" all day is bad enough, but getting maligned by
your peers - that really hurts.

" Instigation of hatred of the members of the commercial community is
unproductive and unethical .."

OH The irony ! Blackmailing and intimidating your own commercial
partners is also unethical. Getting lectures on ethics from you people
is ludicrous  as well as insulting !
Linux Kernel Mailing List, Archive by Week: Closed-door development

http://www.softpanorama.org/OSS/index.shtml

CatB in a new light. This fall Raymond has been touring Europe promoting his book. He most kindly made his way to Trondheim, where he gave an unforgettable series of speeches. In one of these speeches he poses the question of why nobody had articulated the bazaar mode of development before. He says there are a handful of intelligent, articulate hackers who had already observed the phenomenon, but none had spelled it out. Why is that? Raymond asks. His suggestion is that hackers like to think that their success in developing software is due to their own brilliance. All hackers like to think so, and that is why nobody had tried to look into the matter more closely before Raymond did.

Let's permute Raymond's question a bit, and ask: why is it that the hacker community is not questioning the apparent flaws in 'The Cathedral and the Bazaar'? Paul Feyerabend writes:

There comes then a moment when the theory is no longer an esoteric discussion topic for advanced seminars and conferences, but enters the public domain. There are introductory texts, popularizations; examination questions start dealing with problems solved in its terms. Scientists from distant fields and philosophers, trying to show off, drop a hint here and there, and this often quite uninformed desire to be on the right side is taken as a further sign of the importance of the theory.

Unfortunately, this increase in importance is not accompanied by better understanding; the very opposite is the case. Problematic aspects which were originally introduced with the help of carefully constructed arguments now become basic principles; doubtful points turn into slogans; debates with opponents become standardised and also quite unrealistic, for the opponents, having to express themselves in terms which presupposes what they contest, seem to raise quibbles, or to misuse words. Alternatives are still employed but they no longer contain realistic counter-proposals; they only serve as a background for the splendour of the new theory. Thus we do have success; but it is the success of a manoeuvre carried out in a void, overcoming difficulties that were set up in advance for easy solution. (1993, p. 30)

Answering almost with Raymond's own words: can it be that we hackers like to think that the success of our software is due to a brilliant new way of development that we have come up with ourselves? Is it truly so? CatB is not a software engineering essay. It is an anthropological study. However, it contains material about the bazaar, the hackers' way of doing software engineering. Central traits of the bazaar are the open process, the freedom of each individual developer to do with the code what he wants, and a high degree of cooperation. Raymond mentions Linux as an archetypal bazaar, yet in a letter to the author David Miller, a central Linux developer, writes:

Date: Tue, 5 Oct 1999 18:40:06 +0200 (CEST)
From: David S. Miller

> is it so that you core Linux kernel developers are doing much of the discussing and
> planning outside of the Linux kernel mailinglist?

It is true to a large extent, and in my opinion it's the gem that keeps us at such high productivity rates. It's a surefire method by which us core developers can obtain the best signal to noise ratio. Discussions happen more efficiently and productively when you know you're talking to someone with a clue and you don't get barraged with responses from folks who are perhaps not so clueful and not so weathered on the topic as the core developers.

That is, there is a mismatch between the map and the terrain, the map here being Raymond's bazaar and the terrain being how things work in the real world. While there is an open forum and there are areas for community building, the development itself is being done in a closed fashion. The results, i.e. the source code, are up for public review, so the product itself is still open. The process, however, is a closed one, and it is the process more than the product that Raymond emphasizes as the bazaar model.


Comments on the second paper (SLcatB)

Text of the paper: Feedback:

"Open source is a very interesting and influential phenomenon. It is especially intriguing to me because I believe that it can play a positive role in developing countries. In order to ensure its long-term sustainability we need to see it "as is" and clearly identify possible pitfalls as well as open source's strong and weak points. Fundamentally, we need a reliable map of the open source environment."

"The publication of Eric Raymond's (ESR) new book The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary...makes a fresh and critical review of his most influential paper even more necessary. Besides The Cathedral and the Bazaar (CatB), several papers by ESR are included in this book. None of the papers are so well written, influential and important as CatB. It is no wonder that The Cathedral and the Bazaar is sometimes considered as a Manifesto of the Open Source Movement. This paper will try to analyze just CatB."

"In my earlier paper I argued that the bazaar metaphor is internally contradictive. In this paper we would like to concentrate on the entire CatB paper and try to dissect the main ideas of CatB."

seph - Subject: Re: Actually, Open Source development validates Brooks.... (Dec 11, 1999, 23:47:57)

>Adding a *small* number of people late in the project actually *does* help, but unfortunately the
> payoff is a lot less than you expect. Getting linearly better is not enough. Large projects are targeting
> things that are exponentially harder than they used to be.

This is a lot of jabber. Adding people to a project late means the existing developers have to bring them up to speed. You obviously have no clue about projects.

>Brooks would not be the least surprised with open source development.

Of course he would. If he'd thought of it, he would have written about it. You're too far out on the skinny branches here. How can you speculate about what he would or would not have been surprised by? This is only your opinion. Whatever he said is all he said, not what you attempt to add to it. Is it me, or do you seem to want to bring other people into a discussion so you can be right about it?

Open source is what it is. Whether or not it will last depends on market forces, which at best are unpredictable. Go back 18 months and tell me you could have envisioned this world, when most of the press said "they don't have anyone to call if you have a problem".

Slashdot

Re:Man-month Postulate and Cathedral and Bazaar (Score:3, Insightful)
by amyzing ([email protected]) on Sunday January 09, @02:09AM EST (#40)

Hmmm. Interesting and well-considered comment, but some of the quotes, at least, are more than a bit skewed (well, quoting the Halloween document's criticism of open-source initiative is almost funny).

Specifically, in response to the quote that attempts to explain Brooks in simpler terms, there is an implicit flaw: while these projects might take one programmer 12 months and would not be completed in one month if twelve programmers were assigned, it might be the case that two could finish in six months, or that three could finish in four. One programmer might not be the ideal number for the project. In fact, more programmers might even finish more quickly, depending upon how the project fits into other releases with which it must be coordinated.

The open source paradigm adds programmers where parallelism is possible. To break things down, it's design, code, test, fix ... design isn't easily divisible (at least the large-scale ought to be done by one person; the module designs might be done by as many people as there are programmers, but there's some loss there for coordination, and adding more programmers than there are modules immediately invokes Brooks' Law, which is really about the fact that once all the slots are filled, extra manpower is overhead, not advantage). Depending on the design, modules might be coded by more than one person, and *certainly* testing and fixing can be parallelized efficiently.

The ability of open source to test massively is both one of its greatest time savers (more on the order of: open source code is typically higher quality, because it's been tested more thoroughly, including broader code and design reviews) and one of the things that leverages initiative, in direct contradiction of Halloween I. Win2K isn't likely to have IPv6; Linux does. There are a multitude of other examples; for any given computer-use problem, there is probably a standards-based, well-tested open source solution that is going to be more effective than a proprietary solution. IRC is going to spearhead the whole concept of live chat years before vendors implement their own solutions (which is not to claim that IRC is that much better; IRC 2, though, is likely to be--IRC is the one thrown away, but the proprietary folks haven't managed to learn from its mistakes).

Where open source tends to fall down is not in lack of innovation, but in a failure to achieve the same level of limited function and high glitz as proprietary solutions. In cooking terms, open source is nutritionally balanced, tasty, digestible, and healthy, but poorly presented; proprietary is fast food, with extreme good looks and little value as nutrition (and probably a somewhat chemical aftertaste as well).

Mind, Brooks is my *hero*, and I have an autographed copy; I think MMM is brilliant. But he doesn't argue that only one programmer should ever be assigned to a project, and I believe that there is less contradiction between the open source model and Brooks' Law than ESR argues in CatB. Where open source shines is in the testing and revision cycle (and in the ownership of the code by programmers, not by managers ... if someone in charge of a module doesn't get anything useful from an extra programmer, she can ignore him; in a managed software development cycle, everyone has to justify their paychecks, possibly by reducing someone else's productivity noisily).
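
To put the arithmetic behind Brooks' Law in its usual form: coordination cost grows with the number of pairwise communication channels, while productive capacity grows only linearly with headcount:

\[
\text{channels}(n) = \binom{n}{2} = \frac{n(n-1)}{2},
\qquad
\text{channels}(2) = 1,\quad \text{channels}(3) = 3,\quad \text{channels}(12) = 66.
\]

Twelve programmers bring twelve units of labor but sixty-six channels to keep in sync, which is why extra manpower pays off only where the design leaves genuinely independent slots.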


Comments on the first paper (OSSDAR)

Paper: Attention: here is the Webliography to the paper. Due to its volume it was not included in the text.
Webliography to the paper. Educational Links to the paper Linux Today Story and discussion Slashdot Discussion of the paper ESR response The letter by Paolo Pumilia to FM and my response
ESR comes under fire Ars Technica Slashdot discussion of ESR's response THE REGISTER Linux guru Raymond accused of 'vulgar Marxism' Open code and Marxist Principles Linux Today discussion of this page Netsurfer Digest 05.32
A Critique of the Bazaar, with a Postscript by Godwin
Letter 2 to FM Dfulton Links Nettime: Eric S. Raymond, the theory and practice of going ballistic Other references

Educational links to the paper

It's interesting to note that most responses are limited to just one day: October 8, 1999. News lasts just one day in the atmosphere of information overload ;-)

...Your article is powerful. Of course it will help many developers to build up a more objective view of the issue. It is a perfect critique and everything is 90% true. BUT it looks at Open Source only in a negative way.

I think that if this article is seen by a developer with weak independence, he will never participate in OSS projects.

The author of this letter began to participate himself in one such project not long ago, and this article was like a bomb exploded in the core of his heart. He will continue to go on, but he was shaken in his confidence. This may be compared with telling a small child that Santa Claus doesn't exist.

Breaking the Internet Dream and annihilating the idea of Open Systems and Open Software IS AN INTELLECTUAL SIN.

[????] open source by dfulton

...Cost is an important factor to the educational world in many aspects. Cost per hour of instruction, cost of infrastructure, cost of educational products, etc.; they are all there. However, in the sense of open source, schools may indeed have been to the cathedral. So much so, that they cannot separate themselves from the "church". The reason is simple. Those who run our schools, i.e., school boards, administrators, parents, and others, are afraid of, or inept at, the use of technology. Therefore, they pay someone else to do their job and substantiate their responsibilities. The teachers in the trenches do not have that prerogative. We have to become technologically expert.

Before Microsoft (MS) established itself as the operating system of choice, software and the code behind it came in this manner, hand to hand, computer to computer. It was IBM's intent to retain open source distribution, but people began buying computers and marketers saw great opportunity. Thanks to you, Mr. Gates. The hallmark of the MS success has been that their operating systems are almost everything to everybody. However, that is the main reason not to have the MS operating systems. They are authored to fit everyone's purpose without sufficient specificity. Therefore, open source software could, in theory, overcome that issue. It will be modified to fit specific purposes. The MS operating systems cannot because their source codes are proprietary, therefore, unattainable...

The implications of open source software for the future provide interest, to say the least. No longer will school administrations be subject to unscrupulous vendors who take advantage of the situation and rob our students of all of the value of their education. Yes, administrators will have to dedicate people to provide open source materials instead of relying on business to keep our students' best interests in mind. Of course, this will require a radical change in our systems and the way they run. As we all know, education is one of the last institutions to effect change, due to bureaucratic attitudes.

I had just begun to understand what Open Source was, and had begun to think of the educational possibilities when I read Bezroukov's critique of "Open Source Software Development as a Special Type of Academic Research", recommended by Chip on 10/8/99. I also realize that Open Source cannot be modified to fit everyone's needs because not everyone is capable of doing the modifications. Now I am becoming aware of the multitude of problems that Open Source may "open up."

Seminar Digitale Gesellschaft -- Open Source (German)

L'Etat d'Internet 1999: Logiciel libre, "open source" et Linux (French)

Re:Irking (Score:0)
by Anonymous Coward on Monday January 31, @11:44AM EST (#43)
It is most likely that CMU has been pretty careful about patents and stuff. I would think/hope that universities learned their lesson when AT&T tried fucking with Berkeley about BSD. It would be pretty stupid of CMU to release something without letting their lawyers give the go-ahead.

As for /. encouraging people to download the stuff, one can argue in true Raymondism that they are a mother station to a gift culture that tries to protect people from patent bullies and encourages intellectual advance.

Other References:

Early Critique of "The Cathedral and the Bazaar"

Sorry, but due to size this part of the page was moved to a separate file.

Late critique of "The Cathedral and the Bazaar"
(after publication of the papers)

FreeBSD Mail Archives "As to fetchmail: it is an abomination before God. If someone in the press ever paid for an audit of the source code, the result would refute the paper "The Cathedral and the Bazaar" to such an extent that it could damage the Open Source movement, which has pinned so much on the paper, in ill-considered haste."

Date:      Sat, 17 Feb 2001 23:49:58 +0000 (GMT)
From:      Terry Lambert <[email protected]>
To:        [email protected] (Peter Pentchev)
Cc:        [email protected]
Subject:   UUCP must stay; fetchmail sucks (was list 'o things)
Message-ID:  <[email protected]>
In-Reply-To: <[email protected]> from "Peter Pentchev" at Feb 17, 2001 05:30:19 PM



> Just a minor comment-with-a-question.  What is UUCP used for - mainly mail?
> If so, then here's a datapoint - about two years ago I took part in
> converting an existing UUCP mail transfer config to one using fetchmail.
> Quite simple - invoke fetchmail -d from the PPP link-up script, kill it
> in the link-down script in such a way that it sends a QUIT to avoid
> message duplicates.  There were a couple of other issues too, but in
> the end, it all started working, and it's been working flawlessly for
> the past two years.

UUCP belongs in the base system; you can skip the rest of this, if
you are not interested in the gory details of UUCP vs. fetchmail.



UUCP is the UNIX-UNIX Copy Program.  It is used for copying files
around.  I formerly used it to copy TCP/IP and other packages to
SVR4 boxes, since it was faster to do it over a null-modem cable
than to use floppies.

Primarily, it is used for email and usenet in areas of poor
connectivity.  The UUCP 'g' protocol is much more forgiving
of noise than PPP or SL/IP over the same noisy connection.

-- A tangential diatribe on the unsuitability of fetchmail -------

As to fetchmail: it is an abomination before God.  If someone in
the press ever paid for an audit of the source code, the result
would refute the paper "The Cathedral and the Bazaar" to such an
extent that it could damage the Open Source movement, which has
pinned so much on the paper, in ill-considered haste.

ESR has constantly maintained that fetchmail is "not an MTA", and
he is right: it could be, but it's not.

When mail is delivered to a POP3 maildrop, envelope information
is destroyed.  To combat this, you would need to tunnel the
envelope information in headers.  Generally, sendmail does not
support "X-Envelope-To:" because it exposes "Bcc:" recipients,
since fetchmail-like programs generally _stupidly_ do not strip
such headers before local re-injection of the email.

Without this information, it can not recover the intended
recipient of the email.  The fetchmail program delivers this
mail to "root".

The program has another bug, even if you elect single message
delivery (in order to ensure a "for <user@domain>" in the
"Received:" timestamp line.  The bug is that it assumes the
machine from which the download is occurring is a valid MX for
your domain.  Many ISPs use one machine to do the virtual domain
expansion, and another to do final delivery into the ISP host's POP3
maildrops.  The net effect of this is that it attempts to use
the "for <domain-account@isphost>" stamp, since it does not
reverse-priority order "Received:" timestamp lines.

Similarly, fetchmail fails to order headers in "confidence"
order.  This means that, given an email with a "valid" (MX
matches in the "by <MX>" and a "for <user@domain>" exists)
"Received:" timestamp line, a "To:", "Cc:", or "Bcc:" line, or
an "X-Envelope-To:" line (which must be configured, and which
is terrifically screwed up by qmail, requiring un-munging),
fetchmail -- takes the first one it sees, not the most correct
one.

Using the "To:", "Cc:", or "Bcc:" lines ("data") to do the
delivery buys a spammer the ability to relay mail, though the
route it must take is rather circuitous.  It also means that
if the "Bcc:" was properly stripped before handing the RFC 822
message to an MTA, or if you are a list recipient, that data
is useless for recovering envelope information.  This means
that root gets all mailing list mail from lists which do not
do recipient rewriting (like the SVBUG list does), and root
also gets all mail addressed to non-existent local users that
was intended for particular local users (all SPAM and all
mail that was requested but not sent specifically targeted to
a particular user, via email header data).

Unfortunately, ESR would not accept patches for the mistaken
MX problem, nor for the preference order problem, nor for the
tunneled envelope information stripping problem.  He seemed to
be too busy with speaking engagements, and has since declared
fetchmail to be in "maintenance mode", in order to demonstrate
a recognizable commercial software lifecycle for an Open Source
project, to give business the warm fuzzies.

-- End diatribe ------------------------------------------------

UUCP, comparatively, avoids this whole mess, by not destroying
the envelope information, which normally exists only on a
mail transport (in SMTP, this is the "MAIL FROM:<addrspec>" and
"RCPT TO:<addrspec>"; in UUCP, it's the control file contents).


					Terry Lambert
					[email protected]
---
Any opinions in this posting are my own and not those of my present
or previous employers.



The Emperor Has No Clothes

"Today I am one of the senior technical cadre that makes the Internet work, and a core Linux and open-source developer."

---Eric S. Raymond
(http://www.prospect.org/controversy/open_source/raymond-e-1.html)

Shut Up And Show Them The Code

Several years ago ESR's advice to RMS, no less, was to "shut up and show them the code". Let us apply the method of the master to the master himself, by examining the code that backs up the grand declaration that heads this page.

Core Linux Developer?

Hmm, easy to check. Let's see, shall we?

$ sed -n '/Eric S. Raymond/,/^$/p' /usr/src/linux/CREDITS
N: Eric S. Raymond
E: [email protected]
W: http://www.tuxedo.org/~esr/
D: terminfo master file maintainer
D: Editor: Installation HOWTO, Distributions HOWTO, XFree86 HOWTO
D: Author: fetchmail, Emacs VC mode, Emacs GUD mode
S: 6 Karen Drive
S: Malvern, Pennsylvania 19355
S: USA

So: terminfo database, maintainer of; three HOWTOs, writer of; fetchmail, coder of; a bunch of Emacs macros, coder of. Core Linux developer, did he say? Core Linux developer, even?

Fetchmail

"one of the senior technical cadre that makes the Internet work". That had me puzzled. Then the penny dropped: translated out of ESR-speak, this means "I wrote fetchmail"! In reality he didn't, he took over an existing program called "popclient", and added some bells and whistle, but let's leave that aside. The work doesn't really match the self-description, does it?

... ... ...

"Hacking Social Systems"; or, how to lose friends and alienate people

ESR's announcement of his CML2 project on linux-kernel sparked the first of several flame wars on the subject. These were notable for our hero's complete lack of ability to work with other people. Matters came to a head when ESR adopted some rather "cathedral-like" tactics. Eventually, kernel hackers simply gave up trying to reason with him.

"Note that kbuild 2.5 and CML2 are independent, each can function without the other, complaints about CML2 have nothing to do with kbuild 2.5."

---Keith Owens, kbuild maintainer
(http://www.kerneltrap.org/node.php?id=127)

So CML2 would seem to be finally dead and buried.

Always scribble, scribble, scribble, eh, Mr Raymond?

The Jargon File

The original jargon file was maintained on MIT-AI for many years before being published by Guy Steele and others as the Hacker's Dictionary. Many years after the original book went out of print, ESR picked it up, updated it and republished it as the New Hacker's Dictionary.

Picked up, updated... and destroyed, in one hacker's judgement. Another goes so far as to say that "the 'author' stole the Jargon File fair and square".

Although the "author" is a noted advocate of "Open Source" (that's Free Software to you and me), the production of successive versions of the jargon file is not open. That's bogus.

Cathedrals, Cauldrons, and... Charlatans

ESR, notwithstanding his limited experience (see above), has written copiously on the right way and the wrong way to do software development. But his three long essays are summed up nicely in one phrase: "Vulgar Raymondism".

An entry you'll never see in the "jargon file":

Raymondism: The deluded belief that free software defies Brooks' law, has fewer security exploits than non-free software, and that just because thousands of people have access to the source code, those same thousands of people will actually examine it.

And Then He Finally Lost It

"Now, you have an unprecedented opportunity to witness one man's descent into insanity online. Apparently having begun his "journey" by dressing up as James Bond and pretending his CD is a gun, computer nerd Eric S. Raymond has been on a slide into insanity ever since.

"His descent into insanity is exemplified by a series of posts, so self-evident in their detachment from reality, that they really require no commentary. Over at his site, Raymond has been going through the motions of putting together an Idiotarian Manifesto or some such. He's been trying to get the words right, trying to work out whether the terrorists, who he defines rather broadly, are "feral beasts" or "rabid dogs". This manifesto is the latest in a long line of ridiculous offerings from Raymond, beginning with his series of factually-challenged screeds ranting and raving about the evils of Islam and the hitherto unknown spectre of "Islamofacism"."

Read more at Warblogger Watch.

ESR Watch

Sun Jun 8 2003

From NTK comes this blast:

Good to see the increasingly eccentric ERIC S RAYMOND keeping himself occupied these days. His latest tweaks: a version bump or two to the JARGON FILE, the ancient hacker bible of which he is current custodian. But how steady is his hand on the sacred tome? Worrying is esr's recent inclusion of unfamiliar terms like "Aunt Tillie" and "GandhiCon", which on closer search-engine examination, appear to have been used almost exclusively by Raymond himself. And esr's current expansions of hacker dialect is curious too. New terms include "fisking" - a term pretty much restricted to the warblogosphere, and defined by your impartial host as "Named after a Robert Fisk, a British journalist who was a frequent (and deserving) early target of such treatment". Also included is "anti-idiotarianism", as in Eric's Anti-Idiotarian Manifesto, a fascinating call to arms that implies "Anti-Idiotarian" means "To be against listening to anyone who would tell you you're sounding like an idiot these days". Finally (and not included in the changelogs), Eric has tweaked the Hacker Politics page, from its previous description as "vaguely liberal-moderate" to "moderate-to-neoconservative (hackers too were affected by the collapse of socialism)". Go tell that to the Kuro5hinners, Eric. Recalling Raymond's familiar defence of previous changes, "rather than complaining that I am 'rewriting history', help me write it!", let it be noted that if someone did want to fork the Jargon File, now would be the time to do it. Raymond's previous googlejuice at tuxedo.org has been cast to the winds. A new, reformatted and popularly linked-to upstart could quickly seize the top Google slot. Ha, ha, as we apparently all say, only serious.

Mon Jun 23 2003

The Jargon File Lexicon.

Even though Eric Raymond makes the hypertext freely available, he does not make the tools and masters that generate the hypertext freely available. It's bogus, and there's no apology. That's not very open-source.

Open source is designed to advance the intellectual property of the corporation at the expense of effort by individuals outside the corporation. As such, it falls under corporatism, as defined in John Ralston Saul's dictionary The Doubter's Companion.

Could a community-maintained Wiki replace the jargon file?



