Softpanorama
May the source be with you, but remember the KISS principle ;-)


Fighting Software Overcomplexity and Relevance of the KISS Principle in SE

Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage to move in the opposite direction.

Albert Einstein

"featuritis" -- the tendency to add feature after feature with each new software release -- probably has more to do with code bloat than any other single factor.

Bloat is caused by feature creep at every layer, not just the application layer or the feature set. Complex software systems can certainly be built, but they can never be fully comprehended.

Only by breaking a project down into manageable parts can we hope to interact with it effectively. Furthermore, the inertia against change is much less severe when changes are small and simple. This idea has been expressed in different ways in such popular maxims as Ockham's razor, Einstein's statement about the simplicity of theories, or simply the KISS ("Keep It Simple, Stupid") mantra. But whichever quote about the value of simplicity we prefer, reducing complexity is of paramount importance in software. Here is how "software bloat" is defined in the famous Jargon File:

software bloat

   <jargon, abuse> The result of adding new features to a program or system to the point where the benefit of the new features is outweighed by the extra resources consumed (RAM, disk space or performance) and complexity of use. Software bloat is an instance of Parkinson's Law: resource requirements expand to consume the resources available. Causes of software bloat include second-system effect and creeping featuritis. Commonly cited examples include Unix's "ls(1)" command, the X Window System, BSD, Missed'em-five, OS/2 and any Microsoft product.

creeping featurism, with its own spoonerization: `feeping creaturitis'. Some people like to reserve this form for the disease as it actually manifests in software or hardware, as opposed to the lurking general tendency in designers' minds. (After all, -ism means `condition' or `pursuit of', whereas -itis usually means `inflammation of'.)

Jargon File

Fighting featuritis is known to be a tremendously difficult task that requires a lot of innovation and self-discipline. One fruitful approach is raising the level of your implementation language: using a scripting language along with a lower-level language instead of just one.

Also, adding features until you convert your product into a Christmas tree is an easy and natural strategy, almost irresistible... Actually, that's why I value scripting languages so highly: they provide a novel way to reduce (or at least hide) complexity, permitting the developer to operate at a higher level of abstraction. And they permit creating more complex and powerful systems with shorter and thus more maintainable code.
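To illustrate the point (a hypothetical sketch of my own, not an example from any particular product): a word-frequency report, which would take pages of C, reduces to a one-line pipeline of small cooperating Unix tools.

```shell
# Count word frequencies in a stream of text by gluing together
# four small single-purpose tools with pipes.
printf 'the cat saw the dog\nthe dog ran\n' |
  tr -s ' ' '\n' |   # split input: one word per line
  sort |             # group identical words together
  uniq -c |          # count each group
  sort -rn           # most frequent words first
```

Each stage does one thing; the complexity lives in the composition, not in any single component, which is exactly the Unix component model this site advocates.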

Some very good analogies are used to explain the principle, my favorite being the broken-window tale. The basic story is simple: abandoned buildings (or automobiles on the street) remain untouched until a window is broken. Left unrepaired, this sends a message that the object is fair game, so within a very short time vandals destroy the rest. The same thing happens in software development. Once a subpar feature is passed as acceptable, the signal to everyone is clear, and the quality of the remaining work suffers.

This is especially true for general-purpose libraries. Once they become "too general-purpose," they become useless. A good example is glib 2.x vs. glib 1.x: glib-1.2.10 is half the size of uclibc, while glib-2.2.2 is twice the size of uclibc. That is four-fold growth with very little premium in useful functionality. Here is an interesting post on this topic from Tim Hockin:

On Sun, Oct 31, 2004 at 01:11:07AM +0300, Denis Vlasenko wrote:
> I am not a code genius, but want to help.
>
> Hmm probably some bloat-detection tools would be helpful,
> like "show me source_lines/object_size ratios of functions in
> this ELF object file". Those with low ratio are suspects of
> excessive inlining etc.

The problem with apps of this sort is the multiple layers of abstraction.

Xlib, GLib, GTK, GNOME, Pango, XML, etc.

No one wants to duplicate effort (rightly so).  Each of these libs tries to do EVERY POSSIBLE thing.  They all end up bloated.  Then you have to link them all in.  You end up bloated.  Then it is very easy to rely on those libs for EVERYTHING, rather than actually thinking.

So you end up with the mindset of, for example, "if it's text it's XML". You have to parse everything as XML, when simple parsers would be tons faster and simpler and smaller.

Bloat is caused by feature creep at every layer, not just the app.

Youck.

Over the history of software development (or at least since the advent of IBM mainframes), bigger software has often been equated with better software. Commercial companies like Adobe (Acrobat 6 is really horrible bloatware, probably the champion of the field), IBM (WebSphere, Tivoli, you name it), Microsoft (Windows XP, although it can be trimmed to bare bones) and Oracle have a stake in producing big, complex software. At the same time, products like Excel, while big, are surprisingly flexible and robust. I would not call Excel 2003 bloatware, but it is definitely several times bigger than, say, Excel 97, which has approximately 80% of Excel 2003's functionality.

Linux suffers from the same disease (especially in the Red Hat and SUSE distributions), and in its level of bloat Linux is pretty close to Windows.

Regarding the eternal vicious circle of bloated software -> faster processors -> even more bloated software -> even faster processors: do any of you honestly suspect there might be some kind of liaison between the parties concerned in order to perpetuate these largely unnecessary upgrades? :-)

I have two Windows PCs: PC A (Celeron 1.3, 512M of RAM) runs Win2003 / Office 2003 and is ~3 years old. PC B (Celeron 1.2, 512M of RAM) runs Windows 98 and Office 97 (dual-booted with Red Hat 8). I don't have any problems moving data between the two systems at all. Aside from Excel, I do not see relevant increases in the functionality of the MS Office applications.

There may be a conspiracy between the hardware manufacturers and software designers to design software that uses up these CPU cycles and eats RAM like there's no tomorrow...

I also have several Linux PCs: an old Compaq laptop (133MHz, 96M of RAM) running Caldera 1.2 and a newer Celeron 1.2/512M of RAM desktop running Red Hat 8. Sometimes I honestly think that the laptop is faster ;-). I know GNOME is a horrible desktop manager, but still, to achieve this effect you need to be a very talented bloatware designer.

Tradeoffs Connected with the Simplicity

A delicate balance is necessary between sticking with the things you know and can rely upon, and exploring things which have the potential to be better.  Assuming that either of these strategies is the one true way is silly. 

-- Graydon Hoare

There is no free lunch. Many pressures tend to make programs more complicated (and therefore more expensive and buggy). One is technical machismo. Programmers are bright people who are (justly) proud of their ability to handle complexity and juggle abstractions. Often they compete with their peers to see who can build the most intricate and beautiful complexities. Just as often, their ability to design outstrips their ability to implement and debug, and the result is an expensive failure.

Often (at least in the commercial software world) excessive complexity comes from project requirements that are based on the marketing fad of the month rather than the reality of what customers want or what software can actually deliver. Also, complexity in commercial products is a time-tested defense against competitors. Many a good design has been smothered under marketing's pile of "check-list features" that few customers benefit from. But here you can trap the competition pretty nicely: usually competitors feel that they have to compete with chrome by adding more chrome. They forget that chrome tends to benefit the first comer, and that alone tends to protect you from all but the most talented competitors, who can transcend this "more chrome" strategy and concentrate on better functionality and compatibility. For everybody else, massive bloat naturally diminishes compatibility and leads to incompatibilities that segment the field; your former competitor suddenly moves into a different niche or simply dies because he does not have the same resources as you. Look at Quattro Pro and WordPerfect as two interesting examples.

The only way to avoid these traps is to encourage a software culture that actively resists bloat and complexity: an engineering tradition that puts a high value on simple solutions, looks for ways to break program systems up into small cooperating pieces, and reflexively fights attempts to gussy up programs with a lot of chrome (or, even worse, to design programs around the chrome). This tradition is associated with Unix, and we need a conscious effort to preserve it despite the many Windows emulators that now operate in the Linux world.

Dr. Nikolai Bezroukov




Old News ;-)

[Nov 03, 2012] The Paradox of Choice: Why More Is Less, by Barry Schwartz

Amazon.com

Book description

In the spirit of Alvin Toffler’s Future Shock, a social critique of our obsession with choice, and how it contributes to anxiety, dissatisfaction and regret. This paperback includes a new P.S. section with author interviews, insights, features, suggested readings, and more.

Whether we’re buying a pair of jeans, ordering a cup of coffee, selecting a long-distance carrier, applying to college, choosing a doctor, or setting up a 401(k), everyday decisions--both big and small--have become increasingly complex due to the overwhelming abundance of choice with which we are presented.

We assume that more choice means better options and greater satisfaction. But beware of excessive choice: choice overload can make you question the decisions you make before you even make them, it can set you up for unrealistically high expectations, and it can make you blame yourself for any and all failures. In the long run, this can lead to decision-making paralysis, anxiety, and perpetual stress. And, in a culture that tells us that there is no excuse for falling short of perfection when your options are limitless, too much choice can lead to clinical depression.

In The Paradox of Choice, Barry Schwartz explains at what point choice--the hallmark of individual freedom and self-determination that we so cherish--becomes detrimental to our psychological and emotional well-being. In accessible, engaging, and anecdotal prose, Schwartz shows how the dramatic explosion in choice--from the mundane to the profound challenges of balancing career, family, and individual needs--has paradoxically become a problem instead of a solution. Schwartz also shows how our obsession with choice encourages us to seek that which makes us feel worse.

By synthesizing current research in the social sciences, Schwartz makes the counterintuitive case that eliminating choices can greatly reduce the stress, anxiety, and busyness of our lives. He offers eleven practical steps on how to limit choices to a manageable number, have the discipline to focus on the important ones and ignore the rest, and ultimately derive greater satisfaction from the choices you have to make.

[Dec 04, 2011] Simplicity is the core of a good infrastructure by Steve Webb


I’ve seen many infrastructures in my day. I work for a company with a very complicated infrastructure now. They’ve got a dev/stage/prod environment for every product (and they’ve got many of them). Trust is not a word spoken lightly here. There is no ‘trust’ for even sysadmins (I’ve been working here for 7 months now and still don’t have production sudo access). Developers constantly complain about not having the access that they need to do their jobs and there are multiple failures a week that can only be fixed by a small handful of people that know the (very complex) systems in place. Not only that, but in order to save work, they’ve used every cutting-edge piece of software that they can get their hands on (mainly to learn it so they can put it on their resume, I assume), but this causes more complexity that only a handful of people can manage. As a result of this the site uptime is (on a good month) 3 nines at best.

In my last position (pronto.com) I put together an infrastructure that any idiot could maintain. I used unmanaged switches behind a load-balancer/firewall and a few VPNs around to the different sites. It was simple. It had very little complexity, and a new sysadmin could take over in a very short time if I were to be hit by a bus. A single person could run the network and servers and if the documentation was lost, a new sysadmin could figure it out without much trouble.

Over time, I handed off my ownership of many of the Infrastructure components to other people in the operations group and of course, complexity took over. We ended up with a multi-tier network with bunches of VLANs and complexity that could only be understood with charts, documentation and a CCNA. Now the team is 4+ people and if something happens, people run around like chickens with their heads cut off not knowing what to do or who to contact when something goes wrong.

Complexity kills productivity. Security is inversely proportionate to usability. Keep it simple, stupid. These are all rules to live by in my book.

Downtimes:
- Beatport: not unlikely to have 1-2 hours of downtime for the main site per month
- Pronto: several 10-15 minute outages a year
- Pronto (under my supervision): a few seconds a month (mostly human error, no mechanical failure)
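As a back-of-the-envelope check on what those availability figures mean (my own calculation, not the author's), "N nines" of availability allows a downtime budget per 30-day month of:

```shell
# Allowed downtime per 30-day month (43200 minutes) for each "nines"
# level: N nines means an unavailable fraction of 10^-N of total time.
for nines in 2 3 4 5; do
  awk -v n="$nines" 'BEGIN {
    printf "%d nines: %5.1f minutes/month\n", n, 43200 / (10 ^ n)
  }'
done
# "3 nines" works out to about 43 minutes of downtime a month.
```

So the complex shop's "3 nines on a good month" budget is roughly three quarters of an hour of outage monthly, against seconds per month for the simple infrastructure.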

Ok, rant over. :)

[Oct 28, 2010] The Oil Drum Campfire: The Abandonment of Technology

October 18, 2010

This is a guest post by Cameron Leckie, known on The Oil Drum as leckos. Cameron is an officer in the Australian army. He is a member of ASPO Australia and lives in Brisbane with his wife and two young children.

The other day, whilst visiting the in-laws, I was involved in a conversation that in my view opened a window to the future of technology. My mother-in-law, who works in a small retail outlet, was packing her lunch. My wife asked why she was putting an ice block in with her lunch box. The answer was that the owner of the shop had removed the staff refrigerator (and turned off the hot water system) to save a couple of hundred dollars a year. As someone who strongly believes that the most likely outcome for a debt-based economic system approaching a world of declining net energy supplies is economic contraction and lower standards of living (at least materially), this started me thinking about the process by which industrial civilisation may abandon some of the technologies that we currently take for granted.

There are many reasons why we humans adopt new technologies, but in my view the root cause is that the benefit provided by a new technology outweighs its cost. Importantly, costs and benefits can be measured both in financial terms and by other less tangible factors, something that will be important when considering which technologies are abandoned. One reason that we may abandon a technology is the flip side of the reason for its adoption: that the costs come to outweigh the benefits obtained. Thus the fridge was abandoned because the cost of maintaining it outweighed the benefit of keeping lunch cold. Other reasons might be that the technology is no longer supportable (for example, if you cannot access fuel, your car is not going anywhere) or that another technology appears or reappears to replace it.

In this post, I would like to propose a theory by which some, or potentially many, modern technologies could be abandoned. This is an important issue because of its implications for government policy, business investment and of course society as a whole. I will briefly examine the relationship between technology and complexity, detail a theory to explain how technologies might be abandoned and finally propose some questions for discussion.

Technology and complexity

Virtually all technologies increase the complexity of the organisation or society that adopts them. Whilst to the end user a new technology might appear simpler, from a systems perspective complexity has increased. Consider a hunter-gatherer versus a modern consumer procuring food. The hunter-gatherer had to work much harder to obtain and prepare food than the modern consumer reliant upon supermarkets and pre-prepared food. The system required to support our food supply, however, is orders of magnitude more complex than that of a hunter-gatherer. This increased level of complexity comes at a cost in terms of the capital, resources and energy required to maintain it.

For example, maintaining our road networks requires significant financial and human capital, a vast array of equipment, and resources such as sand, gravel, bitumen, steel, aluminium and concrete. This is all supported by the expenditure of energy, such as diesel and electricity. Whilst the global economy has grown, meeting these maintenance costs has been for the most part achievable. It is highly unlikely, however, that society will be able to meet these maintenance costs in a contracting economy. Indeed, this is already occurring in some parts of the world, such as the US, where in some instances financially pressured local governments have been turning bitumen roads into gravel roads to reduce costs.

The theory

So how could a technology be abandoned? Figure 1 summarises the theory that I am proposing. Figure 1 represents a single technology, such as a car. Rather than using a specific number of units (e.g. cars) or other measures (e.g. Vehicle Kilometres Travelled), I have used percentages to represent the level of abandonment, with 100% representing the maximum uptake of a particular technology and 0% being its complete abandonment. Obviously how individual technologies are abandoned will vary considerably both in time and level of abandonment, thus the general case represented in Figure 1 is generic only to assist with explaining the theory.

Figure 1. The abandonment of technology. [figure not reproduced]

General case. In the general case, technology is abandoned in four stages:

Some general comments on the theory. Although this is explained in a linear fashion, the transition between stages is likely to overlap and could even occur concurrently between different regions or nations. Indeed some nations might be increasing the uptake of a technology at the same time another nation is abandoning it.

Also, it is not necessarily a one-way process; it is likely to be dynamic. All that it will take to reverse the process is for the cost-benefit analysis to change direction, assuming that a technology is still supportable. In many industries we are likely to have major overcapacity problems in the years ahead as the global economy contracts. Economic and systemic abandonment, whilst likely to be highly disruptive, may result in some technologies remaining viable for much longer as excess capacity is removed.

Finally, synchronous failure, to use Thomas Homer-Dixon's phrase, could rapidly accelerate this process due to the interdependencies between many technologies. As an example, if the US Air Force's Global Positioning System constellation were to fail, this could render a whole host of technologies that rely upon it immediately useless.

Questions

The key assumption that underpins this theory is that the future path of the global economy will be one of contraction. Taken in this context, detailed below are some questions for discussion on the theory of technological abandonment:

Author's Note

This campfire post is an extension of my thoughts on the future of technology explained in a paper that has recently been published in the Australian Defence Force Journal titled ‘Lasers or Longbows: A paradox of military technology’ (from page 44). The paradox I define in the paper as being ‘The advantage provided by the increased complexity of a military capability increases the vulnerability of that same capability to systemic collapse due to its reliance on complex supply chains.’ Whilst this paper was describing the impact on the military, I believe that it is equally relevant to all technologies. This post expands upon the argument presented in the paper to examine how individual technologies may be abandoned.


 


[-] David Sucher on October 16, 2010 - 3:36pm

I don't believe that it makes sense that we in the USA have been abandoning our roads because it relates to overall peak oil etc., as you say here:

"Indeed this is already occurring in some parts of the world, such as the US, where in some instances financially pressured local governments have been turning bitumen roads into gravel roads to reduce costs."

We simply have not been spending our money wisely. One reason that the USA is not spending money to maintain its infrastructure is simply that the USA has such an idiotic foreign policy and we spend too much on "defense." Consider the many, many billions available if the USA weren't so lacking in astuteness.

 

[-] speculawyer on October 16, 2010 - 3:41pm 

Yeah, transport is really the sector that will change. There is plenty of electricity available. Peak oil is about . . . well, oil. Between coal, natural gas, nuclear power, solar, and wind; there will be plenty of electricity for my lifetime and my child's lifetime.

 

[-] Will Stewart on October 16, 2010 - 7:35pm 

We must keep in mind that there are considerable interdependencies between all sectors of critical infrastructure. With high oil prices comes economic uncertainty, and low budgets for roads, which are expensive to maintain due to high bitumen costs. Lack of road maintenance impacts almost every other sector; these supply chain risks are already being examined:


Critical Infrastructure Interdependencies from the National Infrastructure Simulation and Analysis Center

If we look at the impacts of road disruptions in specific areas over time;

If we take just a peek at some of the details in the Communications sector alone, we'll see how delicate our current (chaotic) system really is;

Just like the climate, we don't know exactly what will happen when the system is perturbed in any number of ways, so something as complex as our critical infrastructure must be modeled;

We are being supported by an n-dimensional house of cards. Pull one lower card out and...

[-] FMagyar on October 17, 2010 - 5:17pm 

Does complexity unwind the same way it was built up, through progressive transformations?

I'd say that simplification of a complex system can only be achieved through progressive transformations. Unless, of course, you have a catastrophic failure of the system, in which case I wouldn't call it a mere 'simplification' process. However, that simplification process will not likely be a straightforward reversal of the process that originally led to the existence of the complex system.

All one has to do is think of the increasing complexity over time that a fertilized egg must undergo so that it eventually becomes, say, a highly accomplished neuroscientist... and compare that process to the cascade of organ failures that leads to his or her death, followed by the decomposition of the body.

WHT's inadvertently coined word 'Entrophy' might be a good way to describe the latter, Entrophy being an amalgamation of the words Entropy and Eutrophication.

If one could obtain data with which to graph the delta 'Entrophy' from the moment of the first cellular disruption through the eventual cascade of organ failures, one should be able to obtain momentary snapshots of the simplification process (decreasing complexity) in action along a chosen timeline.

I believe the same picture can be obtained for any complex system undergoing an 'Entrophic' process, even our complex industrialized civilization, which is IMHO now undergoing its first cellular disruptions due to declining energy sources and diminishing resources. This particular oil candle has burned exceedingly brightly by having been lit at both ends.

[-] Iaato on October 17, 2010 - 5:32pm 

Here's your analogy of the highly accomplished neuro-scientist, pictured below in terms of transformities. But to equate the physical DNA, the cells of his liver with the legacy of his work (books, medical knowledge that then gets passed on, and other cultural DNA) is mixing two different streams of transformity. The physical stream of DNA and cells is what makes us mammals as part of the food chain. The information stream is many exponents greater in terms of transformation, and it is what makes the human culture so amazing. The body dies, but the legacy lives on within the culture, as does the DNA of the neuroscientist, if propagated? The failure of both streams of transformation may occur at drastically different rates. Cultural DNA is arguably just as valuable as genetic DNA, but on a different, shorter time scale, but with massive embodied energy from our FF culture?

[-] Paul Nash on October 17, 2010 - 12:45am 

So far there is no question of abandoning any technology, or of being forced by cost to abandon any real capability.

I'm not quite sure that's true. Many military technologies have been abandoned, like big-gun battleships, sail power, and coal power. In most military situations it is not because of cost that they are abandoned, but because they are rendered obsolete (e.g. coal) or ineffective (battleships).

There is a recent example of cost abandonment: the US Navy used to have some nuclear-powered cruisers, which are more operationally capable than conventional ones, but they were "abandoned" (i.e. not replaced) for cost reasons. The nuke cruiser concept was refloated, and re-abandoned, as recently as this year.

http://www.fas.org/sgp/crs/weapons/RL33946.pdf

I can think of no better example of an increase in capability (i.e. "infinite" range), for a massive increase in complexity. For large ships (carriers) and submarines, it clearly makes sense, for small ones it does not - the cruisers are clearly near the crossover point.

mcain6925 on October 17, 2010 - 3:58am 

...but because they are rendered obsolete (e.g. coal)...

Following the oil price spike at the end of the 1970s, there was some consideration given to the possibility of building new coal-fired steamships. At least one, the SS Energy Independence, a small bulk freighter, was actually built. The thinking was that improvements in boiler design and automation of fuel handling developed by the electric generating industry appeared to make such ships feasible. There was a whole list of practical problems that would have had to be solved, such as contemporary ports' lack of infrastructure for handling coal fuel. At some level, coal is "obsolete" only as long as you have oil.

robert wilson on October 16, 2010 - 4:44pm 

A remarkable book that discusses the complexity of modern medicine is The Checklist Manifesto by Atul Gawande. Complexity is one of the drivers for rising medical expenses. For decades I have been astonished by the progress seen in nuclear medicine, digital radiography, computed tomography, PET and especially MRI and fMRI (magic resonance imaging?). Although I have recently benefited from the service of a super specialist, a certified cardiologist with a sub-specialty in cardiac electrophysiology, I doubt that increasing medical complexity is sustainable. In coming decades society may be forced into acceptance of simpler medical care.

http://gawande.com/the-checklist-manifesto

Alaska_geo on October 17, 2010 - 11:50am 

Gawande's Checklist Manifesto is indeed a most excellent book. It has many insights that go well beyond medicine. I highly recommend it.

daxr on October 17, 2010 - 7:30pm 

Just for fun:

http://www.newsweek.com/photo/2010/08/24/dumb-things-americans-believe.html

The truly scary thing is how easy it seems to be to get people to believe ridiculous crap...in an autocracy it doesn't make much difference how stupid the populace is, but in a democracy everything tends to get dragged down toward the lowest common denominator.

[-] aangel on October 16, 2010 - 6:19pm 

That may be true but I think there is much more to it than that.

1. 7 billion people competing for a shrinking pie (in all areas — energy, metals, fresh water, etc.) is a very different context than the Middle Ages/Dark Ages
2. it's very possible (likely?) that we will see the rollback of The Enlightenment in many areas of the world that have it now.

Even with a broadly available public education system, the best the U.S. has gotten to is this:

Half of all Americans believe they are protected by guardian angels, one-fifth say they've heard God speak to them, one-quarter say they have witnessed miraculous healings, 16 percent say they've received one and 8 percent say they pray in tongues, according to a survey released Thursday by Baylor University.

http://www.washingtontimes.com/news/2008/sep/19/half-of-americans-believ...

It's not just technology...it's the whole of society that will regress.

Nick on October 18, 2010 - 12:41pm 

That's something that has amazed me for years.

People like to believe that the rich/elite gather in conspiracies which know everything. In fact, the rich/elite believe their own propaganda, and often end up far stupider than can be believed.

Merrill on October 16, 2010 - 4:24pm 

Most examples of "abandonment" are really "replacements". New technologies which are better, easier, and cheaper replace old ones (though the "cheaper" part doesn't matter much for military technology).

Audio recording has gone through mechanical recording on Edison cylinders and 78, 45 and 33-1/3 rpm discs; magnetic recording on wire, 1/4" reel-to-reel, 8-track cartridges and Philips cassettes; optical recording on CDs; and media-independent computer files, such as MP3s. Of these, only the 33-1/3 vinyl records, CDs, and computer files survive in any volume.

However, the audio recording technologies are all replacements. As was the replacement of black powder by smokeless powder, matchlocks by flintlocks by percussion caps by cartridges, wooden ships by iron hulls by steel hulls, etc. It wasn't that the supply of vinyl or saltpeter or flint or wood ran out.

Abandonment of a technology because a factor needed for its production or operation becomes unavailable is pretty rare.

I think that in some cases irrigated agriculture has been abandoned when water supplies became too meager to sustain it. However, in most cases irrigation has been abandoned when the soil becomes too alkaline or salty for further irrigation, which is a different type of failure.

Carving of ivory objects or objects made from rhinoceros horn might be an example. I can't think of a mineral or a plant substance that has become totally unavailable, but several animals have become endangered or extinct. Market hunting of passenger pigeons and other game birds is no longer done. Beaver hat making has been abandoned.

 

[-] half full on October 16, 2010 - 6:36pm 

Yes. Replacement, not abandonment, will be the significant trend. Those technologies most directly associated with coal, oil, and gas will be replaced with those for capture, storage, transformation, and utilization of solar energy. Our current technology materials mix is coloured by our use of carbon as a metallurgical reductant and petrochemicals as polymer feedstocks. Solar electricity used in electro-metallurgy will result in greater use of metals like magnesium, titanium, manganese and aluminum (which are hard to reduce with carbon), and biomass feedstocks will be useful for a wider variety of 'designer' polymers.

There is no reason why secondary industries like manufacture of refrigerators should be affected beyond materials choices.

If I were an Australian I would be mindful of the fact that enough solar energy could be captured in the outback to power most of the world. That resource should be identified as something of strategic importance and defended. Solar technologies are not ready yet, but will be long before the last of the fossil fuels are gone.

 

pancake on October 17, 2010 - 12:25am

Most examples of "abandonment" are really "replacements". New technologies which are better, easier, and cheaper replace old ones (though the "cheaper" part doesn't matter much for military technology).

What if the replacement technology does not make your life easier? Does that count as an "abandonment" or a "replacement"?
Old-fashioned washing machine

 

Martian on October 17, 2010 - 1:46pm

Sorry, but your example is not quite right. Look at what is there in the video.

Plastic Bucket?
Vitreous China sink?
Modern detergents?
Michelle Obama? (just kidding)
Where did the water come from?
And many more...

This is a very poorly designed use of a very basic technology. Kinda like pushing a rope. Reminds me of a bicycle with square wheels.......

Most technology today, and for the last 50 years, is nothing more than mental masturbation.
Technology for technology's sake....a Child's Merry-go-round to nothing.

 

jokuhl on October 17, 2010 - 3:23pm

I'm wondering what you're referring to with recent tech..

There is a lot of junk, but the core advancements they are built on, microprocessors for example, have proven to be highly adaptable and useful in real ways as well.

I remember doing corporate videos 15 years back, and noting all the different applications I was seeing 'old' IBM AT's and PC's still being used for, from CAD-CAM app's to Making T-shirts, to Trading, Bookkeeping, Desktop Publishing, DataLogging, etc.. with programs and interfaces that could largely be installed from the Screenprinter's machine onto the Trader's machine and be expected to run just fine. Even if the products they were creating were unnecessary or built on the 'Overconsumption Model', that doesn't mean that the flexibility of this technology was at fault.. and Now, I've got a handheld from HP in 1995 that can run any program that an old IBM 8086 or 286 would run, powered by a pair of AA batteries. There are all sorts of useful and meaningful ways such tech could still be put to use for localized groups.

 

Nick on October 18, 2010 - 1:13pm

all the different applications I was seeing 'old' IBM AT's and PC's still being used for

The most striking example for me was the space shuttle, powered by the original PC-XT 8086 processor!

 

jokuhl on October 19, 2010 - 4:38am

Which brings to mind this old internet gem.. of the dependence of Modern Tech on the Ancients.

THE SPACE SHUTTLE AND ROMAN CHARIOTS

Does the statement, "We've always done it like that" ring any bells?

The US standard railroad gauge (distance between the rails) is 4 feet, 8.5 inches. That's an exceedingly odd number. Why was that gauge used?

Because that's the way they built them in England, and English expatriates built the US Railroads.

Why did the English build them like that?

Because the first rail lines were built by the same people who built the pre-railroad tramways, and that's the gauge they used.

Why did "they" use that gauge then?

Because the people who built the tramways used the same jigs and tools that they used for building wagons, which used that wheel spacing.

Okay! Why did the wagons have that particular odd wheel spacing?

Well, if they tried to use any other spacing, the wagon wheels would break on some of the old, long distance roads in England, because that's the spacing of the wheel ruts.

So who built those old rutted roads?

Imperial Rome built the first long distance roads in Europe (and England) for their legions. The roads have been used ever since.

And the ruts in the roads?

Roman war chariots formed the initial ruts, which everyone else had to match for fear of destroying their wagon wheels. Since the chariots were made for Imperial Rome, they were all alike in the matter of wheel spacing.

The United States standard railroad gauge of 4 feet, 8.5 inches is derived from the original specifications for an Imperial Roman war chariot. And bureaucracies live forever.

So the next time you are handed a specification and wonder what horse's ass came up with it, you may be exactly right, because the Imperial Roman army chariots were made just wide enough to accommodate the back ends of two war horses!

Now, the twist to the story

When you see a Space Shuttle sitting on its launch pad, there are two big booster rockets attached to the sides of the main fuel tank. These are solid rocket boosters, or SRBs.
The SRBs are made by Thiokol at their factory in Utah. The engineers who designed the SRBs would have preferred to make them a bit fatter, but the SRBs had to be shipped by train from the factory to the launch site.
The railroad line from the factory happens to run through a tunnel in the mountains.
The SRBs had to fit through that tunnel.

The tunnel is slightly wider than the railroad track, and the railroad track, as you now know, is about as wide as two horses' behinds.

So, a major Space Shuttle design feature of what is arguably the world's most advanced transportation system was determined over two thousand years ago by the width of a horse's butt.

SNOPES demurs on this one a bit, but gives it some credit here and there.
http://www.snopes.com/history/american/gauge.asp

 joule on October 16, 2010 - 4:25pm 

Leckos -

I think your theory is quite reasonable at least as a broad generalization. However, technology is a very nebulous subject, and the very term 'technology' often connotes different things to different people. Even the dictionary offers several different definitions of technology, none of which are terribly helpful, to wit: i) the study, development, and application of devices, machines, and techniques for manufacturing and productive processes, ii) a method or methodology that applies technical knowledge or tools, iii) the sum of a society's or culture's practical knowledge, especially with reference to its material culture.

Note that none of these definitions talk about specific products as being 'technology', and I think that's where some people get into trouble when using the term technology too loosely. If I drive a monstrous Ford pick-up truck while you drive a small, fuel-efficient Smart Car, we are both using automotive technology but in the form of vastly different products. However, if I go to a bicycle, then that is a switch to a different technology. If the modern Australian Air Force flies jets while the WW II RAF flew piston-engined planes, they were both using aviation technology, but vastly different propulsion technologies. One often encounters this conflating of technology with products in technical advertising. A company with a new line of water filters will boast about their new 'technology' when in actuality it is nothing more than a new product based on fundamental filtration technology, and thus really just a different embodiment of an existing technology.

My purpose here is not semantic nit-picking, but rather to point out that technology and products are not the same thing. And I think much of what you call 'abandonment of technology' is often just a case of restricting or limiting the application of products based on that technology. In the example of your mother-in-law's boss who removed the employee's refrigerator, he was not abandoning refrigeration technology, but rather was stingily restricting access to a particular product based on that technology. I'm sure the boss still maintains a nice refrigerator in his own home. But if we replace a coal-fired power plant with a wind farm or solar array, then that is clearly an abandonment of one technology and the taking up of another technology.

Then we have the problem of complexity, which can also be somewhat in the eye of the beholder, as illustrated by the following question: which is more complex, a classic steam locomotive or a modern diesel-electric locomotive? The answer is not as obvious as it looks. While the steam locomotive uses lower-tech coal instead of higher-tech petroleum (and the vast infrastructure associated with such), it is a plumbing nightmare, far more complex in operation, and requires far more complex maintenance. An external combustion engine is inherently more complex and inefficient than an internal combustion engine, even though it came first. So, one might argue that this forward step in technology, i.e. going from steam to diesel, actually reduced rather than increased complexity. Ditto for going from vacuum tube to solid state electronics.

And lastly, many people tend to view individual technologies in a vacuum and don't fully appreciate the extent to which one technology is dependent upon many others, with economics being the main driving force. If I can no longer drive my car because a whole chain of parts suppliers have gone bankrupt and I can no longer get replacement parts, it's not that I have abandoned automotive technology, but rather that a systemic failure of infrastructure has occurred.

I prefer to view what you're talking about as products being abandoned due to economic and infrastructure problems, rather than abandonment of technologies, per se. Regardless of which way one wants to phrase it, you are on the right track, and the technological landscape of the future will not necessarily be more advanced than that of the present, as what has been viewed as 'progress' often turns out to not be progress after all. For things to work, the technological mix must be appropriate and fit the set of circumstances under which we live. However, I fear this will never happen in an ordered and painless way, and we will largely keep doing what we're doing until we can't.

 

RickM on October 16, 2010 - 5:49pm

I agree, Joule:
Cam is on the right track, and we humans tend to keep doing what we're doing until we can't.
That truism applies not only to beneficial behavioural patterns (I automatically get out of bed, shower and go to work each morning, all of which has benefits) but also to all sorts of vices (smoking, drinking, junk food, wasting time on the internet, even thievery... people often must be forced in some way before they will stop).

Many Ontario farms have a spot in the trees where a half-century ago, the farmer dragged off his horse-drawn equipment: seed drills, wagons, two-furrow riding plows, etc. They did so, often reluctantly, trusting that gasoline (and eventually diesel) tractors would prove superior to live horsepower. Many farmers bought a tractor but still kept the horses "just in case," and sometimes that paid off.

But I think your distinction between abandonment for economic & infrastructure reasons (vs abandoning the technology itself) sort of parallels the debate over what will prove to be the "cause" of peak oil: what 'causes' it will matter far less than the fact that it has occurred, and the result may be pretty much the same.

I do agree that a person may not abandon the technology on some emotional level (a power failure in no way diminishes my appreciation of my computer, even though it's completely useless for a while) but we may be forced into it because of infrastructure problems (what if the grid became unstable and we had occasional voltage surges which damaged our expensive electronics?). Or financial hardship may force us to give up something which we certainly have no desire to abandon (as you correctly pointed out).

In any event the infrastructure really can't be meaningfully isolated from the technology, since the technology can't really operate effectively without it, just as the infra would not exist were it not for the technology.

But the fear here is the vulnerability which is inherent in the interconnectedness and the complexity of both our technologies and their attendant infrastructures, including the economic and fiscal systems which support them.

And that is, I think, the central message of the recent Bundeswehr report on peak oil: oil is so fundamental to our mobility, our supply chains for food and everything else, our jobs and the tax base, military capabilities, etc... anything which interferes with our large-scale access to affordable oil could very quickly put us outside that narrow band of economic and social stability.
The Bundeswehr report is unprecedented in the publicly-available military literature for that reason: instead of focusing on the usual set of concerns, most of which are external (choke-points, NOCs, resource wars, geopolitics, Chindian demand, etc) this report has flagged the potential for economic, social and even technological unraveling on the home front.

As the Amish continue to prove, there is great resiliency in technologies and an infrastructure which are under one's own direct control. They are the classic example of a society which thinks long & hard before it decides to change its technologies or its infrastructure. (Some of us make such changes in an eye-blink, following the latest commercial, with no real thought.)

The Amish (and many other farmers) just have to pray that the rest of society will leave them alone when the flashy, consumptive lifestyle which many of us have chosen runs into trouble, which probably won't take too long....

vertigo on October 16, 2010 - 8:52pm

Just a note re. the Amish saving us. I come from a Mennonite family that used buggies long after everyone else was in cars. That society was dependent on having large families to provide the farm labor; my grandmother was one of ten. Overpopulation was prevented by exporting the excess into the surrounding communities, so my grandmother stopped that lifestyle and became modernized, along with many of her siblings and most of their descendants. Without high birth rates, the Amish/Mennonite system is not going to work too well.

geek7 on October 16, 2010 - 6:15pm 

This redirection away from the word technology and towards the word product seems a good idea to me. Actually, I had been thinking of posting some thoughts about renaming the target of discussion as modern behaviors which will be abandoned. And I'm inclined to believe that still has merit.

When I started feeling uncomfortable with the abandonment of technology as a topic, the behavior that first came to mind, for me, was the phenomenon of world leaders gathering in person at the United Nations building in New York on an annual basis. Although the cost of this trip is a trivial burden on the USA for our President, there are other nations for which it must be a difficult burden, or a source of corrupt influence on the leadership by foreign business interests. This is an example of a topic which doesn't fit well with either technology or product, but belongs somewhere in a discussion of what the future might bring. But for now, let's concentrate on technology.

For a technology to exist, there must be a sub-set of the population who count themselves as being experts in that technology. These people must maintain their skills and must recruit and train their replacements before they retire (or die). So every technology has a base level of cost even if it is very little used. And every technology also seems to depend on other technologies to some extent. So for a technology to survive, its supplier technologies must also survive. There was a time when an economist (Leontief) tried to create a diagram of how all the economic activity in the US was linked into a gigantic whole. (Most economic activity is the actual implementation of a few inter-related technologies.) If we had that linkage map, we might be able to read off it what technologies would live, or die, as groups. But we don't have that map because that work was itself a technology that has not survived.
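(As an aside, the Leontief input-output framework mentioned above is simple enough to sketch in a few lines. In the standard formulation, if A holds the inter-industry requirements and d is final demand, total output x solves x = Ax + d. The two-sector matrix and demand figures below are purely illustrative, not real US data.)

```python
import numpy as np

# Hypothetical 2-sector economy: A[i][j] is how much of sector i's
# output is consumed to produce one unit of sector j's output.
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
d = np.array([100.0, 200.0])  # final (end-user) demand per sector

# Leontief solution: x = (I - A)^-1 d is the gross output each sector
# must produce so intermediate use plus final demand is exactly met.
x = np.linalg.solve(np.eye(2) - A, d)

# Sanity check: gross output = intermediate use + final demand.
assert np.allclose(A @ x + d, x)
```

A full linkage map of this kind, as the comment notes, is exactly what would let one read off which technologies live or die as groups.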

Paul Nash on October 17, 2010 - 1:24am 

I have to disagree absolutely that the steam engine, or steam locomotive, is more complex than a diesel one.

The number of moving parts, and precision machined parts, is much smaller. (an excellent photo comparison at http://www.cyclonepower.com/comparison.html)

The number of metallurgical treatments involved is much smaller. The steam engine can be operated without any electrical system at all (as can a diesel engine, but not a diesel-electric). Steam engines do not de-rate with altitude. They can, suitably equipped, run on any fuel.
The "plumbing nightmare" is not much worse than the plumbing nightmare in a diesel - look at a map of lubricating oil flows in a diesel engine.
And, to cap it off, look at the *very* complex NOx emission control systems being fitted to modern diesels - these are not required with steam engines, as the combustion takes place at atmospheric pressure and NOx is not formed. The nature of continuous combustion also minimises CO and HC emissions.

Granted that the diesel IS more thermodynamically efficient, especially given the non-condensing nature of railroad steam engines, but I would argue there is a considerable increase in system complexity to get this. A diesel may be simpler for the operator, but it is a perfect example of the supply line issue - no diesel, no go. A suitably equipped steam engine can use almost any liquid, gas or solid fuel, and solid fuel can be, if needed, obtained in the field.

I think where the mismatch comes from is that steam locomotives are old, and were not developed over the last 80 years, so they reflect knowledge as of the 1930s. Built with today's knowledge, many of the problems disappear - here is an example of a company in Switzerland making new steam locomotives (fired by oil):

http://www.dlm-ag.ch/attachments/Typenblatt_99.10xx_1d1_en.pdf

I would argue the steam engine (in modern form) is not more complex, just less efficient. It is then a case of trading off complexity for efficiency.

joule on October 17, 2010 - 9:34am 

Paul Nash -

As I said, a good deal of complexity is in the eye of the beholder. I was largely talking about a steam locomotive versus a diesel-electric locomotive, not merely a generic steam engine versus a diesel engine. If you've ever examined detailed construction drawings of a 1940s-vintage steam locomotive, it should be evident that it has a very large number of parts, many of which are large and not easy to manufacture.

Both a steam engine and a diesel have cylinders and pistons, but a steam engine has the added complexity (and major headache) of a boiler. Plus, the valve train on one of these steam locomotives is far more complex than the valve and camshaft system on a diesel.

Much of the complexity associated with steam locomotives is separate from the physical object. While large and robust looking, a steam locomotive was a rather temperamental piece of machinery and required constant maintenance and repair. The main weak point was the boiler, which was prone to scale build-up from minerals in the feed water, required constant cleaning, and needed frequent replacement of boiler tubes. If in constant service, the locomotives often had to keep a banked fire in their fire box overnight so they could start up the next day without a long wait to get up steam. Railroads needed coaling and water tanks positioned along the route. Plus there was ash handling and disposal. All this was very labor-intensive, but back in those days labor was cheap.

Now with a diesel locomotive, you just fill her up like a car, and off you go. While the diesel engine is directly coupled to a generator and electric motor, no external electrical power is required. Of course, the electrical controls on a modern diesel do represent additional complexity.

Now, if you've ever examined the detailed drawings of a steam automobile .... now there's a plumbing and maintenance nightmare! No wonder it turned out to be a technological dead end.

One other thing, as a final note: steam locomotives were not really mass produced in the current sense of the word, but were turned out in relatively small batches, sort of like military aircraft. The manufacture of diesel engines on the other hand closely parallels that of the automotive industry, with all the associated economies of scale that entails.

But I guess these comparisons can only be taken so far, as after a while it's like asking: which is more complex, an apple or an orange?

Recision on October 16, 2010 - 7:26pm 

Always and everywhere, the adoption or abandonment of technology is the cost/benefit equation. New technology/products replaces old technology/products because it is more cost effective/efficient (per the perceptions of the user). Technology is a tool for obtaining a result. When one type of technology becomes price prohibitive (or uncompetitive), it will be replaced with an alternative.
Technology per-se is essentially an intellectual understanding of our physical world and an ability to manipulate it. From that, all you need is some very basic tools in order to build more advanced tools in order to fabricate the ultimate tool or product you want.
While one "technology" or another may be adopted or abandoned due to the availability/cost of resources, that technology is really just a technique.
The real question is: have the techniques we have used to date to prosper outrun our resource base, due to an aberrantly high EROI we won't ever see again?
How much will we need to contract (if at all) over the next 10/50/100 years?

daxr on October 16, 2010 - 8:25pm 

Always and everywhere, the adoption or abandonment of technology is the cost/benefit equation

Only if you include "status" as one of the perceived benefits. Much of the electronics industry (and the auto industry as well, come to think of it) is devoted to developing status-conferring devices, and then selling them at a premium by emphasizing how cool you will be...

ebHubbleTelescope on October 16, 2010 - 8:12pm 

I agree completely. This is something the Logistic function is better suited for as it brings about the concept of carrying capacity. The adoption of new technology saturates at some level (possibly below 100%) which is related to the maximum carrying capacity. (note that this is not anywhere near the same as using logistic for oil depletion, which has a completely different derivation)
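(For anyone unfamiliar with it, the logistic adoption curve being described can be written down directly; the parameter values in this sketch are illustrative, not fitted to anything.)

```python
import math

def logistic_adoption(t, K=0.8, r=0.5, t0=10.0):
    """Fraction of the population using a technology at time t.

    K  -- saturation level (the 'carrying capacity', possibly below 100%)
    r  -- adoption rate
    t0 -- midpoint: the time of fastest adoption
    """
    return K / (1.0 + math.exp(-r * (t - t0)))

# Adoption starts near zero, passes K/2 at t0, and saturates at K.
```

Replacing t - t0 with t0 - t mirrors the curve, giving the die-off of the technology being displaced, which is (I take it) the reflection being referred to.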

If things die off, it could simply be replacement with a new technology, which has the logistic shape. And that reflection around the y-axis is exactly what Merrill is referring to. Brilliant!

So the question is whether something will die off without something better taking its place.

It's all so tricky to model in any predictive sense, in that we have no idea what the saturation level will be for any new product. Interesting to think about, though.

 PaulS on October 16, 2010 - 8:37pm 

It occurs to me that a good chunk of what's being discussed is actually cost-shifting rather than actual abandonment. The employee with the lunch box gets to waste time futzing with the block of ice (and with the meltwater) so that the employer can save a few cents a day per employee. The time-cost of futzing with the ice doesn't appear on the employer's books, so it's free to the employer, and no one ever gets to discover whether the exercise is actually cost-effective overall, or not.

Something similar will happen with the gravel roads. The costs of cracked windshields, of extra accidents from the rougher and more slippery surface, and of extra wear-and-tear on everything and anything where the clouds of gritty dust settle, don't appear on the county books. Those costs are free to the county, allowing it to squander the money on politicians' pet projects and the like. And again, those costs are never counted, so we never get to discover whether the exercise is cost-effective overall, or not.

Similarly with trying to force people onto slow, tardy, unreliable city buses. The immense time-cost is nowhere tallied, nor the psychological cost of people seeing even less of their families, so according to the reports and assessments that will all be free, with only the oil "savings" being counted.

Somehow I expect to see a lot more of this sort of arrant nonsense if and as things continue to get tougher. It seems like a shell game in much the same spirit as "Don't tax you, don’t tax me, tax the fellow behind that tree."

Leckos,

Great article, and your one in the Defence Force Journal makes for good reading too. I am of the view that most abandonment of technologies to date has been for reasons of obsolescence and/or cost and/or environmental consequences (e.g. lead pipes, PCBs, etc.). It is hard to think of any that stem directly from the unavailability of a material, although reduced availability usually manifests itself as increased cost (as we are seeing with oil).

For this reason, I think modern society struggles with the concept of having to give something up (i.e. oil based personal transport) not because a better option is available, but because the current option is no longer available (at an affordable cost). The thought of having to go back to something, such as the gravel roads, is anathema to most people, but sometimes there are advantages.

If you are not familiar with it, an excellent collection of such things is at the Low Tech Magazine

In the case of Australia (writing here as an expat Aussie living in Canada), part of the supply line problem is that Australia is often at the end of it, so I can thoroughly understand why the Australian military is keeping an eye on this. In a regional conflict situation this would be exacerbated even further.

My personal favourite example of a successful real-world decision to specifically use an obsolete technology, in preference to the state of the art, is this device:

De Havilland Mosquito, 1942

The designers of this, in the late '30s, knew that aluminium would be in short supply in a war situation, as would machinists, and their machines. The decision to build it out of plywood was initially laughed at by the RAF, but De Havilland's reasoning was that wood would be in more plentiful supply (and is cheaper), and can easily be worked by carpenters, furniture makers etc., with simple equipment. This represents a perfect example of a conscious decision to decrease complexity.

And, of course, it proved itself to be faster and more capable than any other aircraft then in the skies.

So, there are cases, if the designers really look hard, where you can get the win-win of both increased capability and decreased complexity. This obviously works for the military, but clearly is a concept that most industries that sell to consumers (e.g. carmakers) have rejected.

Leckos on October 17, 2010 - 6:52am

Thanks for your comments Paul.

For this reason, I think modern society struggles with the concept of having to give something up (i.e. oil based personal transport) not because a better option is available, but because the current option is no longer available (at an affordable cost). The thought of having to go back to something, such as the gravel roads, is anathema to most people, but sometimes there are advantages.

That is an excellent description of what I was trying to explain in the article, but I don't think that I did as well as I could.

The Mosquito is an excellent example. Others could be the British Sten and Australian Owen submachine guns developed during WWII, which were crude but effective weapons.

The military (the situation would be no different for industry as well) has a very difficult balancing act to perform. All modern militaries (state-based anyway) are on a similar path of increasing technological development. There are obvious reasons for this (measure, counter-measure and so on). This is essentially the basis of the paradox described in the journal article. But at some point there will need to be a transition to simpler, more robust technologies. The issue is that the military that does this first has the potential to be at a significant disadvantage for as long as other militaries are still capable of maintaining their 'advanced' technological advantage.

The asymmetrical approach, I guess, is one way around this, as has been demonstrated by successful insurgencies by technologically inferior forces.

Merrill on October 17, 2010 - 12:26pm 

Suppose that NATO were to fight in Afghanistan without aircraft, drones, helicopters, armored vehicles, night vision goggles, GPS receivers, etc.

In other words, limit the forces to light infantry in trucks with rifles, grenade launchers, machine guns, mortars, light artillery, binoculars, maps, and other low tech supplies.

I'd think that a force of about 500,000 could eliminate the insurgency in about 6 months and take casualties of no more than 50,000.

Most of the military high-tech is oriented towards:
- fighting the war with very low casualty rates because otherwise political support will end, and
- proving out technologies and tactics in case a war with a similarly high-tech adversary occurs.

The problem with military high technology is that there is no price constraint. As a result, it may evolve to the point where political support ends because military expenditures are damaging to the nation's economy, rather than acting as a Keynesian stimulus to the economy and as a source of pork for elected politicians.

Paul Nash on October 17, 2010 - 1:51pm

Yep, the low tech magazine is one of my favourite places - the old-school techniques always had a reason why they were what they were, and we tend to forget them rapidly when they are obsolete, but many (such as the wooden pipes) still have their niche applications.

With the Mosquito, yes, wooden planes are what DH did, and that alone is one reason why they were not building many for the air force. Also correct about the concerns for tropical use - that is one of the reasons why wood became obsolete.

But, part of their pitch was the (likely) scarcity of aluminium and machinists, and once the war had started, it also became obvious there were plenty of skilled woodworkers available.
It was also obvious that if Britain was not defended successfully, the tropics would not matter.

The key thing is, the wooden construction was considered, by industry and government, to be obsolete, and had been discarded by the military. DH knew the many advantages, simplicity of construction being one of them, and the Mosquito proved them right - brilliantly.
A great combination of modern engineering applied to old school methods/materials.

Grouch on October 17, 2010 - 10:53am 

"For example, maintaining mechanical items is likely to be more achievable than sophisticated electronic items."

I disagree. Microprocessors (specifically embedded microprocessors and microcontrollers) are cheap, non-perishable, lightweight, and mind-bogglingly useful.

My argument rests on these pillars:

After a couple of years of lurking on this site, I see Peak Oil unfolding as a "Great Depression that never ends" type scenario. If you think this scenario is unlikely, my comments will be of limited interest.

Microprocessors/microcontrollers are cheap: only a couple of dollars. In a "never-ending great-depression" type of peak-oil scenario

Paul Nash on October 17, 2010 - 3:08pm 

Grouch, Welcome to TOD!

I view microprocessors as a perfect example of moving the complexity upstream, away from the user. And they are great, as long as you still have access to them, and someone knows how to build and program them.
That knowledge, however, becomes increasingly concentrated, and those who make the chips decide which ones they will continue to make, based on their own reasons, which can often lead to good chip systems becoming obsolete/unreproducible, even though they work fine. The same cannot be said of an engine crank - the ability to make that is everywhere.

Case in point - I know a fellow who makes control systems for stand-alone micro hydro systems. You need to have a governor to dump excess load, and give certain loads priority when there is excess demand. He developed such a system based on old-style PLCs - it works brilliantly. Except that said PLCs went out of production in 1999. He bought several hundred from the last batch, but was told he would need to order 10,000 before they would even consider doing another production run.

There are other ways to achieve the same end - you can, of course, program a computer to do that, but that is a much more complex system. The remote nature of off-grid hydro systems demands they be simple and reliable - dialing up for help is not always an option.

So good things can become obsolete not because something better replaces them, but because someone doesn't want to make them anymore. It leads to standardisation, but sometimes is a barrier to innovation. Overall, you are right in that we are better off, but there are of course cases where we are not.

jokuhl on October 17, 2010 - 3:36pm

Don't know about that example, Paul.

There are so many consumer-level microprocessors and PICs available now, plus the countless hacks available on the web for using other consumer electronics as control systems (Pocket PCs, etc.). It seems that the genie has left the factory, even tho' there is that equal-and-opposite force of operating systems and hardware that has, indeed, grown more opaque and untouchable. Still, look at how the open-source community has been thriving, with numerous variants of Linux and other OSs and apps.

There is also a broad offering of small-shop custom motherboards, controllers and operating systems. Sure, the fabs can stop making a chip, but which is more important to them, control or sales? As soon as one chooses "control", a bunch of competing upstarts seem to show up with visions of sugar daddies dancing in their eyes instead..

Paul Nash on October 17, 2010 - 5:40pm

Well, that was as he told it to me five years ago. There are probably suitable alternatives available now, but he was adamant there weren't then.

I think Linux and other OS (open-source) stuff is actually the best thing since sliced bread, because it reverses the concentration and control of the information. Not great for a controlling corporation, but great for the development of the stuff, as long as the signal-to-noise ratio is good enough.

With OS stuff you have less chance of "the old man with the secret died", as, by definition, it is not secret.

I think that will turn out to be the single largest benefit enabled by the internet.

This cheap price is also made possible by the economies of scale of a large factory, and a large factory or three can be enough to supply the entire world demand.

Microprocessors/microcontrollers are non-perishable: electronics feel perishable because someone might invent a better device during a particularly long transit time. However, in real life, they can sit on the shelf for decades and be just as useful as the day they were built. In a "great depression that never ends" scenario, the actual utility of an electronic device (rather than how it performs relative to the one next to it on the shelf) will be the most important factor.

Lightweight: a chip the size of my thumbnail doesn't weigh much. Being lightweight and non-perishable makes it easy to ship, just like spices.

Mind-bogglingly useful: they can also make a mechanical system simpler while maintaining complex behavior. For instance, a microcontroller can monitor a pyrometer, thermometer, rain gauge, and a number of other inputs to open and close the windows of a building or greenhouse, reducing energy consumption when the owner is away. Note that I'm not talking about iPods here. This is a different kind of electronics - the parent of the electronics under the hood of your car, or that runs your microwave.
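The greenhouse controller described above boils down to a sensor-threshold-actuator loop. Here is a minimal sketch of that decision logic as a shell script (a real microcontroller would run equivalent C); the sensor "file" and the 28-degree threshold are made-up stand-ins:

```shell
# Hypothetical window controller: read a temperature reading and decide
# whether to open or close the greenhouse window. The sensor "file" here
# is a stand-in for a real ADC/thermometer register.
sensor=$(mktemp)
echo 31 > "$sensor"          # pretend the thermometer reads 31 degrees

threshold=28                 # open the window above this temperature
temp=$(cat "$sensor")

if [ "$temp" -gt "$threshold" ]; then
  action=open
else
  action=close
fi
echo "window: $action"       # prints "window: open"
```

A real controller would also debounce readings and combine several inputs (rain gauge, wind), but the core decision logic stays this simple.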

So, I think we'll be shipping microprocessors around the world, even in most Peak Oil scenarios. And I think that they will be useful. My thought is that our technology will change quite a bit (and become much more user-serviceable) to reflect our new needs, but electromechanical systems are going to stay with us. The kids in any community who would have taken up HAM radio in the past can surely learn to program a microcontroller.

[Sep 06, 2010] Programming Things I Wish I Knew Earlier

"Raw intellect ain't always all it's cracked up to be, advises Ted Dziuba in his introduction to Programming Things I Wish I Knew Earlier, so don't be too stubborn to learn the things that can save you from the headaches of over-engineering. Here's some sample how-to-avoid-over-complicating-things advice: 'If Linux can do it, you shouldn't. Don't use Hadoop MapReduce until you have a solid reason why xargs won't solve your problem. Don't implement your own lockservice when Linux's advisory file locking works just fine. Don't do image processing work with PIL unless you have proven that command-line ImageMagick won't do the job. Modern Linux distributions are capable of a lot, and most hard problems are already solved for you. You just need to know where to look.' Any cautionary tips you'd like to share from your own experience?"
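Dziuba's xargs point can be made concrete. The sketch below runs a MapReduce-shaped job (grep many files in parallel, then aggregate) with nothing but standard Unix tools; the log files and the search term are invented for the example:

```shell
# "Map": grep each log file in parallel (up to 4 concurrent jobs via -P).
# "Reduce": count the matching lines with wc. No cluster required.
workdir=$(mktemp -d)
printf 'error\nok\nerror\n' > "$workdir/a.log"
printf 'ok\nerror\n'        > "$workdir/b.log"

find "$workdir" -name '*.log' | xargs -P 4 -n 1 grep -h 'error' | wc -l
```

Swap the final wc -l for sort | uniq -c and you have word-count, the canonical MapReduce demo, in one pipeline.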

msobkow

The truth is that the "hard" way of doing things is often more fun, because you have the challenge of learning a new tool or API. Plus sometimes it's actually easier in the long run because you've engineered a solution for the outer bounds conditions of scalability, so if your application takes off, it can handle the load.

I guess the real issue is that you have to engineer a "good enough" solution rather than a "worst case" solution.

petes_PoV:

You might learn something from doing things the hard way, but all you'll achieve is a version #1. As we all know (or will learn) version #1 of pretty much everything should be thrown away and should NEVER see the light of a production server. However, timescales being what they are as soon as an application gets close to functional it gets snatched away and put live - no matter how ugly it is. After that, all you ever have time for is to patch the worst parts. Doing a complete rewrite from the ground up, to do it right, is a luxury few of us experience.

melted:

Do not make things super-modular and generic unless they 100% have to be. In 99.9% of the projects no one, including yourself, will use your stupid dependency injection, and logging / access control can be done just fine without AOP. Don't layer patterns where there's no need. Aim for the simplest possible design that will work. Don't overemphasize extensibility and flexibility, unless you KNOW you will need it, follow the YAGNI principle (you ain't gonna need it).

[Dec 22, 2009] The Real Con: More Complex Is Better, by Michael M

Dec 22, 2009 | oftwominds.com

Regarding The World Is Too Complex (guest essay by Subuddh Parekh): I was at first positively surprised by the headline, but found the essay doesn't quite cut it. Let me explain:

Things are getting more and more complex in today's world. That is a statement few would doubt.

However the question is why. Is it necessary? Is it always necessary? How many people can keep up with changes and understand them? How can democracy / any form of participation still work if the majority doesn't even understand the broad overview anymore?

The real con of today is "more complex is better."

I am not proposing that everybody is capable of, or should be allowed to, fly a jet airliner or operate a nuclear power plant. And research & development, as well as advanced manufacturing, are definitely becoming more complex, often for good reasons.

But the important or necessary parts of everyday life like shopping, household budgeting, insurance, taxes, and even retirement investments need to be handled adequately by the broad majority of people without weeks of special training or consulting a real or self-declared expert.

Otherwise the highly intelligent have simply disenfranchised the masses, and from there exploitation is just a tiny step away.

For example, look at the excesses from today's product descriptions on corporate websites to credit card contracts: vital information is intentionally hidden behind dozens of marketing pages or legal blather, or completely withheld, making it an especially tedious or impossible task to try to compare competitors.

Jim Quinn writes that Huxley foresaw that approach (BRAVE NEW WORLD - 2009). While I sure have read Orwell--and by the way believe the most important point in his book "1984" is not total surveillance but Newspeak--it seems I should also read Huxley's "Brave New World."

The first time I noticed that approach myself was, funnily enough, with Microsoft, the self-declared fighter for the "easy to use" personal computer since the advent of the Windows operating system. I worked as a Senior Integrator on small-to-medium enterprise solutions. Having started with Novell 3.12 and played around with Windows NT 3.51 during some low-workload time, I suddenly had so much new Microsoft stuff thrown at me all the time -- a new Windows version, a new Office version, a new NT Server, the Windows Domain concept, a new Exchange Server -- that I never found time to take a closer look at competitors' offerings like NDS (Novell Directory Services), introduced with the Novell 4.x release.

After 18 months I reflected on that and realized I needed to start ignoring some Microsoft offerings to avoid total capture. I would say this has now become a common approach for most IT product vendors.

Another special form manifests itself in the "idiot tax":

(1) Here is the bottom line:

Laibson and Gabaix's explanation relies on a good bit of math, too, but it can be summarized pretty simply using a hypothetical example. Imagine two hotel chains. The first, Hidden Price Inn, has a very low room rate of $80 a night, but makes liberal use of high "shrouded" fees: Three bucks for a minibar Dr Pepper, $25 for parking, $12 for eggs at breakfast.

The unsophisticated traveler cheerily (if unwittingly) forks over the fees, all the while patting herself on the back for getting a cheap room.

Now imagine a second chain, Straightforward Suites. It charges much more reasonably for the extra costs ($1, say, for that Dr Pepper), but because it makes less on the extras, it has to charge slightly more for the room-- $95, instead of $80.

Even an unsophisticated traveler can tell $95 isn't as good as $80. Through an aggressive ad campaign, Straightforward could try to point out how devious the approach of Hidden Price Inn is and how much less deceptive its own prices are. But Laibson and Gabaix show that there's a catch in this strategy: Hidden Price Inn actually has two key types of customers.

Yes, there are the clueless consumers (the economists prefer to call them "myopic"). But there are also the sophisticated ones, who know that if they avoid the hotel restaurant, take a taxi instead of using the parking garage, and call home with a cellphone, they'll actually get a better deal at Hidden Price than at Straightforward.

Straightforward Suites's ad campaign, then, might just end up increasing the ranks of sophisticated consumers who will in turn dial up Hidden Price Inn for a cut-rate room. Rather than play this self-defeating game, Straightforward will most likely just lower its own room prices and stick it to the customers on the extras. (from Why are there hidden fees? )
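The arithmetic behind the two-hotel example is worth making explicit (the prices are the ones from the quoted passage):

```shell
# Total cost for a naive guest at Hidden Price Inn, who pays every shrouded fee:
hidden_naive=$((80 + 3 + 25 + 12))   # room + minibar soda + parking + breakfast
# Total for a sophisticated guest there, who dodges all the extras:
hidden_savvy=$((80))
# Total at Straightforward Suites: honest room rate plus a $1 soda:
straightforward=$((95 + 1))

echo "naive: $hidden_naive, savvy: $hidden_savvy, straightforward: $straightforward"
# prints "naive: 120, savvy: 80, straightforward: 96"
```

So the savvy guest saves $16 by choosing Hidden Price, which is exactly why Straightforward's honesty campaign backfires.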

Subuddh Parekh comes to the wrong conclusion (as does pretty much everybody else, so I can't blame him): "So what are the 'solutions'? There aren't any as yet. We just have to deal with this complexity in whatever way we can."

Einstein is credited with having said: "Make everything as simple as possible, but not simpler."

The solution is: Unnecessary complexity has to be cut down. If we don't do it voluntarily, a societal collapse will do it for us.

Therefore:

Remember: For every complex problem, there is a solution that is simple, neat, and wrong. – H. L. Mencken

But the necessities of life need to remain simple enough that pretty much everyone can comprehend them.

Notes:

1.) I also believe the '68ers played an important role in creating today's mess. While they started out beneficially by breaking up narrow views, they overdid it, finally declaring pretty much every opinion correct and of equal value, thereby (unintentionally?) paving the way to unfettered individualism. When this started to become offensive and/or unsustainable after a decade or so, some con artists started to cover it up with complexity. This complies with what I (unofficially) call the "Milgram effect": given multiple choices, most people will avoid selecting one which would invalidate their previous behavior. (His book remains a must-read: Obedience to Authority: An Experimental View)

2.) A banker once told me: complex investment products are/were only invented so the seller can charge higher fees, as no investor can value them himself or compare them.

3.) The next step is that upper management and other "leaders" pretend they understand the newest complex models, the underlying assumptions (!) and the implications. A great article, which I fully agree with, is Mad Mathesis.


CHS note: I also recommend another book on experiments in inducing obedience to authority: The Lucifer Effect: Understanding How Good People Turn Evil.

[Aug 14, 2009] Manage complexity like debt


Written by Chris Chedgey, September 07th, 2006

3 Comments

Ben Hosking writes in Managing Complexity - The aim of Designing Code that:

The most important part of design is managing complexity

I like the simplicity of that. What happens if you don't manage complexity? Well, it starts to cost. Talking at OOPSLA 2004, Ward Cunningham (Mr. Wiki) compared complexity with debt:

“Manage complexity like debt,” Cunningham told attendees. Using this analogy, he likened skipping designs to borrowing money; dealing with maintenance headaches like incurring interest payments; refactoring, which is improving the design of existing code, like repaying debt; and creating engineering policies like devising financial policies.

In an interview with Bill Venners (Artima), Andy Hunt (Pragmatic Programmer) extends the analogy concisely:

“But just like real debt, it doesn’t take much to get to the point where you can never pay it back, where you have so many problems you can never go back and address them.”

It's a lovely metaphor. But it does break down in one place. Project managers don't get a pile of bills through the door every month. Even if they wanted to, they can't rip them open, sum them up, compare them against income and outgoings, and discover just how fragged they are - or even, hell, that they can afford loads more debt!

Well, it's not quite that bad. We can at least measure and sum up the complexity of items at different levels of design breakout (methods, classes, packages, subsystems and projects). We may not be able to put a hard complexity number on the tipping point (insolvency), but we can give you a number. With this you can compare projects, monitor trends that show where the code is getting more or less complex, and discover which items at which level are causing the trend.

[Aug 14, 2009] Managing Complexity - The aim of Designing Code

I was reading Code Complete, and the chapter on Design was saying that one of the most important parts of design is managing complexity.

This makes perfect sense really; the whole process of designing is breaking the problem into smaller, more manageable bits. He states that humans struggle to comprehend one massive, complicated piece of software, but can understand it more easily if you split it into small subsections.

If you think about the way you design, you start with large abstract ideas and then slowly work down into smaller and smaller sections, until you end up with lots of small sections.

What I like about the idea of managing complexity is that it means you start with something simple and then battle to keep it simple. It reminds me of seeing the code for a design pattern, or a piece of code by some Java ninja: it always strikes me how simple it looks (and then you think, I could have done that).

I also like the word complexity because it's at the heart of making reusable code, reducing complexity through encapsulation and cohesion. I also think of complexity as directly linked to the number of classes linked to a class, i.e. coupling. A simple class/package has loose coupling and is linked to as few other classes as possible.

This is easier to understand, maintain and test.
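That coupling idea can be approximated with a one-liner. The sketch below counts import statements per Java file as a crude proxy for how many other classes each class depends on; the sample files are hypothetical:

```shell
# Create two toy Java files with different numbers of dependencies.
srcdir=$(mktemp -d)
printf 'import a.B;\nimport a.C;\nclass X {}\n' > "$srcdir/X.java"
printf 'import a.B;\nclass Y {}\n'              > "$srcdir/Y.java"

# Count imports per file; the highest-coupled files sort to the top.
grep -c '^import' "$srcdir"/*.java | sort -t: -k2 -rn
```

It ignores wildcard imports and same-package references, so treat it as a trend indicator rather than a hard metric.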

What strikes me about linking Design with Managing Complexity is it is explaining simply what you are aiming to do when designing your code and just having that in my mind will help me focus on the objective of managing complexity.

This chapter is actually a free download on the Code Complete site. If you would like to read more, I would first suggest you buy the book, because I am finding it very useful and interesting; but if you would like a taster to see whether you would like the book, here is the link:

http://cc2e.com/docs/Chapter5-Design.pdf

I have talked about this book before and given links to two sample chapters, if you want a rough outline of the book a list of the contents, check out my previous blog entry

http://hoskinator.blogspot.com/2006/06/design-in-construction-code-complete-2.html

Expect to see me talk about more of the topics mentioned in this book.

[Jun 9, 2009] http://www.jbox.dk/links.htm

Minimalism, Architecture and Development


Curbralan: This site is run by Kevlin Henney, an independent consultant and developer.

Specifically, Kevlin is one of the few people able to write about and present the ideas of minimalism and agile software development in a serious, yet humorous way.

Kevlin gave an excellent talk on minimalism at JAOO 2002 called "Minimalism: A Practical Guide to Writing Less Code".

The above paper is a practical perspective on general ideas on the subject presented in two articles on minimalism:

Many good articles on architecture and development beyond the few mentioned above can be found at Curbralan.

SAP DB: SAP may be one of the last companies people think of when the talk turns to minimalism, agility and simplicity.

SAP DB, an Enterprise-class Open Source database that can fill the role of the database for SAP implementations, is first of all an example of strategic products becoming commodities; but try reading the article "SAP DB - The Enterprise Open Source Database", and you will find a refreshing touch of minimalist thinking.

The SAP DB is released under GPL/LGPL and runs under a variety of operating systems, including Linux.

Conspicio: Bjane Hansen's blog about system architecture.
Complexity and Simplicity: Articles about complexity in software:

[Apr 10, 2009] The Collapse of Complex Societies (New Studies in Archaeology) (Paperback)

5.0 out of 5 stars Fascinating and deeply disturbing, May 29, 2004
By Chris Stolz (Canada)
Tainter's project here is to articulate his grand unifying theory to explain the strange and disturbing fact that every complex civilisation the world has ever seen has collapsed.

Tainter first elegantly disposes of the usual theories of social decline (disappearance of natural resources, invasions of barbarians, etc). He then lays out his theory of decline: as societies become more complex, the costs of meeting new challenges increase, until there comes a point where extra resources devoted to meeting new challenges produce diminishing and then negative returns. At this point, societies become less complex (they collapse into smaller societies). For Tainter, social problems are always (ultimately) a problem of recruiting enough energy to "fuel" the increasing social complexity which is necessary to solve ever-newer problems.

Complexity, writes Tainter, describes a variety of characteristics in a number of societies. Some aspects of complexity include many differentiated social roles, a large class of administrators not involved in the production of primary resources, energy devoted to different kinds of communication, centralized government, etc. Societies become more complex in order to solve problems. Complexity, for Tainter, is quantifiable. Where, for example, the Cherokee natives of the U.S. had about 5,000 cultural artifacts (things ranging from recipes to tools to tents) which were integral to their culture, the Allied troops landing on the Normandy coast in 1944 had about 40,000.

Herein, however, lies the rub. Since, as Tainter writes, the "number of challenges with which the Universe can confront a society is, for practical purposes, infinite," complex societies need to keep on increasing their level of complexity in order to survive new challenges. Tainter's thesis is that these "investments in additional complexity" produce fewer and fewer returns with time, until eventually society cannot muster enough energy to fuel complexity. At this point, society collapses.

Consider this example: a simple hunter-gatherer society with limited agriculture (i.e. garden plots) is faced with a problem, such as a seasonal drop in food production (or an invasion from its neighbours, who have the same problem and are coming over for food). The bottom line is, this society faces an energy shortage. It could respond to the food crisis either by voluntarily declining in numbers (die-off, and unlikely) or by increasing production. Most societies choose the latter. In order to increase production, this society will need either to expand territorially (invade somebody else) or to increase agricultural production. In either case, the investment can pay off substantially in either increased access to already-produced food or increased food production.

But the hunter-gatherers of the above example incur costs as they try to solve their food-shortage problem. If they conquer their neighbors, they have to garrison those territories, thus raising the cost of government. If they start agriculture on a larger or more intense scale in their own territories, they have to create a new class of citizens to man the farms, distribute and store the grain, and guard it from animals and invaders. In either case, the increases in access to energy (food) are offset somewhat by the increased cost of social complexity.

But as the society gets MORE complex to confront newer challenges, the returns on these increases in complexity diminish. Eventually, the costs of maintaining garrisons (as the Romans found) are so high that both home and occupied populations revolt, and welcome the invaders with their simpler way of life and their lower taxes. Or agricultural challenges (a massive drought, or degradation of soils) are so great that the society cannot muster the energy reserves to deal with them.

Tainter's book examines the Mayan, Chacoan and Roman collapses in terms of his theory of diminishing marginal returns on investments in complexity. This is the fascinating part of the book; the disturbing sections are Chapter Four and the final chapter. In Chapter 4, Tainter musters a massive array of statistics that show that modern society has been facing diminishing returns on investments in complexity. There is a very simple reason for this: we solve the easiest problems first. Take oil, for example. In 1950, spending the energy equivalent of one barrel of oil in searching for more oil yielded 100 barrels in discovered oil. In 2004, the world's five largest energy companies found less oil energy than they expended in looking for that energy. The per-dollar return on R&D investment has dropped for fifty years. In education, additional investments in programs, technology etc. no longer produce increases in outcomes. In short, industrial society is looking at steadily fewer returns on its investments in both non-human and human capital.

When a new challenge comes, Tainter argues, society will eventually be unable to muster the necessary resources to deal with the crisis, and will revert-- in a painful and unhappy way-- to a much simpler way of life.

In his final chapter, Tainter describes the modern world's "arms race of complexity" and makes some uncomfortable suggestions about our own future. (...). In an age where, for example, the U.S. invasion of Iraq has yielded net negative returns on investment even for the invaders (where's that cheap oil?), and where additional investments in education and health care in industrialised countries make no significant increases in outcomes, the historical focus of Tainter's work starts to become eerily prescient.

The scary thing about this deeply thoughtful and thoroughly researched book is its contention that the future, for all our knowledge and technology, might be an awful lot like the past.


5.0 out of 5 stars A Landmark Study in Why Societies Collapse, January 22, 2006

By Allen B. Hundley (Mountain Home, AR)
To get an idea of the impact this book has had both among scholars and on the general public one has only to look at its publishing record. It was written by an academic for academics and published by a university press (Cambridge no less) yet it is now in its fourteenth printing since its initial release in 1988.

Tainter argues that human societies exist to solve problems. He looks at a score of societal collapses, focusing on three: Rome, the Maya, and the Chacoan Indians of the American Southwest. As these societies solved problems - food production, security, public works - they became increasingly complex. Complexity however carries with it overhead costs, e.g. administration, maintaining an army, tax collection, infrastructure maintenance, etc. As the society confronts new problems additional complexity is required to solve them. Eventually a point is reached where the overhead costs that are generated result in diminishing returns in terms of effectiveness. The society wastefully expends its resources trying to maintain its bloated condition until it finally collapses into smaller, simpler, more efficient units. (Does this sound like any contemporary societies we know?)

One of the powerful attractions of this book is that, although written by an academic for a scholarly audience, the author is fully aware of his theory's relevance to the future of our own society, comments upon which he reserves for the final chapter. While Tainter states explicitly (writing in 1988) that he does not believe the collapse of our civilization is imminent, in a remarkably candid passage he characterizes the survivalist movement in the U.S. (excluding the lunatic-fringe element) as a rational response to concerns about the viability of our current political system. The same goes for those in the self-reliance, grow-your-own-food movement. "The whole concern with collapse and self-sufficiency may itself be a significant social indicator, the expectable scanning behavior of a social system under stress..." (p. 211).

Keep in mind that Tainter is writing before the first Gulf War, Y2K, 9-11 and before our current involvement in Iraq. New energy sources are the key, he says, to maintaining economic well-being. "A new energy subsidy is necessary if a declining standard of living and a future global collapse are to be averted." By subsidy he means the development of new forms of energy. This "development must be an item of the highest priority even if, as predicted, this requires reallocation of resources from other economic sectors." (p. 215).

Almost twenty years have passed since Tainter wrote those words. I leave it for you the reader of this review to judge the capability of our current political system to respond to such a grave and obvious crisis.

I have given this book 5 stars not because it is the final answer to the question of how civilizations or societies collapse, but because it represents an important step along the way to that answer. As Jared Diamond correctly points out in his new "Collapse: How Societies Choose to Fail or Succeed," complex societies would be expected to be the best at staving off collapse because they are by definition the most highly organized, with the best information, resource and administrative structures to deal with new challenges. Clearly other factors must be at work. Tainter however dismisses all previous theories of collapse, calling many of them "mystical". Included in this latter group are many of the world's greatest thinkers, from Plato and Polybius to Gibbon and Toynbee.

What Tainter really means is that their explanations are not quantifiable, therefore not scientific, and therefore unworthy of further consideration. This is a most unfortunate mistake. Insight is insight regardless of whether or not it is quantifiable. If a scientific approach to societal decision-making always worked, Robert McNamara's faith in body-count statistics would surely have resulted in a U.S. victory in Vietnam.

At one point Tainter states that individuals can never alter the course of world history, only powerful long-term societal forces. This flies in the face of overwhelming evidence to the contrary, from the 300 Spartans at Thermopylae to Lee's bungling at Gettysburg, to Winston Churchill and Lord Dowding in the Battle of Britain. (See my review on the latter.) The fact that at critical junctures in history a handful of individuals have made a huge difference is extremely frustrating to those in the "social science" community. They would like to believe that with enough good statistics you can predict the future with precision. This has never been and likely never will be the case, a reality I came to terms with many years ago and the main reason I never completed my doctoral studies in "political science".

Allowing that Tainter's complexity model really does have considerable explanatory power, the important question is: can you have an advanced society that is immune to complexity's dangers? The answer, in this reviewer's opinion, is a qualified "yes", but such a society would have to be organized very differently, with far less interdependence, and hence fragility, than anything we now know. If world events (terrorism, Iran, North Korea, etc.) continue along the track they have taken in recent years, we may soon, for better or worse, have the opportunity to find out.

5.0 out of 5 stars Scholarly but gripping, March 30, 2006, by Erik D. Curren (Staunton, VA)

In contrast to Jared Diamond's "Collapse," this volume does not just focus on one theory of why societies collapse--depletion of natural resources--but presents several different theories in summary. In academic style, Tainter examines the pros and cons of each, offering a cornucopia of references that would be an invaluable source for future research.

While he sees some merit to most theories, one he holds in complete contempt, while another he tends to prefer. Tainter has no patience for "mystical" notions that societies collapse because their moral fiber has degenerated, a theory made famous by Gibbon, Spengler and Toynbee. What he does believe is that complex societies always at some point reach a stage where they become too complex, where the costs to citizens and elites alike begin to outweigh the benefits of keeping the society together. At that point, the society is vulnerable to breaking up.

This is what happened to the Western Roman Empire in the fifth century. The burden of inflation and taxes became so heavy on the populace that even the Italians began to yearn for "liberation" by barbarian tribes. And collapse is not always a bad thing: tribes like the Vandals actually governed their sections of the old empire more effectively.

So, what about us? Because of globalization, any collapse would affect all industrialized countries together. So, the US cannot collapse without either being taken over by a competitor or bringing everyone else down with us. Oil running out might be the end of our era of complexity, an anomaly in human history, but we still have time to make changes that could forestall collapse. Overall, a fresh view of history key to understanding the present.

grumpOps

Ahem!

Fri Jul 22 13:56:52 EDT 2005
Category [Internet Politics]

This was sent to me by a colleague. From "S4—The System Standards Stockholm Syndrome" by John G. Waclawsky, Ph.D.:

The “Stockholm Syndrome” describes the behavior of some hostages. The “System Standards Stockholm Syndrome” (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking.

Read the whole thing over at BCR.

And while this particularly picks on the ITU types, it should hit close to home to a whole host of other "endeavors".

What is complexity

Complexity has turned out to be very difficult to define. The dozens of definitions that have been offered all fall short in one respect or another, classifying something as complex which we intuitively would see as simple, or denying an obviously complex phenomenon the label of complexity. Moreover, these definitions are either only applicable to a very restricted domain, such as computer algorithms or genomes, or so vague as to be almost meaningless. Edmonds (1996) gives a good review of the different definitions and their shortcomings, concluding that complexity necessarily depends on the language that is used to model the system.

Still, I believe there is a common, "objective" core in the different concepts of complexity. Let us go back to the original Latin word complexus, which signifies "entwined", "twisted together". This may be interpreted in the following way: in order to have a complex you need two or more components, which are joined in such a way that it is difficult to separate them. Similarly, the Oxford Dictionary defines something as "complex" if it is "made of (usually several) closely connected parts". Here we find the basic duality between parts which are at the same time distinct and connected. Intuitively then, a system would be more complex if more parts could be distinguished, and if more connections between them existed.

More parts to be represented means more extensive models, which require more time to be searched or computed. Since the components of a complex cannot be separated without destroying it, the method of analysis or decomposition into independent modules cannot be used to develop or simplify such models. This implies that complex entities will be difficult to model, that eventual models will be difficult to use for prediction or control, and that problems will be difficult to solve. This accounts for the connotation of difficult, which the word "complex" has received in later periods.

The aspects of distinction and connection determine two dimensions characterizing complexity. Distinction corresponds to variety, to heterogeneity, to the fact that different parts of the complex behave differently. Connection corresponds to constraint, to redundancy, to the fact that different parts are not independent, but that the knowledge of one part allows the determination of features of the other parts. Distinction leads in the limit to disorder, chaos or entropy, like in a gas, where the position of any gas molecule is completely independent of the position of the other molecules. Connection leads to order or negentropy, like in a perfect crystal, where the position of a molecule is completely determined by the positions of the neighbouring molecules to which it is bound. Complexity can only exist if both aspects are present: neither perfect disorder (which can be described statistically through the law of large numbers), nor perfect order (which can be described by traditional deterministic methods) are complex. It thus can be said to be situated in between order and disorder, or, using a recently fashionable expression, "on the edge of chaos".
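The midpoint-between-order-and-disorder idea can be made concrete with Shannon entropy, which measures only the disorder axis. The sketch below (the function and sample strings are illustrative, not from the text) shows why a single disorder measure cannot serve as a complexity measure:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy in bits per symbol: 0 for a perfectly
    ordered string, log2(alphabet size) for a uniform mix."""
    counts = Counter(s)
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))   # perfect order: 0.0
print(shannon_entropy("abababab"))   # alternating pattern: 1.0
print(shannon_entropy("bbaaabba"))   # shuffled, same symbols: still 1.0
```

Note that the regular alternating string and the shuffled one get the same score: entropy sees variety but is blind to connection, which is precisely why the passage places complexity between order and disorder rather than at the entropy extreme.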

The simplest way to model order is through the concept of symmetry, i.e. invariance of a pattern under a group of transformations. In symmetric patterns one part of the pattern is sufficient to reconstruct the whole. For example, in order to reconstruct a mirror-symmetric pattern, like the human face, you need to know one half and then simply add its mirror image. The larger the group of symmetry transformations, the smaller the part needed to reconstruct the whole, and the more redundant or "ordered" the pattern. For example, a crystal structure is typically invariant under a discrete group of translations and rotations. A small assembly of connected molecules will be a sufficient "seed", out of which the positions of all other molecules can be generated by applying the different transformations. Empty space is maximally symmetric or ordered: it is invariant under any possible transformation, and any part, however small, can be used to generate any other part.

It is interesting to note that maximal disorder too is characterized by symmetry, not of the actual positions of the components, but of the probabilities that a component will be found at a particular position. For example, a gas is statistically homogeneous: any position is as likely to contain a gas molecule as any other position. In actuality, the individual molecules will not be evenly spread. But if we look at averages, e.g. the centers of gravity of large assemblies of molecules, because of the law of large numbers the actual spread will again be symmetric or homogeneous. Similarly, a random process, like Brownian motion, can be defined by the fact that all possible transitions or movements are equally probable.

Complexity can then be characterized by lack of symmetry or "symmetry breaking", by the fact that no part or aspect of a complex entity can provide sufficient information to actually or statistically predict the properties of the other parts. This again connects to the difficulty of modelling associated with complex systems.

Edmonds (1996) notes that the definition of complexity as midpoint between order and disorder depends on the level of representation: what seems complex in one representation, may seem ordered or disordered in a representation at a different scale. For example, a pattern of cracks in dried mud may seem very complex. When we zoom out, and look at the mud plain as a whole, though, we may see just a flat, homogeneous surface. When we zoom in and look at the different clay particles forming the mud, we see a completely disordered array. The paradox can be elucidated by noting that scale is just another dimension characterizing space or time (Havel, 1995), and that invariance under geometrical transformations, like rotations or translations, can be similarly extended to scale transformations (homotheties).

Havel (1995) calls a system "scale-thin" if its distinguishable structure extends only over one or a few scales. For example, a perfect geometrical form, like a triangle or circle, is scale-thin: if we zoom out, the circle becomes a dot and disappears from view in the surrounding empty space; if we zoom in, the circle similarly disappears from view and only homogeneous space remains. A typical building seen from the outside has distinguishable structure on 2 or 3 scales: the building as a whole, the windows and doors, and perhaps the individual bricks. A fractal or self-similar shape, on the other hand, has infinite scale extension: however deeply we zoom in, we will always find the same recurrent structure. A fractal is invariant under a discrete group of scale transformations, and is as such orderly or symmetric on the scale dimension. The fractal is somewhat more complex than the triangle, in the same sense that a crystal is more complex than a single molecule: both consist of a multiplicity of parts or levels, but these parts are completely similar.

To find real complexity on the scale dimension, we may look at the human body: if we zoom in we encounter complex structures at least at the levels of complete organism, organs, tissues, cells, organelles, polymers, monomers, atoms, nucleons, and elementary particles. Though there may be superficial similarities between the levels, e.g. between organs and organelles, the relations and dependencies between the different levels are quite heterogeneous, characterized by both distinction and connection, and by symmetry breaking.

We may conclude that complexity increases when the variety (distinction), and dependency (connection) of parts or aspects increase, and this in several dimensions. These include at least the ordinary 3 dimensions of spatial, geometrical structure, the dimension of spatial scale, the dimension of time or dynamics, and the dimension of temporal or dynamical scale. In order to show that complexity has increased overall, it suffices to show that, all other things being equal, variety and/or connection have increased in at least one dimension.

The process of increase of variety may be called differentiation, the process of increase in the number or strength of connections may be called integration. We will now show that evolution automatically produces differentiation and integration, and this at least along the dimensions of space, spatial scale, time and temporal scale. The complexity produced by differentiation and integration in the spatial dimension may be called "structural", in the temporal dimension "functional", in the spatial scale dimension "structural hierarchical", and in the temporal scale dimension "functional hierarchical".

It may still be objected that distinction and connection are in general not given, objective properties. Variety and constraint will depend upon what is distinguished by the observer, and in realistically complex systems determining what to distinguish is a far from trivial matter. What the observer does is pick out those distinctions which are somehow the most important, creating high-level classes of similar phenomena, and neglecting the differences which exist between the members of those classes (Heylighen, 1990). Depending on which distinctions the observer makes, he or she may see the variety and dependency (and thus the complexity of the model) as larger or smaller, and this will also determine whether the complexity is seen to increase or decrease.

For example, when I noted that a building has distinguishable structure down to the level of bricks, I implicitly ignored the molecular, atomic and particle structure of those bricks, since it seems irrelevant to how the building is constructed or used. This is possible because the structure of the bricks is independent of the particular molecules out of which they are built: it does not really matter whether they are made out of concrete, clay, plaster or even plastic. On the other hand, in the example of the human body, the functioning of the cells critically depends on which molecular structures are present, and that is why it is much more difficult to ignore the molecular level when building a useful model of the body. In the first case, we might say that the brick is a "closed" structure: its inside components do not really influence its outside appearance or behavior (Heylighen, 1990). In the case of cells, though, there is no pronounced closure, and that makes it difficult to abstract away the inside parts.

Although there will always be a subjective element involved in the observer's choice of which aspects of a system are worth modelling, the reliability of models will critically depend on the degree of independence between the features included in the model and the ones that were not included. That degree of independence will be determined by the "objective" complexity of the system. Though we are in principle unable to build a complete model of a system, the introduction of the different dimensions discussed above helps us at least to get a better grasp of its intrinsic complexity, by reminding us to include at least distinctions on different scales and in different temporal and spatial domains.


The Growth of Complexity

blind variation and selective retention tend to produce increases in both structural and functional complexity of evolving systems
At least since the days of Darwin, evolution has been associated with the increase of complexity: if we go back in time we see originally only simple systems (elementary particles, atoms, molecules, unicellular organisms) while more and more complex systems appear in later stages. However, from the point of view of classical evolutionary theory there is no a priori reason why more complicated systems would be preferred by natural selection. Evolution tends to increase fitness, but fitness can be achieved by very simple as well as by very complex systems. For example, according to some theories, viruses, the simplest of living systems, are degenerated forms of what were initially much more complex organisms. Since viruses live as parasites, using the host organisms as an environment that provides all the resources they need to reproduce themselves, maintaining a metabolism and reproductive systems of their own is just a waste of resources. Eventually, natural selection will eliminate all superfluous structures, and thus partially decrease complexity.

Complexity increase for individual (control) systems

The question of why complexity of individual systems appears to increase so strongly during evolution can be easily answered by combining the traditional cybernetic idea of the "Law of Requisite Variety" and a concept of coevolution, as used in the evolutionary "Red Queen Principle".

Ashby's Law of Requisite Variety states that in order to achieve complete control, the variety of actions a control system should be able to execute must be at least as great as the variety of environmental perturbations that need to be compensated. Evolutionary systems (organisms, societies, self-organizing processes, ...) obviously would be fitter if they had greater control over their environments, because that would make it easier for them to survive and reproduce. Thus, evolution through natural selection would tend to increase control, and therefore internal variety. Since we may assume that the environment as a whole has always more variety than the system itself, the evolving system would never be able to achieve complete control, but it would at least be able to gather sufficient variety to more or less control its most direct neighbourhood. We might imagine a continuing process where the variety of an evolving system A slowly increases towards but never actually matches the infinite variety of the environment.

However, according to the complementary principles of selective variety and of requisite constraint, Ashby's law should be restricted in its scope: at a certain point further increases in variety diminish rather than increase the control that system A has over its environment. A will asymptotically reach a trade-off point, depending on the variety of perturbations in its environment, where requisite variety is in balance with requisite constraint. For viruses, the balance point will be characterised by a very low variety, for human beings by a very high one.
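Ashby's law can be illustrated with a toy model (the model, numbers, and function name below are my own, assumed for the sketch): if each perturbation type requires its own counter-action, a controller with fewer distinct actions than there are perturbation types must leave some disturbances uncompensated:

```python
import random

def fraction_controlled(n_perturbations, n_actions, trials=10_000):
    """Estimate the fraction of random environmental perturbations a
    controller can compensate, given its repertoire of distinct actions."""
    # The controller can dedicate one action to each of the first
    # n_actions perturbation types; the remaining types go uncontrolled.
    controllable = min(n_actions, n_perturbations)
    hits = 0
    for _ in range(trials):
        p = random.randrange(n_perturbations)  # environment picks a perturbation
        if p < controllable:                   # controller has a matching action
            hits += 1
    return hits / trials

# With 10 perturbation types and only 4 actions, roughly 40% of
# disturbances are compensated; matching variety (10 actions)
# compensates all of them.
print(fraction_controlled(10, 4))
print(fraction_controlled(10, 10))
```

The same model shows the trade-off point mentioned above: once `n_actions` matches the variety of perturbations, further actions buy nothing, which is where requisite constraint takes over from requisite variety.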

This analysis assumes that the environment is stable and a priori given. However, the environment of A itself consists of evolutionary systems (say B, C, D...), which are in general undergoing the same asymptotic increase of variety towards their trade-off points. Since B is in the environment of A, and A in the environment of B, the increase in variety in the one will create a higher need (trade-off point) in variety for the other, since it will now need to control a more complex environment. Thus, instead of an increase in complexity characterised by an asymptotic slowing down, we get a positive feedback process, where the increase in variety in one system creates a stronger need for variety increase in the other. The net result is that many evolutionary systems that are in direct interaction with each other will tend to grow more complex, and this with an ever increasing speed.

As an example, in our present society, individuals and organizations tend to gather more knowledge and more resources, increasing the range of actions they can take, since this will allow them to cope better with the possible problems appearing in their environment. However, if the people you cooperate or compete with (e.g. colleagues) become more knowledgeable and resourceful, you too will have to become more knowledgeable and resourceful in order to respond to the challenges they pose to you. The result is an ever faster race towards more knowledge and better tools, creating the "information explosion" we all know so well.

The present argument does not imply that all evolutionary systems will increase in complexity: those (like viruses, snails or mosses) that have reached a good trade-off point and are not confronted by an environment putting more complex demands on them will maintain their present level of complexity. But it suffices that some systems in the larger ecosystem are involved in the complexity race to see an overall increase of available complexity.

Complexity increase for global (eco)systems

The reasoning above explains why individual systems will on average tend to increase in complexity. However, the argument can be extended to show how complexity of the environment as a whole increases. Let us consider a global system, consisting of a multitude of co-evolving subsystems. The typical example would be an ecosystem, where the subsystems are organisms belonging to different species.

Now, it is well-documented by ecologists and evolutionary biologists that ecosystems tend to become more complex: the number of different species increases, and the number of dependencies and other linkages between species increases. This has been observed both over the geological history of the earth and in specific cases such as island ecologies, which initially contained very few species but where more and more species arose by immigration or by differentiation of a single species specializing on different niches (like Darwin's famous finches on the Galapagos islands).

As is well explained by E.O. Wilson in his "The Diversity of Life", not only do ecosystems typically contain lots of niches that will eventually be filled by new species, there is a self-reinforcing tendency to create new niches. Indeed, a hypothetical new species (let's call them "bovers") occupying a hitherto empty niche, by its mere presence creates a set of new niches. Different other species can now specialize in somehow using the resources produced by that new species, e.g. as parasites that suck the bover's blood or live in its intestines, as predators that catch and eat bovers, as plants that grow on the bovers' excrements, as furrowers that use abandoned bover holes, etc. Each of those new species again creates new niches that can give rise to even further species, and so on, ad infinitum. These species all depend on each other: take the bovers away and dozens of other species may go extinct.

This principle is not limited to ecosystems or biological species: if in a global system (e.g. the inside of a star, the primordial soup containing different interacting chemicals, ...) a stable system of a new type appears through evolution (e.g. a new element in a star, or new chemical compound), this will in general create a new environment or selector. This means that different variations will either be adapted to the new system (and thus be selected) or not (and thus be eliminated). Elimination of unfit systems may decrease complexity, selection of fit systems is an opportunity for increasing complexity, since it makes it possible for systems to appear which were not able to survive before. For example, the appearance of a new species creates an opportunity for the appearance of species-specific parasites or predators, but it may also cause the extinction of less fit competitors or prey.

However, in general the power for elimination of other systems will be limited in space, since the new system cannot immediately occupy all possible places where other systems exist. E.g. the appearance of a particular molecule in a pool of "primordial soup" will not affect the survival of molecules in other pools. So, though some systems in the neighbourhood of the new system may be eliminated, in general not all systems of that kind will disappear. The power for facilitating the appearance of new systems will similarly be limited to a neighbourhood, but that does not change the fact that it increases the overall variety of systems existing in the global system. The net effect is the creation of a number of new local environments or neighbourhoods containing different types of systems, while other parts of the environment stay unchanged. The environment as a whole becomes more differentiated and, hence, increases its complexity.

Sun Inner Circle

What's preventing businesses from realizing better utilization rates is an outbreak of the 1:1:1 ratio — one application per operating environment per server. While this ratio is effective for meeting peak load targets, it's off the mark for achieving IT efficiency. The more things that need to be managed, the more time-consuming and expensive that infrastructure becomes. Clearly, this approach to managing the infrastructure doesn't scale effectively — a big problem when saving IT dollars is the primary goal.

The old paradigm of managing infrastructure resources is largely to blame for the current system bloat. Traditionally, organizations have invested in people to manage this legacy of 1:1:1. So as the business grew, people-management costs significantly increased — a practice that is prohibitively expensive.

Many IT managers also thought having dedicated server environments was a more reliable way to ensure performance and availability while mitigating security risks. But allowing each department to control its own resources has exacerbated the problem of doing more with more, rather than doing more with less. Finally, to meet workload requirements, IT managers often looked to peak workloads as the barometer for system needs. Yet basing server needs on peak usage levels is costly and inefficient, as normal loads typically require just a fraction of those resources and not all applications will peak at the same time.
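The peak-versus-average arithmetic behind consolidation can be sketched with illustrative numbers (the loads, capacities, and headroom rule below are my own assumptions, not from the article): dedicated 1:1:1 provisioning sizes one server per application for that application's peak, while consolidation sizes shared servers for the combined typical load, relying on the observation that applications rarely peak simultaneously.

```python
def servers_needed(loads, server_capacity):
    """Minimum whole servers needed to carry a total load."""
    total = sum(loads)
    return -(-total // server_capacity)  # ceiling division

peak_loads    = [80, 70, 90, 60, 75]   # peak demand per app (capacity units)
average_loads = [10, 15, 20, 12, 18]   # typical demand per app
capacity = 100                          # one server's capacity

# 1:1:1 -- one dedicated server per application, sized for its peak:
dedicated = len(peak_loads)

# Consolidated: provision for the sum of averages, plus one extra
# server of headroom for non-coincident peaks (a crude safety margin):
consolidated = servers_needed(average_loads, capacity) + 1

print(dedicated, consolidated)  # 5 dedicated servers shrink to 2
```

The gap between the two numbers is exactly the "normal loads typically require just a fraction of those resources" point above; virtualization is what lets the consolidated servers preserve the isolation that made 1:1:1 attractive in the first place.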

An obvious way to combat these costly practices is through server consolidation. But simply getting more applications onto fewer servers is not enough. Effective server consolidation is contingent on maintaining the confidence of IT managers that applications will have the resources they need to meet performance levels. It's also important that applications housed on the same server can be isolated to avoid fault propagation.

Said another way, server consolidation is only valuable if IT managers have the same assurance that performance levels will be on par with levels achieved by the 1:1:1 ratio and the confidence that one application will not adversely impact the security or availability of other applications co-hosted on the same server. There is a way to make this a reality. By implementing the technique of virtualization into your data center utilization strategy, you can achieve all the benefits of the 1:1:1 setup while simultaneously reducing IT expenses.

Deadly Sins - common start-up errors Entrepreneur - Find Articles

Also be aware of the other side of the same coin: excessive debt and overhead. Debt can destroy your start-up, so stick to your business plan, and don't let appetite exceed budget or planned expenditures. "Too many entrepreneurs bring the infrastructure 'bloat' from their previous corporate careers to their start-up," says Pierce Johnson, founder of Chicago-based Johnson Technologies Inc. (which he sold last year to eSkye Solutions). "Most of the failed companies I know added too many employees too soon."

Basic rule: Stick to the business plan, and if a particular expenditure isn't budgeted there, forget about it.

'Smart Growth' Innovating to Meet the Needs of the Market without Feeding the Beast of Complexity - Knowledge@Wharton

Managing Complexity

Wilson points out that complexity can be an organizational drag, consuming resources, diluting focus and impacting profitability. In that way, it can be a drag on innovation efforts. But conversely, he notes, it is important to understand how the current innovation system helps or hinders the issue of complexity. "In many situations, the innovation system itself can be one of the drivers -- a poor innovation system can lead to clutter and complexity," he says.

Next, companies must get a grip on what causes that complexity. "Is it a lack of customer knowledge, or poor understanding of the economics of the situation?" asks Wilson. Additionally, he says, they need to get an accurate picture of the real effects of complexity.

There are corrective strategies for complexity, he notes. "One of them is to reduce complexity in your portfolio or in your processes. But reducing your portfolio is only one strategy, and it may not be the right strategy for your organization."

Another strategy, says Wilson, is to "make your complexity more approachable for the customer and make the choices digestible." Indeed, there exist ways to empower the customer to comfortably deal with the full range of a company's offerings.

Wharton marketing professor Barbara Kahn says discovering that golden mean of how much is not too much is the trick. "If it's too much, they won't deal with it; if it's too little, then they may be able to deal with it," she says of customers' buying patterns.

That's where customer expertise comes into play, according to Kahn. "One of the factors that makes [a higher number of offerings possible] is expertise," she says. "The more people become experts, the more they articulate their preferences -- and the more they have a consumption vocabulary and know what the relevant attributes are, the more variety they will be able to take." She also suggests "arranging [product] assortment in such a way that consumers just see what it is they want and they don't have to see all that they don't want. Websites are really good at that."

Kahn likens the process of empowering customers with how salad bars help patrons navigate a mind-boggling range of options. "If you thought of all the different kinds of salads that you could make, and you presented [customers] all the different options, people wouldn't be able to deal with that -- there would be too much variety," she says. "But if you do it the way [restaurants] do with salad bars, and divide salads up into attributes ... they can deal with that variety because they can deal with those different attributes."

... ... ...

Companies that take the quick route to de-proliferate their offerings in an attempt to reduce complexity might end up returning to the same situation two years later, according to Wilson. That could lead to another danger, he says, of "cutting too shallow or too often." He warns companies not to underestimate customers' memory of portfolio changes. "The last thing you want to do is reduce some of the complexity, and then two years later tell the customer, 'We didn't do it properly the last time; we're doing it again.'"

[Nov 11, 2006] Knowledge@Wharton Newsletter Special Report October 25, 2006

Special Section 'Smart Growth': Innovating to Meet the Needs of the Market without Feeding the Beast of Complexity As companies struggle to innovate in today's competitive environment, they need to continually guard against adding to their "clutter" -- the creeping impact of complexity on efficiency and cost-competitiveness. In this three-part special report, experts from Wharton and George Group Consulting discuss how management can approach this problem by thinking "ambidextrously" -- that is, focusing on innovation and broad exploration while minimizing the impact of clutter on operational processes and costs. Also, in the accompanying podcast (with transcript), Mike McCallister, CEO of Humana, discusses balancing innovation and complexity in the health care industry. http://knowledge.wharton.upenn.edu/special_section.cfm?specialID=58

Part I: Innovation vs. Proliferation: Getting to the Heart of the Customer How can companies innovate without falling into the trap of needless proliferation in their products or services? The key, according to Wharton faculty and experts from George Group Consulting, is understanding unmet and unarticulated consumer needs while aligning innovation processes to those insights. http://knowledge.wharton.upenn.edu/article/1585.cfm#part1

Part III: Getting a Grip on the Costs of Complexity Determining the financial impacts of innovation-related complexity begins with taking a close look at existing operations to understand the actual cost incurred and value generated at each step in the process -- all the way from idea generation through product development, manufacturing, marketing and customer support, among other back-office functions. http://knowledge.wharton.upenn.edu/article/1585.cfm#part3

[Sep 19, 2006] EiffelWorld Column by Dr. Bertrand Meyer

May 2005 (EiffelWorld) The power of simplicity (May 2005)

The best defense against falling prey to technology fashion is to be skeptical of complex solutions. Is the complexity warranted? Sometimes it is, but often it's just a smokescreen to hide the existence of simple and effective answers. Take the basic idea of object technology: to use the power of software modeling techniques -- essentially, abstract data types -- to describe systems of just any kind.

The idea was there from the beginning, and Eiffel took it to its full development thanks to Design by Contract (and multiple inheritance, genericity, deferred classes, Uniform Access, More...

[Mar 11, 2005] The Fishbowl / Catching a Silver Bullet

A storm in a teacup was launched last week by an ONLamp.com article making wild claims about Ruby on Rails:

What would you think if I told you that you could develop a web application at least ten times faster with Rails than you could with a typical Java framework? You can—without making any sacrifices in the quality of your application! How is this possible?

I’m not going to be drawn into the Rails vs The World debate. Rails may be wonderful. It may make me significantly more efficient than I would be coding in WebWork and Java. I’ve not tried it beyond throwing together a toy application, and I’m going to withhold judgement until I’ve done something serious with it. But I can categorically say that Rails is not going to make me ten times more efficient. When I encounter this sort of hyperbole, I always find myself returning to the words of Fred Brooks:

But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity. — Fred Brooks, No Silver Bullet

The core of Brooks’ argument concerns complexity. Writing software is a complex business, and it essentially comes down to the combination of two types of complexity: Essential complexity is the complexity inherent in the problem being solved. Accidental complexity is the complexity that derives from the environment that the problem is being solved in.

Consider an impossibly perfect tool that reduces accidental complexity to zero. For this magical tool to give you a ten-fold increase in productivity, that would have to mean that you are spending 90% of your time fighting your current tools, and only 10% of your time solving the problem you are coding.

I’ve been in one or two truly pathological environments where I’ve felt like this (usually involving EJB 1.1), but you have to do a lot of concerted work applying layer upon layer of antipatterns to get the ratio that high.
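Brooks's bound can be made concrete with a little arithmetic. A sketch (the function name is ours, purely for illustration):

```python
def max_speedup(accidental_fraction):
    """Best-case speedup from a magical tool that eliminates ALL
    accidental complexity, leaving the essential work untouched."""
    essential = 1.0 - accidental_fraction
    return 1.0 / essential

# An honest 10x productivity claim implies that 90% of your current
# time is accidental overhead: max_speedup(0.9) is about 10.
# Even a painful 50/50 split caps the tool at 2x: max_speedup(0.5) == 2.0
```

The essential work is the hard floor: no tool, however magical, can compress it.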

This is precisely why demonstrations are to be taken with a grain of salt. Any task that can be performed in a demo must necessarily have an essential complexity close to zero: it’s a solved problem before the demo even begins. So if one vendor’s demonstration takes a tenth of the time as another vendor’s, all that time is accidental complexity.

And even the accidental complexity of a demo is usually of a totally different nature to that you’d encounter on a real project: the kind of complexity that accrues around a task that can be completed in the course of a lecture is vastly different to that which is encountered in a year-long multiple-developer project.

Demo tasks are typically chosen to tie pre-built web services together with the click of two buttons. Configuration is a much more significant overhead for a 45-minute “from scratch” or “here’s something I prepared earlier” demo than it is over the lifetime of a real project.

It’s like those debates about optimisation. “I just made this loop 10 times faster!” “Great, except we only call that method once a day.”
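The optimisation jibe is Amdahl's law in miniature. A sketch of the arithmetic (function name ours):

```python
def overall_speedup(fraction, local_speedup):
    """Amdahl's law: overall speedup when only `fraction` of total
    runtime is accelerated by a factor of `local_speedup`."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# The loop got 10x faster, but it accounts for only 1% of the runtime:
# overall_speedup(0.01, 10.0) is about 1.009 -- barely measurable.
```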

A few years ago, a marketroid visited my then employer to give a demo of a recently re-launched IDE. The demo was very slick, showing how you could throw together an EJB project or a SOAP interface in a few clicks of the mouse.

Then, we started asking about the product’s refactoring support, and the answer was “Oh, we don’t have that.” It was at this point the developers in the audience switched off. The IDE might have made it really easy to set up a web application skeleton or add new actions, but in any reasonably complex task you’re going to be spending most of your time dealing with code. Refactoring tools only become necessary when you are iteratively refining your code: something you do constantly as you move towards the solution to the essential challenges of the programming task, but totally irrelevant to a slick, pre-packaged demo.

This isn’t to say that improving processes, using more powerful languages and so on can’t give you a significant advantage. If something makes you even ten percent more efficient, that’s a huge advantage to have over your competition. But if anyone promises to make you ten times more efficient, they’re discrediting themselves before they start, which is a disservice to all involved.

[July 04, 2005] Computerworld Q&A: An Internet Pioneer Looks Ahead. Leonard Kleinrock predicts 'really smart' handhelds, but warns of out-of-control complexity.

July 04, 2005 (Computerworld)

Q: You have warned that we are "hitting a wall of complexity." What do you mean?

A: We once arrogantly thought that any man-made system could be completely understood, because we created it. But we have reached the point where we can't predict how the systems we design will perform, and it's inhibiting our ability to do some really interesting system designs. We are allowing distributed control and intelligent agents to govern the way these systems behave. But that has its own dangers; there are cascading failures and dependencies we don't understand in these automatic protective mechanisms.

Q: Will we see catastrophic failures of complex systems, like the Internet or power grid?

A: Yes. The better you design a system, the more likely it is to fail catastrophically. It's designed to perform very well up to some limit, and if you can't tell how close it is to this limit, the collapse will occur suddenly and surprisingly. On the other hand, if a system slowly erodes, you can tell when it's weakening; typically, a well-designed system doesn't expose that.

Q: So, how can complex systems be made safer and more reliable?

A: Put the protective control functions in one portion of the design, one portion of the code, so you can see them. People, in an ad hoc fashion, add a little control here, a little protocol there, and they can't see the big picture of how these things interact. When you are willy-nilly patching new controls on top of old ones, that's one way you get unpredictable behavior.

[Jan 4, 2005] A Discussion of Open Source Software, moderated by Bill Thomas, contains interesting thoughts on the feature-bloat problem in open source.

... ... ...

Why Do Research on Open Source Software?

Bill Thomas: Why do you think that open source software warrants research at this time?

Chuck Weinstock: One reason is that it appears that the community is treating open source software as the next silver bullet. We all know that silver bullets very seldom find their target, and the community moves on to the next silver bullet.

Pat Place: I would add that there is a substantial amount of software available these days that is open source. If you are interested in building systems out of existing components—be they open source or any other form of source—you need to understand at least the risks as well as the benefits of doing so. If we can say anything that helps, then I think that's a good thing.

Scott Hissam: The phenomenon that is happening with the Linux environment is getting everybody's attention to open source software—more attention than has ever been paid to it before, at least in the media. People are enamored and believe that Linux is a successful, stable development environment, and that somehow every piece of open source software that they get is going to be just as stable and just as reliable as the Linux platform—if you believe that it is as stable and reliable as it is touted to be in the press.

Jed Pickel: Open source has been around for a long time—probably more than 20 years. I think one of the reasons why it's getting so much attention now is because commercial interests are developing it. That's why we’re seeing the media interest.

Is There a Community of Developers?

Dan Plakosh: You can release your source code, but I’m not sure people really know what to do with it. I released open source software two years ago and I’ve had very few people dive into developing it.

Place: It gets even worse when you get something that is hundreds of thousands of lines or a million lines. The historical aspect is interesting because you can look back at the history of programs that were open source, or close to open source, with lots of people helping and providing fixes. You can start to see what was successful about them and what is different about what is becoming the current open source movement, which I honestly believe is going to lead to disaster. I can provide anecdotal evidence of examples where you've got something like a tcsh [an expanded version of the original C shell for the UNIX operating system] which is not that complicated a program, but has features and peculiarities that are so weird that you'd never even want them. And yet somebody has said, "Oh, I'll just go and stick this thing into the system." For example, in tcsh, if you have time displayed as part of your prompt and it happens to hit the hour, it'll go "ding" instead of printing the time. I mean, this is insanity: feature upon feature upon feature that leads to code that's got more junk in it than you can possibly be interested in. It ends up becoming ultimately unmaintainable code.

Hissam: But I would say that the tcsh example that you gave is an unbounded development activity that nobody really paid attention to. Nobody really cared about it and that's why it got unwieldy. Not every piece of open source software is developed in that way.

Place: That's exactly true. I think that's the key to the difference between those things that have been and will be successful and those things that will not be successful. Somebody or some very small group of people have a very clear idea as to what that system is going to be, what it's going to do, and how it's going to be architected. And they keep it that way.

Weinstock: Some people refer to those people as the "arbiters of good taste."

Place: That's the phrase that was used primarily about the original UNIX developers. They were arbiters of good taste. With all of the stuff that people from all the universities shipped to them back in the mid-1970s and early ’80s, they decided what went into the source and what was not in the source. For the longest time with Linux, Linus Torvalds [the Finnish graduate student who created the original Linux operating system] was the person who did that. He had a vision as to what it was going to be. That seems to be drifting out. Linux is perhaps losing some of that arbiter-of-good-taste quality.

Pickel: On your tcsh example: there are plenty of examples of closed source software having very similar things. For example, one commercial product is such that if you hit certain key sequences, you can end up with a flight simulator—which is a little bit different from tcsh beeping at the end of a line. The difference, though, is that should the community come across that item in tcsh, and feel like it needs to be removed, and there are enough people who agree with that, then it would be removed from tcsh. You can easily go and change that if it bothered you enough.

Place: Absolutely. I've done that because I wanted tcsh to be as small as possible and I’ve used it as small as possible.

Hissam: So you removed a whole bunch of things out of tcsh that you didn't like. Right? Now let’s say the next version of tcsh comes out and you want to adopt it.

Place: I have developed the version of the shell that has the capabilities I need. If anything does come out in tcsh that I'm interested in, I might take that as a patch file and patch my source with those changes. But I'm not taking all their stuff again.

Weinstock: You now own the problem.

Place: Yes, absolutely. I willingly accept that. Of course, the advantage is that it was open source. I could choose to take on the risk and build something that was what I wanted.

Who Has Accountability for Open Source Software?

Thomas: It seems that no one has any accountability with open source software. It’s strictly "buyer beware."

Pickel: I would disagree with that. Let's go back to the tcsh example again, because I think that's a good one in that a person maintains it and is accountable for listening to the users. Pat didn't speak up in this case. He decided to split off his own version and now he's accountable for that.

Place: If you look at the actual source code, you'll see all of these different names of people who've added this and added that. The risk I see with open source is that all of these features are getting thrown into a basically good system. Someone wants the X widget or the Y widget, so they just go and put that fix in and you get this loss of a sense of stability of the project—this loss of sanity. tcsh going "ding" is kind of stupid. It goes to my notion of good taste; it's below the cut line.

Pickel: What you're describing is an example of open source working in the most optimal sense in that there are people who have different goals from a project than you do. You decide to split off your own version; that's open source working right there.

Hissam: It depends on what your own goals are. If your own goals are to keep up with the latest and greatest, then him vectoring off his own version—he's stuck.

Place: That's disaster if you want to keep up with the latest.

Plakosh: I don't think people develop open source software with the intent that people will take it and go off and build their own products from it. I think it's more so that people will contribute to mature whatever piece of software that they're doing.

Place: The freedom for anyone to make a change leads to the fact that the product will never mature because it will always be in a state of flux.

Plakosh: There is not the freedom for anybody to make a change. I really think you're looking at isolated cases. For example, take Linux. The majority of people who use Linux never look at the source. With the majority of open source products, I would bet that people do not look at the source. They don't care. They don't recompile it. They don't want to have anything to do with it. It's only the people who are working toward the development of Linux who are looking at the source, or occasionally someone who has the in-depth knowledge finds a bug and looks at the source. They may fix it, but they may also submit it to one of the Linux working groups to have it corrected.

Hissam: Let’s go back to the earlier question about accountability. We disagreed about whether anyone is accountable. What does it mean to be accountable? It means that there's liability on the part of somebody.

Place: I wouldn't even go as far as that. Dan has released source. He's put his name up saying, "I think this is a good piece of source." In an open source project, other than a couple of special cases, there's a substantial body of existing code that gets released and then people can work on it, rather than working on stuff from scratch. I see a potential split there. Take Linux. Who is accountable for Linux these days? Does Linus put his name on it saying, "I think this is all good source code"? I don't think so anymore.

Hissam: No. The worst thing that can happen to the people who are "accountable" for Linux is that the world turns its back on it. But concerning accountability: I think the answer is that no one is accountable outright, and I think it is buyer beware.

Weinstock: So that's why people go to places like [Linux vendor] Red Hat instead of just downloading it off the Web.

Hissam: Because they want to hand money to somebody. They want to be able to say, "Give me this and give me that."

Weinstock: Red Hat also sells support. You can go just buy a Red Hat CD and you get nothing with it other than the CD. But you can also go to them and get support for Linux.

Pickel: That's how Linux has made it into the corporate world: doing support.

Plakosh: I've dealt with support before and support is not typically all that it’s cracked up to be. Support is usually geared toward people who have problems in reading the documentation or who don't understand things. Linux got into the commercial domain mainly due to the attractiveness of it being free and being somewhat reliable.

What Are the Benefits and Drawbacks of Developing Open Source?

Thomas: Let's backtrack a little bit here. What would you say are the benefits and drawbacks of developing software in an open source environment, from the standpoint of the developer?

Weinstock: There are different ways of looking at that. Why would I want to participate in the development or why would I want to put myself more out there for free...

Place: I'll tell you at least one person's motivation for the latter. For the last two years, he's been unable to further his software at all, so since it was previously freely available, he said, "Okay, let's make this an open source project in the official open source project way. We'll get people who have ideas and have some suggestions for developing this further, and/or who have bug fixes to be able to maintain this thing."

Hissam: So, would you say that the rate of change on this project has increased or decreased, and have those changes been dramatic?

Place: Well, it's certainly increased. There's also one place where you can get an official source version that has the bug fixes in it, which you couldn't do previously.

Thomas: Would you say that putting out a program with open source code is a way of testing the market for it?

Pickel: Exactly. That's another good point that I wanted to make. One of the interesting things about open source is that you build on other people's software. When you release something, you never quite know how other people are going to make use of it. You learn quickly that way because they give you immediate feedback and contribute changes. It's a great way to figure out market demand.

Hissam: That would be a benefit. But if that evolution is unchecked, you're going to get the tcsh phenomenon. It's almost like a cancer: cancerous features.

Pickel: You choose the branch of the code that most suits your goals at a given time.

Weinstock: But that presents the consumer with a real problem, right? Which branch? What happens to the uneducated consumer who doesn’t have a basis for picking a branch?

Pickel: They go to places like Red Hat.

Place: If you want a version of BSD [a popular version of UNIX; BSD stands for Berkeley Software Distribution], which one do you pick right now? There are three versions of BSD that are all based upon 4.4BSD-Lite, which was the last release. So which one do you choose?

Weinstock: Getting back to what you said about consumers going to Red Hat because they don't know how to make that choice: That's fine for an open source project where there is a Red Hat, but my guess is that most of them don't have a Red Hat. I mean, how do I know which Emacs to choose, for instance?

Plakosh: The only reason you have companies like Red Hat out there is because the distribution package for Linux is so large and so complicated—or at least it's viewed that way by the consumer. For a small piece of software, you're not going to have these distributors.

Pickel: Going back to your point about what to do if there is no Red Hat: I think these companies are out there for the purpose of infiltrating the corporate world—getting this kind of software into the corporate world. The techies and the geeks aren't necessarily interested in a Red Hat, though they may use it because they don't necessarily care to package all the software. But there are projects that don't have corporate backing behind them, or a very organized way of going about things. They're just not going to make it to as large an audience. They won't make it into the corporate world quite as easily.

Hissam: Every techie and geek on the planet right now, and every open source activity, started with dreams of IPOs [initial public offerings of stock]. They want to be the next millionaire. They want to start the next company.

Pickel: If you look at the past year or so, that might be one of the motivations behind open source software: people think they're going to make a killing off it. If you look over the past 20 years, there haven’t necessarily been financial reasons. One of the ways you get paid for leading a successful open source project is by getting your name out there, by getting well known, by becoming the guy who started that project.

Is Open Source Right for the Department of Defense?

Weinstock: Let’s talk about open source as applied to our Department of Defense client. What are the advantages of developing something using the open source model? When applied to the DoD, the notoriety factor is probably not important to them.

Place: There's another question that I should like to raise with respect to DoD customers. What systems do you envision the DoD would like to build with open source? Is it the weapons systems? Is it the payroll systems? How many people out there are interested in the payroll system?

Weinstock: It would seem to me that we're probably talking about subsystems first of all. Pieces of systems.

Place: So we’re talking back at the level of things like the operating system (OS), the database, the underlying components—the bits that we take for granted. That's one of the issues, when we talk about DoD customers. We need to understand that they're not going to build systems with this stuff.

Weinstock: But they will build systems that contain parts.

Place: Then the question is, which parts? Clearly it's going to be exactly those things—the OS parts, the database parts, the GUI [graphical user interface] parts.

What Are the Criteria for Success in Open Source Development?

Hissam: We can go back to the premise that these large organizations, be they DoD or not, think that they can get something done with access to a large, talented pool of engineers. There's some belief that they can get access to a lot of peer reviews, beta testers, people out there to look at their software and make it better and get it done quicker. That seems to be the running belief. If that's a model of success in Linux, then it must be true for every piece of open source software. But we should debunk that. Past performance should not be used as an indicator of future performance. That's one of the reasons that the Software Engineering Institute has to start looking at the processes that are used in open source development. What are the criteria that have to be there in order for it to be successful?

Place: There are instances of projects that have been very successful. I would claim that Linux certainly has been one of them. It has achieved a level of reliability. The BSDs are reasonably successful, and some other open source things are reasonably successful. One of the common themes I've seen through either open or freely available source projects over the last 20 years is that there has been a substantial body of software—i.e., the system is basically there—before its release. People are bug fixing rather than developing new features, so that you've got a system with a structure and a design, and other people are now fixing the things that "he forgot" or that "he got wrong."

Weinstock: That suggests that it should start off with a user base or some sort of base of people who care about it.

Place: You certainly need people to care about it, and the people who care about this typically are the users.

Plakosh: But if you look at how Linux kicked off, it kicked off by being more or less the toy of software engineers to build upon. It came with a lot of people's research projects. There were a lot of people looking at this functionality and that functionality. The user base of Linux was people who developed software, not users of software, per se.

Place: That’s a good point. The other thing, in thinking about what has been successful, is that the initial release was something that was a well-designed system.

What Are the Advantages to the User?

Thomas: Let's talk a little bit about open source from the user's perspective. What are the advantages to using open source software?

Hissam: Let me start off by cutting to the chase. It's a two-edged sword. The advantages are that users can get the latest and greatest and the fastest fixes. The disadvantages are that they have to get the latest and greatest and the fastest fixes. They might spend 75% of their time using a product and 25% of their time upgrading the product.

Plakosh: That's not necessarily true. Just because a product is out in open source doesn't mean that I, as a user, have to track it. There are a lot of internal releases that I don't need to worry about or that I may not want to worry about. That perspective is somewhat from the mentality of the world that we live in, developing software.

What Are the Security Implications of Using Open Source Software?

Pickel: From a security perspective, you could look at it from a couple of standpoints. You really have to pay close attention to the software because if the community becomes aware of a vulnerability, then they're going to exploit it. So you need to beat them.

When you were talking about the double-edged sword, I realized that it also applies to the perspective of the developers in that there's an advantage to having people working on your software, but you also have to be ready to deal with them. If you have a lot of demand, and a lot of people developing your software, it's going to be tough to deal with them.

Place: You raised the issue of security. What trust do you place in open source software, given that it's changing so rapidly? How much analysis can you do on the 5,000 fixes that came in last week?

Weinstock: Do you use Linux in a secure environment?

Pickel: Absolutely. Actually, I run Linux on all my machines. I'm not going to look at every single line of code. I'm not going to look at every update. But the thing is that there are people out there who are.

Weinstock: You hope. Do you believe that every nook and cranny of Linux has been looked at with that in mind?

Pickel: Not necessarily. But I believe that a lot more nooks and crannies have been looked at than are looked at in closed source environments.

Plakosh: That's an interesting statement because I would tend to bet that there is a difference between theory and practice. In theory, you would think that you're releasing source and you would have people combing over the code looking for security holes and trying to fix them. In practice, I don't think that's necessarily the case. In theory, it sounds great: the more people have it, the more people are looking at the source code and the more people are going to try to find bugs or security holes so that they can try to fix them. That sounds great. In practice, I think the only people who are trying to do that are maybe people who are trying to break into a system.

Pickel: Exactly. And they're part of the community. If they identify a hole and start exploiting it, people will notice that.

Hissam: You're saying that even the bad guys, in a sense...

Pickel: ...they're part of your development.

Place: But you only find out after the fact.

Pickel: That's still a better environment than closed source.

Plakosh: Actually, that's not necessarily much better because some of these holes that you can find in open source code you never would have found if it was closed source. They never would have existed.

Hissam: If I were a hacker, I could get the latest distribution from Red Hat, go to my garage, close the doors, get a lot of Twinkies and Coke, and start mulling over it until I can find a hack. Then I can just turn on my modem and go attack somebody who's using Linux.

Pickel: Certainly, there's some potential of administrators not noticing. This is an issue that we deal with every day in the CERT® Coordination Center. It's quite common that somebody will break into a site and the site administrators don't know how it happened. But when you deal with a sophisticated administrator, you can usually track down what program was the source of the problem. Then, maybe there's a new vulnerability in that. So, a lot of times, we look through the vulnerabilities, get in contact with vendors, and have the problem fixed. In that situation, a few people were compromised, but the community as a whole is now operating on more secure software.

Place: Then there is the difficulty of getting customers to upgrade with the patch. Don't underestimate the number of people who are behind the curve.

Thomas: Is it any better in a closed source situation?

Hissam: Closed or open source, when a vulnerability is discovered, people have to react. The other interesting thing is that a hacker can become very intimate with some of the underlying protocols that are used. There may be a place somewhere in the code that says, "You'd better check for a null value here and the packet header for this, because if you don't, you could lock up the machine." The hacker says, "Wow! I hadn't thought of that before. If I tried this against Windows, I wonder what it would do?" Just by reading the source code, they could learn a very sophisticated, obscure attack.
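Hissam's "check for a null value here" scenario is the classic defensive-parsing rule, and exactly what an attacker reading the source looks for. A minimal sketch in Python; the two-field header layout is invented for illustration:

```python
import struct

# Hypothetical wire format: 2-byte message type, 2-byte payload
# length, both in network byte order.
HEADER_FMT = "!HH"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def parse_packet(packet):
    """Return (msg_type, payload), or None if the packet is malformed."""
    # Check 1: the packet must exist and be long enough to hold a header.
    if not packet or len(packet) < HEADER_SIZE:
        return None
    msg_type, length = struct.unpack(HEADER_FMT, packet[:HEADER_SIZE])
    # Check 2: the declared length must match the bytes actually received.
    if length != len(packet) - HEADER_SIZE:
        return None
    return msg_type, packet[HEADER_SIZE:]
```

Dropping either check is the kind of omission that is invisible in a demo but obvious to anyone reading the source.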

What Comparisons Can Be Made Between Open Source and COTS?

Weinstock: There's also a big push to use COTS [commercial off-the-shelf] software, and widely using COTS raises all sorts of problems. At least some of the same arguments that apply to COTS apply to open source software.

Place: You might get an instability argument more so with open source than with COTS.

Plakosh: You'll get quality arguments too.

Weinstock: But it's the argument that "you own the solution if you try to modify it in any way."

Hissam: Let’s say a contractor is using open source in a government program and they're having some success. Then they run into a roadblock. They decide, "All I have to do is change this one line of code and we'll save the government millions of dollars." And they do it. Now let’s say the open source community doesn't want to adopt that change because it's very specific to whatever the government is doing. Now the government—by virtue of a contractor—is in the business of maintaining and competing with that open source.

Plakosh: That may or may not be true. I think one of the best advantages to me as a software developer is to take somebody else's open source and save development time, if it does do that for me. And I don’t intend to track it in the future.

Weinstock: Yes, but suppose there's a security compromise that's found in some future version and the four-star says, "That rolls back to the version you modified seven years ago—or six months ago." Now you've got to find someone who's even smart enough to put the changes in.

Plakosh: No you don’t. You've taken over maintenance of that. That’s fine, and you move on.

Weinstock: And the technician who worked on it seven years ago is still there, on the payroll, and there ready to help?

Plakosh: No. But we’re just talking about another method of software reuse. You're equating it to a product and having to track versions, rather than someone looking at open source software and saying, "Man, I could use these features. Let me rip them out and use them." You’ve taken the code from an open source product and reused it elsewhere.

Weinstock: But you’ve lost a supposed benefit of open source, which is that vast community of developers.

Plakosh: But if you weren’t interested in that, it doesn’t matter. You’ve gained. You didn't have to write that. You didn't lose anything. That's the benefit to a lot of people who are using open source. They don't have to reinvent the wheel. Somebody's already invented it. Yeah, I have to maintain it. Yeah, I'd better take a look at what I'm getting. Yeah, I'd better check the quality of it.

Weinstock: I'm not disagreeing with you. That's certainly a valid, good thing about open source. But if the whole world had that view of open source, there wouldn't be open source. All I'm saying is that it's sort of outside the spirit of open source as I see it.

Plakosh: What's the spirit of open source? Open source has two motives. One is that you want other people to work on your code; you want resources. So you want to obtain free resources.

Weinstock: Right. And you taking the code and going your own way and not feeding back to the community does not accomplish that.

Plakosh: But that's one motive. It’s for your own—how can I put it—personal glory, corporate gain, whatever. The second advantage is for other people just to advance technology and to advance people's growth. People give things out so that some other people can learn how to do something. For other people, it fosters new ideas. That's why I give out source code. I don't do it for my personal gain. I do it so that other people can look at it and use it for whatever they want to use it for.

Weinstock: That's from a developer's perspective.

Plakosh: I think it's great if someone reuses and finds benefit from something that I wrote, and if it saves them some time. Just like if I'm going to write a piece of software, I go out looking. I'm not going to try writing everything from scratch when I know that there is software that I can lift.

Pickel: The real benefit of open source, in my opinion, is that you build on things that are already available. You're furthering technology, and then people build on what you've done.

Place: You get the function you want as well. That's the other thing. If you're buying from Chuck's House of Software, you get what Chuck wants to sell you, not what you want.

Pickel: But you do have to be a developer to get that. I've been a user/developer of these things for a long time. If you're not a developer, you don't get it. I suspect that one of the results of this is that there will be more people who are developers out there. It's going to convince more people to look at the source code.

[Dec 27, 2004] Forth: An underview

Forth is not just a language; it's more of a philosophy for solving problems. This can be summarised with the acronym K.I.S.S. (Keep It Simple and Stupid). Jerry Boutelle (owner of Nautilus Systems in Santa Cruz, California), when asked "How does using Forth affect your thinking?", replied:

Forth has changed my thinking in many ways. Since learning Forth I've coded in other languages, including assembler, Basic and Fortran. I've found that I use the same kind of decomposition we do in Forth, in the sense of creating words and grouping them together. For example, in handling strings I would define subroutines analogous to Forth's CMOVE, -TRAILING, FILL, etc. More fundamentally, Forth has reaffirmed my faith in simplicity. Most people go out and attack problems with complicated tools. But simpler tools are available and more useful. I try to simplify all the aspects of my life. There's a quote I like from Tao Te Ching by the Chinese philosopher Lao Tzu: "To attain knowledge, add things every day; to obtain wisdom, remove things every day".
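The decomposition style Boutelle describes carries over to other languages. The following Python sketch is purely illustrative (it is not from the interview; the function names merely mimic the Forth words he cites): small, single-purpose string "words" that compose into a larger one.

```python
# Tiny, single-purpose string "words" in the Forth spirit, then a
# larger word built by composing them.

def cmove(src, dest, n):
    """Copy n items from src into dest, loosely like Forth's CMOVE."""
    dest[:n] = src[:n]
    return dest

def minus_trailing(s):
    """Drop trailing blanks, loosely like Forth's -TRAILING."""
    return s.rstrip(" ")

def fill(buf, n, ch):
    """Set the first n cells of buf to ch, loosely like Forth's FILL."""
    buf[:n] = [ch] * n
    return buf

def padded(s, width):
    """A larger 'word' composed from the small ones: blank-pad or truncate."""
    buf = fill([None] * width, width, " ")
    cmove(list(s), buf, min(len(s), width))
    return "".join(buf)
```

Each piece is trivially testable on its own, which is much of the point of the factoring.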

[Oct 20, 2004] Charley Reese: Simplify

If we as a species are going to survive, we are going to have to learn to live simpler lives. By that I mean consume less stuff. The world's poor are already living simpler lives, and not by their own choice, so it's up to us in the industrialized countries to set the example.

OK, I know this sounds preachy and far-fetched, not to mention being highly unlikely to influence anybody. Nevertheless, sooner by choice or later by necessity, we will have to recognize that we are, if we continue the present trend and lifestyle, going to consume our own planet. Our descendants will look mighty funny one day clinging to the solar system's only orbiting trash dump while trying to choose between garbage and cannibalism as a source of food.

Consult any almanac and look at the exorbitant rate at which we are pumping oil, mining coal and other minerals, cutting forests, catching fish and dousing the land with ever-increasing amounts of fertilizers, pesticides and herbicides. There's no question we are just now beginning to run short of a lot of natural resources. The price of oil is just one example of what's in store for us unless we curb our appetites.

Ravenous consumption was rather all right when the world population was only a billion, and few of them wealthy enough to afford much stuff. The Industrial Revolution changed all that. Utilizing fossil-fuel energy, it did raise the standard of living, and people began breeding ever more prolifically. Today, there are 6 billion people, and practically every one of them aspires to consume at the Donald Trump level. Cheap electronics make sure that nobody is ignorant of how the fat cats live. Even in the Amazon jungle, they watch "Baywatch.''

Europe, Russia, the United States and Japan have long been consuming at a rapid rate, and now two more giants are coming on line, so to speak, as India and China develop their massive economies, which is to say their appetites for energy and commodities. Then there are the so-called Asian tigers — Malaysia, the Philippines, Indonesia and Korea — all determined to raise their standard of living to the level of the West.

Well, we'd get nowhere asking anyone to remain poor as a conservation measure. What the world needs is a new lifestyle of elegant simplicity, so that people will learn to aspire to a few well-made items that can be used and passed on instead of junk, which is discarded as soon as it begins to wear or break down.

I include myself in criticism of overconsumption. I fancy myself on the low end of consumption. I care nothing for jewelry, clothes, fancy cars or furniture. The latter two things I tend to keep until they fall apart. But I have a weakness for books. There are books all over my little condo — five bookshelves, one covering a whole wall to the ceiling, and more books stacked on coffee tables, end tables and the floor.

On one little shelf between my dining/living room and the kitchen, I can see six sandstone coasters, a plastic timer, a bottle of glass cleaner, two candy dishes, a plastic globe, a rack for hanging bananas, a flashlight, four candles, a plaster-of-Paris Nefertiti, a bottle of Tabasco, a plastic watering jug, 11 cookbooks, a pipe rack with pipes and tobacco, a plate with a portrait of Robert E. Lee, and a kerosene lamp. That's one stinking little shelf. What do I really need? Maybe the flashlight and the Tabasco sauce. I haven't smoked a pipe in years and never cook anything more elaborate than fried eggs and baloney sandwiches.

Let's face it: Most of us, even us lower-middle-class types, lack consumption discipline. We get led astray by the singing sirens — new, more, bigger and upgraded. We need to seriously cut back, lest our grandchildren inherit a used-up, worn-out planet. And not just us — the whole world must reduce consumption, though of course about a fifth of the people still need basic food and housing.

Let us all try to simplify by decluttering and then avoid recluttering. Good luck to us all. We'll need it.

The Old Joel on Software Forum - Software Bloat and Moore's Law

On the topic of software bloat, it would help to distinguish between size bloat and processor-cycle bloat.  With regard to size, the more features you pile on, the bigger your code footprint becomes.  As Joel mentioned, the cheaper disk space becomes, the more negligible size bloat becomes.  Processor-cycle bloat is typically brought about by software layering.  To get something done, you end up calling layers and layers of software, most of which add little to what you need to do.  I come from a C/C++ background, so yes, I did knock VB, Perl, and other scripting languages around for a while with regard to their performance.  However, I am finally recognizing that all the above languages have become pervasive in the programming world.

In the minuscule world where microseconds matter, size and speed are everything.  That used to be the perception, and programming acumen used to be measured by how tight you could keep your code.  You tell me what the new perception is.
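The layering the post describes can be shown with a toy sketch (entirely invented for illustration, not taken from the post): each wrapper adds a call, a copy, or some argument repackaging while contributing nothing to the actual work.

```python
# A toy illustration of processor-cycle bloat through layering: three
# frames deep just to slice a buffer.

def read_bytes(data, offset, n):
    # The layer that does the real work.
    return data[offset:offset + n]

def buffered_read(data, offset, n):
    # Adds a redundant copy step.
    chunk = read_bytes(data, offset, n)
    return bytes(chunk)

def framework_read(data, offset, n):
    # Repackages the arguments into a dict, then unpacks them again.
    options = {"offset": offset, "length": n}
    return buffered_read(data, options["offset"], options["length"])

payload = framework_read(b"abcdef", 1, 3)
```

Each layer here "adds little to what you need to do"; multiply this pattern across an application stack and the cycles disappear into plumbing.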

[Sep 10, 2004] Where is IT Going In The Next 5 Years - Code Bloat

I'm going to start with a tip before I rant.  The Acrobat Reader speedup tool available here ( http://www.tnk-bootblock.co.uk/ ) really works. It disables loading of unnecessary plug-ins at startup.

<rant>
Why is this tool even necessary?  This proves that code bloat exists everywhere, not just at MS.  Why in the world would you need to load 75 plug-ins by default at startup - in a simple reader?  This is just pure lazy programming.

I understand the pressures of quick time-to-market delivery.  I also understand that there is a lot of overhead involved when you are trying to develop re-usable OO code versus hand-crafting an application.  This type of thing is inexcusable, however.  IMHO Adobe is worse than Microsoft lately in producing applications that give you time for a beer break while you're waiting for them to load.  AutoCAD and others are not necessarily speed demons either.

I've read estimates that the final Longhorn, running full Aero, Indigo and WinFS will need a base machine with 1GB of RAM to run decently.

Linux may be cleaner and more stable, but if you go look at recommended systems, the requirements aren't that much different than Windows.

There's got to be a better way.
</rant>

Oops, I have to start a program up - might as well go have a beer....

Tarwn (Programmer) Sep 10, 2004
Uh oh, you brought up Linux vs Windows :)

I'd argue the point on requirements but I am sure someone else will bring it up.

I agree on insane bloat though, I'm noticing it everywhere. Load the new ATI hardware drivers and the .Net-based control panel will eat up some more Windows loading time as well as huge chunks of memory...it's a settings util, why would a settings util be forced to load on startup? Obviously I need the drivers, but please, I shouldn't be forced to hack my registry just to make the settings console stop eating my memory...

Firefox: I love the browser, I hate the fact that it is sitting on 67Mb of RAM right now...

STEAM: (for half-life players) Choke, cough cough...the release version wasn't even adequate to call a beta version but that aside, we have hidden loadup when the system starts and 23-24Mb while running in the taskbar (dunno about normal background, I killed that reg entry :P)

MS Word...Outlook still takes forever to start up, Word takes a little while, so what is MS Word doing with 22Mb of my memory?

MS Outlook: Apparently not everything is covered by MS Word's 22Mb, so here's another 13Mb for my mail program...

WinVNC: Now this is what I'm talking about...4Mb. VNC is running as a server and using less than 1/5th of the memory that MS Word is...and I haven't opened anything since rebooting except Outlook, Editplus (1mb), and Firefox...


So yeah, I agree that bloat is everywhere...but machines keep getting faster and RAM keeps getting cheaper, so companies feel justified in cutting corners to bang out bloated software. The bloating probably isn't even an effect of time limits anymore; it's probably bloat by design (or lack thereof)...
 

About Bradbury Software

Bradbury Software, LLC was founded in 1999 by Nick Bradbury. Nick is the creator of the HTML editor HomeSite, which was acquired by Allaire in 1996 and is now owned by Macromedia. After leaving Allaire in 1998, Nick continued his love of acronyms by creating the CSS/xHTML editor TopStyle and the RSS/Atom reader FeedDemon.

Our mission is to provide fast, efficient, reliable software that exceeds people's expectations.

...yadda, yadda, yadda.

Okay, almost every software company has a similar mission statement, and in most cases it's a hollow sentiment that flies in the face of how it really does business. How many times have you bought software that was so buggy that you thought you must've accidentally installed a pre-beta version? And how many hours have you wasted downloading bloated software that made your high-powered system run like a narcoleptic slug on downers?

If you're like us, you're tired of this. So, rather than bore you with our mission, we'll tell you...

Our Promises to our Customers

  1. Our products will go through extensive beta testing involving dozens of external testers. We will not release our software until these testers tell us it's ready.
  2. If serious bugs are reported in the current version of our software, we will not make you wait (and pay!) for the next version to see them fixed. Fixing bugs in the current version will always take priority over the release of the next version.
  3. We will build our software based on the needs of our customers. We maintain an online forum where we take feature requests, and when it comes time to work on the next version, we enable you to vote for which of these features you want to see.
  4. Our software will always be system-friendly. TopStyle and FeedDemon are perfect examples of this. They install no shared DLLs, ActiveX controls or other system files. Since they're self-contained, you can install them without fear of them interfering with your system or with other applications.
  5. We will keep our software fast and compact. Too many products are extremely slow and bloated beyond reason, filling your hard drive and wasting system resources. TopStyle and FeedDemon load very quickly so you can start using them immediately, and they're also surprisingly compact.

[July 16, 2002] Light methodologies value simplicity over complexity - Builder UK Tom Mochal, Builder.com

Light methodologies rely on quickly iterative design cycles to fulfill their promise of rapid development and smart solutions. But how quick can you be if you’re using plodding design tools or wading through reams of cumbersome, overwrought code?

A key aspect of light methodologies is their need for simplicity. All light methodologies value simplicity over complexity whenever possible. Use that one tool that satisfies 80 percent of your needs instead of adopting three tools to cover 100 percent of your wish list. If you can use 50 lines of simple code to substitute for 25 lines of elegant code that only you can understand, go with the 50 lines. When you design an application, do it as cleanly and simply as possible.
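The trade-off between terse cleverness and plain readability might look like this in practice (a hypothetical Python example, not from the article; both functions do the same job):

```python
# Two equivalent ways to find words that appear more than once.

from collections import Counter

def repeated_words_clever(text):
    # Compact but dense: requires knowing Counter and set comprehensions.
    return {w for w, c in Counter(text.split()).items() if c > 1}

def repeated_words_simple(text):
    # Twice as long, but any maintainer can follow it line by line.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    repeated = set()
    for word, count in counts.items():
        if count > 1:
            repeated.add(word)
    return repeated
```

Both are correct; the article's point is that when the shorter version is readable only to its author, the longer one is the better engineering choice.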

Simple design and coding

The overall design of your application needs to be simple and flexible. Avoid design decisions that are perfect for your first iterations but then don’t allow you to add features and functions in later iterations. This can happen if you tie program components too closely together, instead of maintaining a level of independence.
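A minimal sketch of that coupling point (the class and function names here are invented for illustration): the tightly coupled version bakes its output channel in, so a later iteration that needs a different destination forces a rewrite, while the loosely coupled version accepts the dependency from outside.

```python
# Tight coupling: the output channel is hard-wired into the component.
class TightReport:
    def publish(self, lines):
        for line in lines:
            print(line)  # can only ever print; later iterations must edit this class

# Loose coupling: the component takes any callable as its output channel,
# so later iterations can add files, sockets, etc. without touching it.
class LooseReport:
    def __init__(self, write):
        self.write = write  # any callable accepting a string

    def publish(self, lines):
        for line in lines:
            self.write(line)

# A later iteration swaps the destination with no change to LooseReport:
collected = []
LooseReport(collected.append).publish(["row 1", "row 2"])
```

Keeping that level of independence between components is what lets features be added in later iterations without unpicking the first one.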

You also don’t want to overengineer a solution. If you’re building 100 reports for your application, you probably need some sort of user library structure to keep track of the reports and what they do. But if your solution requires just 10 reports, drop the library.

Your code needs to be simple to review and to understand by others who follow you. If you look at the total life cycle of an application, only about 20 percent of cost is spent during the development phase. The remaining 80 percent is spent in the support and maintenance phases. If you build a no-frills application, the code might run in production for 10 years or more. Simple and straightforward code written up front allows easier learning curves, error fixes, and enhancements over the entire life cycle.

Simple program documentation
Writing documentation is the bane of many programmers. First of all, many programmers are great with computer languages but aren’t very strong with English. Secondly, programmers tend to write their comments and notes for themselves, not someone else who will need to understand them.

Light methodologies tend to advocate essential documentation, but no more. This minimalist approach recognises the inherent limitations of documentation.

Take program documentation, for instance. If you’re trying to track down problems in code, you’re not going to be able to find the problem in a programmer’s manual. The only place you’ll find the bug is in the code itself. Even if the customer asks you to investigate how a feature works, you typically can’t rely on an external programmer’s manual. The only way to know for sure is to check the code. So having a stand-alone programmer’s manual that describes the code probably doesn’t make sense.

On the other hand, the code itself should have plenty of comments. These comments shouldn’t reflect the obvious but instead should point out creative techniques or describe major sections of code that enable certain features and functions.
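The distinction can be sketched as follows (a hypothetical example; neither comment comes from the article): the first comment restates the obvious, while the second records a decision a future maintainer could not recover from the code alone.

```python
def scale(values, factor):
    # Obvious comment (adds nothing):
    #   Multiply each value by factor.
    #
    # Useful comment (records a non-obvious decision):
    #   Scale in a single pass rather than normalizing first; callers
    #   pass factors near 1.0, so a separate normalization step would
    #   only introduce extra rounding error.
    return [v * factor for v in values]
```

The code already says *what* it does; comments earn their keep by saying *why*.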

Programmers might also be asked to assist with users' manuals and help features. Again, you should convey a basic understanding to others who are not involved in the project. In many cases, large users' manuals are created but only certain parts are ever referred to again. Work with your customer to anticipate the basic documentation needed and build at that level. The more extravagant the documentation, the more content will never be referred to again.

Simple specifications
All of us have heard about the 80/20 rule. Perhaps 80 percent of an application’s business logic can be coded in 20 percent of the total development and testing time. Light methodologies rely on users really accepting the 80/20 philosophy. It’s true that you don’t want any sequence of user logic to result in errors or an application failure. However, you may not need to create an elegant recovery strategy for every possible input combination. For obscure error combinations, maybe it’s acceptable to simply point out to users that they have made a mistake and need to start the transaction again.
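That 80/20 stance on error recovery might look like this (a hypothetical sketch; the function and messages are invented): the common failure gets a specific, helpful response, while obscure combinations share one plain "start over" path instead of each getting an elaborate recovery strategy.

```python
def parse_quantity(field):
    """Return (quantity, None) on success, or (None, message) on error."""
    try:
        qty = int(field)
    except ValueError:
        # Common case: non-numeric input gets a specific, helpful message.
        return None, "Quantity must be a whole number."
    if qty <= 0:
        # Obscure cases (zero, negative, etc.) all share one simple recovery:
        # just ask the user to restart the transaction.
        return None, "Invalid entry; please start the transaction again."
    return qty, None
```

The cost of a dedicated recovery path for every rare input combination usually outweighs its benefit, which is exactly the trade the 80/20 rule asks users to accept.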

In the same way, users sometimes ask for every feature and function that they might possibly need over time. The better approach is to look at must-have requirements and then build the application to those specifications. In many cases, extra features will not be utilised often. If some borderline features do prove to be absolutely necessary, they can be added into future iterations or as enhancements after the project is complete.

Rank priorities as low, medium, and high, and then agree that no low-priority work will be incorporated in the project. You can note the requirement to show that it was considered, but there’s always something more important to work on than a speculative feature that will not be needed when the application goes live. Again, if the requirement is important, write it down, but as an item to be considered later, not to work on at this time.

It’s as easy as you make it
Light methodologies tend to falter when applied to very large and very complex projects, which require more rigor and structure. On the other hand, sometimes we make projects larger and more complex than they need to be. When you are working with your customer on a development project, try to always think simply. Think about implementing the basic requirements in a simple manner, rather than trying to create a solution that meets 120 percent of the business needs. You may have heard the saying that “better is the enemy of good.” You can always make things better and better, but your sponsor will be more than happy with a good solution that is delivered on time and on budget.

The Old Joel on Software Forum - Programmer Folkways

The recent threads posted on quality-of-life balance, software overcomplexity (which diminishes job satisfaction), and the repetitive nature of SW practice impel me to comment.

Basically, it seems that when topics like this arise, the opinions that most programmers contribute all tend toward pretty much the  same set of conclusions and values. (note, I didn't say "all". The insightful few who question the status quo are usually torn to ribbons and personally attacked.)

My conclusion is that most programmers are personally half-satisfied to miserable, but due to peer- or self-imposed pressure decided a long time ago that beating their heads against a brick wall was the only honorable thing to do. It's kind of a Spartan code of ethics, among an occupational group that never seems to have any profile with anyone outside the field.

Basically, it's the falling on a sword and impaling yourself while nobody else gives a sh*t.

And the sameness of most people's opinions forces me to conclude that we're a bunch of robots. Most of us adopt the thinking of our age group.

A few gems that always come up:

ZDNet PC Week Seeking simplicity -- the value of KISS principle

These days, more and more IT managers are discovering that, in the age of pedal-to-the-metal e-business, architectural complexity doesn't work. Historically, many organizations have, like SIAC, created multiple networks, each with its own firewall and security system. To get e-business efforts off the ground, for example, many enterprises ended up supporting two network architectures: a legacy network supporting internal processes—finance, inventory management, etc.—and another for extranet operations that link to customers, suppliers and partners. However, that kind of complexity makes it difficult for companies to transform themselves into e-businesses. Multiple layers of security and authentication, for example, make it all but impossible to open inventory and other back-end systems to customers, partners and suppliers online. And maintaining all that architectural complexity is becoming increasingly expensive, IT managers say.

So, like Solomon, many organizations are seeking to simplify. They're trying to build unified IT architectures that provide common, enterprisewide security, authentication and data exchange services using Web-oriented technologies such as LDAP (Lightweight Directory Access Protocol), metadirectories, XML (Extensible Markup Language), the CORBA (Common Object Request Broker Architecture) distributed object framework, PKI (public-key infrastructure) security and authentication schemes (see chart.) That means giving users—whether employees, customers or partners—a single way to get to corporate information. And it means a single, less expensive approach to system management.

There are still plenty of roadblocks between enterprises and unified architectural simplicity. Standards such as LDAP, XML and PKI must be more completely defined and implemented in products. Many IT managers also admit that they still have concerns about security, concerns that loom large as they contemplate building simpler, unified architectures. And, as some IT managers are finding, business managers can become impatient with the expense and time it takes. For those reasons, the simpler, unified architecture is still a few years off for most companies. However, according to experts, many are moving in that direction.

"From a trend standpoint, we've seen a number of companies who've begun the process of collapsing and standardizing their architectures," said Andrew Kelemen, an analyst with CNS Group, in Norwalk, Conn. "All of a sudden comes this blurring of the lines between intranets and extranets."

...Some vendors are taking note of IT managers' desires for simple, unified architectures based on Web technologies that can run across multiple platforms. Companies such as Entrust Technologies Inc., Netscape Communications Corp. and Novell Inc. have touted versions of key services that will run on a number of operating systems. Entrust's PKI products, for example, can work with directory services from Netscape, Novell and a variety of vendors via LDAP.

"The maturity of newer technologies will have to happen before enterprises fully deploy such an infrastructure," CNS Group's Kelemen said. "But as users begin to push for this infrastructure, they will in turn push vendors to adopt standards by requiring compatibility, integration and support."

...But why change something that already works? Anderson said the cost benefits of a more unified network architecture are too high to overlook. Staff and training expenses alone could be trimmed significantly if Duke could deploy unified, centrally managed architectures, he said.

Besides politics and a lack of vendor support for standards, concerns about security continue to pose a barrier to the merging of extranet and intranet architectures, Kelemen said. Although companies have willingly allowed partners to access information via extranets, the idea of extending their own infrastructures into somebody else's enterprise is still viewed as risky, he said.

"Right now, it's simple if you have two network architectures," Kelemen said. "You're going to authenticate at the account level, maybe using some kind of directory service. But as far as actually allowing some third party to access proprietary information within their own organizations, many IT managers are still balking."

Now, CIOs at a group of large companies are joining to pressure IT vendors to help solve the problem. Last year, 15 member companies of the Society for Information Management formed a working group to reduce the level of complexity in IT systems. They want vendors to support standards that will help make building simpler, unified IT architectures easier. The group includes companies such as AT&T Corp. and Kraft Foods Inc.

Not every member of the SIM working group is moving toward the goal of a unified architecture capable of handling both intranet and extranet needs. But each company has a stake in making it simpler, less costly and less time-consuming to integrate systems from multiple vendors, said Steve Michaele, district manager of foundation architecture at AT&T and the group's leader.

"We've got legacy environments we need to connect to and multiple hardware platforms that we're supporting. All of that is a complex infrastructure to manage," Michaele said. "Now that we're trying to leverage that infrastructure in a Web environment, we need these applications to be interoperable."

The IT Complexity Reduction Group's goal is to develop a series of standards documents to send to key hardware and software vendors. The documents outline the products and languages for which member companies want vendors to provide interoperability.

At SIM's annual meeting last month, the working group presented areas where a unified approach is required and some of the standards it wants vendors to adopt. Every area of concentration is one in which members have problems with integration and standards, such as directory services and security.

For more information, go to www.simnet.org.

[Nov 29, 1999]  IT gets bullish on simplicity Seeking simplicity. E-business demands are pushing managers to unify architectures. By Anne Chen

November 29, 1999 (eWEEK)

As the pace of e-business accelerates, however, supporting all of that architectural diversity is getting tough, so Solomon has begun to seek simplicity. He is examining the idea of one type of network architecture, where all systems share a common set of services, such as security, authentication and system management.

He's beginning by unifying disparate network and system components and integrating Web-based technology into SIAC's core legacy back-end systems. Right now, for example, he's blending into SIAC's core trading-floor networks Web technologies that will allow traders to check quotes and communicate with customers on the outside.

Shifting investments

One such organization is Franklin Covey Co., a Salt Lake City-based provider of management tools and professional services. Eighteen months ago, Franklin Covey CIO Niel Nickolaisen decided to stop investing in the company's internal network architecture—a combination of Windows NT and Unix systems—and instead direct spending to Web-based applications that would enable his organization and its partners to more easily tap into disparate information over the Internet.

Nickolaisen sat down with his retail point-of-sale software provider, Tomax Technology Inc., also of Salt Lake City. Tomax understood the benefits of converging retail applications and the Internet architecture and agreed to develop a Java-enabled version of its system that would take advantage of the distributed intranet infrastructure. Using that version, running on his CORBA-based Internet architecture, Nickolaisen's plan is to be able to deploy a system that lets various parts of his organization tap into the same, up-to-date customer information.

From a browser, for example, Franklin Covey call center reps can tap into customer purchase history records, and suppliers can access their sales history information. By connecting and integrating with customers and suppliers over the Web, Nickolaisen said, his IT department can move away from constantly integrating applications and work on developing tools that provide value to the company.

"I put the current network into maintenance mode and decided that all investments, projects and initiatives would be built on an Internet infrastructure that we can leverage in the future," he said. "Single-point management would be ideal. Ideally, it wouldn't matter what format the data is in. It could be standardized and subscribed to by everybody who needed that data."

As more packaged applications become Java- and CORBA-enabled, Nickolaisen said he would like to use them to connect his current legacy systems to exchange information.

Since, like Franklin Covey, most enterprises aren't prepared to throw out the networks, directories and firewalls they already have in place, many will start by building Web-oriented technologies into current networks and legacy systems, experts say.

Many large enterprises, in fact, are beginning to apply increased pressure to get vendors to support such cross-platform standards.

At Duke Energy Corp., in Charlotte, N.C., for example, Bruce Anderson, the manager of technology planning and application services, has a goal of standardizing on a network architecture capable of handling both intranet and extranet capabilities. Anderson knows, however, that such a move will not happen overnight. That's because at Duke there is already a legacy environment that operates effectively. The company is running a complex network that includes a number of platforms, including Oracle Corp.'s manufacturing software running on Unix and some IBM mainframe applications running DB2. Duke also has a couple of hundred EDI (electronic data interchange) connections in place. The company won't be replacing those systems any time soon.

A single network

In building a new intranet architecture, Anderson is using tools that will allow him to eventually move to a single network infrastructure. He is currently implementing XML where possible, with an eye toward replacing some EDI connections when business partners are ready. He's also deploying directory services in various parts of his intranet in an effort to build an architecture that will eventually allow him to increase reliability and accessibility while lowering support costs.

"Everyone is trying to provide the most value to their customers," Anderson said. "One of our IT principles is to really try to leverage a single data network. That does not mean that it's either one physical standard or nothing at all; it means that there are certain physical characteristics users from inside and outside of the company will be able to see [that] will be standardized."

But why change something that already works? Anderson said the cost benefits of a more unified network architecture are too high to overlook. Staff and training expenses alone could be trimmed significantly if Duke could deploy unified, centrally managed architectures, he said.

So, Anderson is leveraging what he's learned from building business-to-business and business-to-consumer e-commerce applications to rebuild his internal network infrastructure.

Duke, for example, has already built Web-based call center applications that handle calls during disaster situations anywhere in the world. The company has learned how to use directory services and other Web technologies to make those systems scalable and reliable. Now, Anderson said, Duke will use that experience and some of the same technologies to enhance its intranet, which supports 25,000 employees worldwide. Anderson and Duke employees are evaluating metadirectories, certificates and PKI in a lab environment.

"Using an extranet application like our call center allows us to see how we can build a stable intranet accessible on a worldwide scope that is not only scalable but also reliable," Anderson said.

Political risks

However, focusing resources on creating a simplified, unified IT architecture can carry political risks, IT managers say. Franklin Covey's Nickolaisen, for example, has taken heat for pushing investments into Internet technologies. After all, that means he's less eager to spend money on, for example, a new customer relationship management application that some business managers are pushing for. Instead he is more eager to invest in LDAP or other technologies that end users don't see but that the company can leverage long-term. The hard part is explaining to business managers why applications that don't take advantage of the new Web-enabled architecture are not a good buy.

"I tend to push back and say, 'Let's not implement new applications. Let's not replace that call center we've had for 12 years because we can replace it and get increased functionality, but we can't leapfrog into the future with it because it is not designed to run on the open and flexible infrastructure of the future,'" he said. "I take incredible heat for my position." Nickolaisen recently decided to leave Franklin Covey. Although he said the decision had nothing do with differences that may exist over IT investment philosophy, he said his new position launching an Internet site will allow him to build an integrated architecture from scratch.

Driving for simplicity via a unified architecture also often means taking a more active role in pushing vendors to support Web-based standards. Duke's Anderson, for example, has become active in the Society for Information Management's IT Complexity Reduction Working Group, of which Duke's CIO, Cecil Smith, is a founding member. Using guidelines decided upon by the group, Anderson is pushing vendors to provide tools that will work seamlessly with tools from other vendors using standard interfaces and technologies such as XML (see related story).

"We are interested in the connectivity," Anderson said. "We'll sacrifice the absolute uptime in terms of performance and productivity if the collaborative nature of the environment allows us to be more adaptive. Our goal is to build a network that will enable us to exchange information seamlessly with our customers and vice versa."

The security barrier

Besides politics and a lack of vendor support for standards, concerns about security continue to pose a barrier to the merging of extranet and intranet architectures, Kelemen said. Although companies have willingly allowed partners to access information via extranets, the idea of extending their own infrastructures into somebody else's enterprise is still viewed as risky, he said.

"Right now, it's simple if you have two network architectures," Kelemen said. "You're going to authenticate at the account level, maybe using some kind of directory service. But as far as actually allowing some third party to access proprietary information within their own organizations, many IT managers are still balking."

In fact, said Duke's Anderson, merging intranet and extranet architectures will not only require new technology, it will force Duke's IT organization to change the way it implements security, replacing a series of application-specific firewall- and password-based systems with a unified approach that grants users access to applications based on predefined profiles and authentication.

At SIAC, Solomon has the same concerns about security. With billions of dollars at stake on trading floors, Solomon said he cannot afford a network security breach. That's why he's convinced that, while merging intranet and extranet architectures around something like PKI is feasible for some applications, he won't be doing it any time soon for critical applications such as SIAC's trading networks.

There, for the time being, he'll stick with the Kerberos security protocol, which, using defined boundaries, closes the network from the outside world.

Solomon may be a speed demon in the race to tomorrow's e-business architecture, but, he said, he's not about to drive without a seat belt just yet if he doesn't have to.

[Nov 29, 1999] CIOs pressure vendors to cut complexities. By Anne Chen, eWEEK

November 29, 1999 (eWEEK)  IT managers have long endured the arduous task of connecting disparate operating systems, applications and network protocols to build network architectures. That task is becoming even more burdensome as e-business increases the need for companies' systems to become accessible to customers, partners and suppliers.

Now, CIOs at a group of large companies are joining to pressure IT vendors to help solve the problem. Last year, 15 member companies of the Society for Information Management formed a working group to reduce the level of complexity in IT systems. They want vendors to support standards that will help make building simpler, unified IT architectures easier. The group includes companies such as AT&T Corp. and Kraft Foods Inc.

Not every member of the SIM working group is moving toward the goal of a unified architecture capable of handling both intranet and extranet needs. But each company has a stake in making it simpler, less costly and less time-consuming to integrate systems from multiple vendors, said Steve Michaele, district manager of foundation architecture at AT&T and the group's leader.

"We've got legacy environments we need to connect to and multiple hardware platforms that we're supporting. All of that is a complex infrastructure to manage," Michaele said. "Now that we're trying to leverage that infrastructure in a Web environment, we need these applications to be interoperable."

The IT Complexity Reduction Group's goal is to develop a series of standards documents to send to key hardware and software vendors. The documents outline the products and languages for which member companies want vendors to provide interoperability.

At SIM's annual meeting last month, the working group presented areas where a unified approach is required and some of the standards it wants vendors to adopt. Each area of concentration, such as directory services and security, is one where members have problems with integration and standards.

How have vendors reacted? Bruce Anderson, a member of the working group, has brought the group's white papers to vendors and asked if they'd consider following the group's specifications. Many vendors are open to the idea, said Anderson, manager of technology planning and application services for Duke Energy Corp., in Charlotte, N.C.

They'll have to do more than explore to meet the SIM working group's goals. The group is after nothing less than removing barriers to collaborative e-business for the future.

"We're not trying to optimize niches or segments of technology but to remove the barriers to interoperability and interconnectivity," Michaele said. "We're trying to be visionary in that way and build a successful path to the infrastructure of tomorrow."

Here's information for managers interested in SIM's Complexity Reduction Working Group:

For more information, go to www.simnet.org.

Open Source and Bloatware

Elf Collaborative Open Source Crisco of the programming universe

I was talking to some people in IRC today, mostly about open source projects such as Perl, Linux, and Mozilla, and it occurred to me that the collaborative model of open source (where many developers from all over the Internet can contribute to a project) encourages programs to become bloated and unwieldy.

I started using Linux during 1998. I wasn't very adept at computing, and I didn't know much. But it was an interesting concept to me that a computer could actually turn on and proceed to run something other than DOS or Microsoft Windows (two products which I was convinced were intimately related, by popular myth). So back in 1998, I got a book on Linux. With the book came three CDs: Slackware, Red Hat, and Caldera. Red Hat's installer didn't really like my computer too much, and Caldera (according to the book) didn't seem to be a full version. So I pulled out the Slackware CD and proceeded to install it on my 486.

Back in 1998, Slackware installed just fine on my 486 DX2, with 8MB of RAM and a 300MB hard disk. And after a bit of tinkering, so did Red Hat. Today, their installers won't even run on that computer.

Tales of the golden age of computing aside, what has happened to these pieces of software? Looking at things like the Linux kernel, there haven't been all that many significant changes in functionality, yet the size of both the compiled binary and the source have gone up dramatically. Sure they have journalling filesystems, encryption, IPv6 support, and all that. But can I do anything significantly different than what I could before? Not really.

While Microsoft has been accused of bloated software (a claim I do not dispute), it worries me to see open source projects progressing in their bloat at a much faster rate than Microsoft products.

This is my theory, and you're free to dispute it. But I believe it makes at least some sense. Monolithic development houses such as Microsoft usually set goals and keep to them. They say: "Let's make this software more accessible to people that are not familiar with computers," and so they develop a more intuitive user interface. They say: "Let's integrate a picture album, people like that", or "Almost everyone that uses our OS browses the web, let's put our own browser in". While this model of meeting demands and pushing out new features does lead to considerable bloat, as is obvious from the increasingly powerful computers required to run even the simplest of applications (or even the operating system itself), it is generally controlled. Because the team of people is generally constrained to a few design goals, and is also limited by the practicability of their upgrades or enhancements, the bloat too is limited.

Enter collaborative open source, as demonstrated by the Linux kernel. Because of the open source model, many people are encouraged to modify the source to fit their specific needs. And because of the general spirit of open source, they are encouraged to submit their modifications back to the community so that others in similar predicaments can benefit from their work. What this generally creates is uncontrolled and exponential bloat. Some of these submitted features are genuinely useful, such as drivers for hardware, or a refinement of the virtual memory manager. Additions and modifications that the majority of the people using the software can benefit from. However, this model also encourages excessive bloat because of the niche 'enhancements' submitted that are only used by a few, but distributed to everyone.

Take, for example, the HTTP server integrated into the Linux kernel. Most sane administrators realize that not only would a user-space webserver such as Apache better fit their needs, but an HTTP server that is integrated with the kernel may pose a serious security threat. Granted, when you compile your kernel, there is no need to include this piece of code. Such an arrangement is an optimal way of dealing with bloat.

However, this arrangement is made merely out of necessity. If everything everyone submitted into the Linux kernel code were automatically included, it would be a practically disastrous situation. But take, for example, open source projects where this is less important. Perhaps Mozilla.

One of the people in an IRC room I frequent was recently singing the praises of Mozilla and its extension system. It was so great, in fact, that he was using ChatZilla to talk to me. ChatZilla, for those who do not know, is an IRC client scripted entirely within Mozilla's extension system. And included by default. Mozilla comes by default with extensions such as a mail and news (NNTP) client, a visual HTML editor, and now, an IRC client. While features such as mail and news are perhaps acceptable default inclusions, due to their entanglement with many existing documents on the web, and an address book goes hand in hand with the mail client, features such as a page editor and especially an IRC client are largely extraneous bloat. Most sane people will use a more suitable, dedicated IRC client such as BitchX, mIRC, or X-Chat: clients that are most often small compiled applications that run quickly and use very little memory and CPU time, as an IRC client should. Why, except for convenience, would the majority of Mozilla users wish to run a scripted (and therefore inherently slower and less efficient than it could be) IRC client that also depends on the Mozilla browser being loaded into memory? The point here is not to overly criticize ChatZilla. The point is that it is an unnecessary piece of software designed to satisfy the needs of a very small percentage of the user group. It is bloat. Despite all its shortcomings, does Internet Explorer include an IRC client? The collaborative open source model encourages software to become bloated.

The general collaborative open-source attitude (although to be fair, only after the project has reached maturity) is to submit anything and everything that you can come up with. "Someone, somewhere will find a use for it," is the general mantra. Also causing the phenomenon of bloat is conflicting goals. Many want Linux to be a desktop OS. Many more (I hope) want it to be a server OS. Some don't understand why it can't be both. This creates odd situations, like having a "generic" kernel with USB and sound drivers, and KDE, running on a server with no sound card, USB devices, or a monitor.

Why do I care? Computers are fast enough to handle large software. Disk drives are big enough to store large software. Networks are fast enough to transfer large software. Why does it matter? It matters because:

* That software is not always the only thing running. People don't start up 'perl' and say "I think I'll sit here and watch the output of perl until it's done doing what it does. Then I can run other software." Look at a moderately loaded webserver running CGI scripts that rely on perl. Hundreds of perl invocations may be running at any given time. An increase in the size of the perl binary or the processing time it eats up while doing a specific function may not seem so significant until you too run a server like this. You upgrade perl and suddenly you find yourself needing another gigabyte of ram, or another processor. And server owners aren't the only ones suffering. Even people that participate in simple activities such as playing games suffer from software bloat. When you have a computer that is a few months behind state of the art and every meg of memory you can squeeze out helps your online gaming session look less like a slide show, things like a huge web browser binary residing in memory start to tick you off a bit.

* Bloated software is one of the main reasons why computers get 'outdated'. Not everyone compresses video or renders 3d movies. Some people like to do simple things like check their e-mail, or take a peek at the web. These people find their older computers woefully inadequate for the same tasks the computers were doing when they bought them. For example, I can no longer load up Linux and Mozilla on my 486 and browse the Internet. And don't tell me that's because of advances in the HTML standard. Why should I have to shell out a few thousand dollars because someone wants to add an IRC client to a web browser here, and an HTTP server to the kernel there, and slap a flashier graphical interface on an installer? If developers who submitted these little jewels of software delight to open source projects had to pay for the accumulation of hardware upgrades they were causing, they probably wouldn't do it.

Failing to upgrade your software is no solution to this problem. As can be seen from the myriad of websites dedicated to security problems and vulnerabilities, it is irresponsible not to regularly upgrade your applications.

What's the solution? Either developers need to be more conscious about the features that they integrate into their software (for example, if you're using C, #ifdef's around possibly unnecessary pieces of code, and configure script/Makefile choices), or they need to back-port bug fixes and more important upgrades into their old code. The first solution seems better.

There's nothing wrong with ChatZilla. I just don't want it. (And personally, I think it should be a separate piece of software.)

Note that entire Linux distributions such as Gentoo have been founded around the concept of choice, largely as a reaction against open source bloat. And me? I use FreeBSD, an OS that has a development team whose goals seem clearer than most, and who have designed what is very obviously a server/workstation OS as opposed to trying to cater to old grannies that don't want to pay for Windows.

Discussion

Elf Collaborative Open Source Continued


[Dec 17, 2001] InformationWeek: Rethinking 'Software Bloat' by Fred Langa. Fred Langa takes a trip into his software archives and finds some surprises--at two orders of magnitude.

Reader Randy King recently performed an unusual experiment that provided some really good end-of-the-year food for thought:

I have an old Gateway here (120 MHz, 32 Mbytes RAM) that I "beefed up" to 128 Mbytes and loaded with--get ready--Win 95 OSR2. OMIGOD! This thing screams. I was in tears laughing at how darn fast that old operating system is. When you really look at it, there's not a whole lot missing from later operating systems that you can't add through some free or low-cost tools (such as an Advanced Launcher toolbar). Of course, Win95 is years before all the slop and bloat was added. I am saddened that more engineering for good solutions isn't performed in Redmond. Instead, it seems to be "code fast, make it work, hardware will catch up with anything we do" mentality.

It was interesting to read about Randy's experiment, but it started an itch somewhere in the back of my mind. Something about it nagged at me, and I concluded there might be more to this than meets the eye. So, in search of an answer, I went digging in the closet where I store old software.

Factors Of 100

It took some rummaging, but there in a dusty 5.25" floppy tray was my set of install floppies for the first truly successful version of Windows--Windows 3.0--from more than a decade ago.

When Windows 3.0 shipped, systems typically operated at around 25 MHz or so. Consider that today's top-of-the-line systems run at about 2 GHz. That's two orders of magnitude--100 times--faster.

But today's software doesn't feel 100 times faster. Some things are faster than I remember in Windows 3.0, yes, but little (if anything) in the routine operations seems to echo the speed gains of the underlying hardware. Why?

The answer--on the surface, no surprise--is in the size and complexity of the software. The complete Windows 3.0 operating system was a little less than 5 Mbytes total; it fit on four 1.2-Mbyte floppies. Compare that to current software. Today's Windows XP Professional comes on a setup CD filled with roughly 100 times as much code, a little less than 500 Mbytes total.

That's an amazing symmetry. Today, we have a new operating system with roughly 100 times as much code as a decade ago, running on systems roughly 100 times as fast as a decade ago.

By itself, those "factors of 100" are worthy of note, but they beg the question: Are we 100 times more productive than a decade ago? Are our systems 100 times more stable? Are we 100 times better off?

While I believe that today's software is indeed better than that of a decade ago, I can't see how it's anywhere near 100 times better. Mostly, that two-orders-of-magnitude increase in hardware speed is not matched by anything close to an equal increase in code quality. And software growth without obvious benefit is the very definition of "code bloat."

What's Behind Today's Bloated Code?

Some of the bloat we commonly see in today's software is, no doubt, due to the tools used to create it. For example, a decade ago, low-level assembly-language programming was far more common. Assembly-language code is compact and blazingly fast, but is hard to produce, is tightly tied to specific platforms, is difficult to debug, and isn't well suited for very large projects. All those factors contribute to the reason why assembly language programs--and programmers--are relatively scarce these days.

Instead, most of today's software is produced with high-level programming languages that often include code-automation tools, debugging routines, the ability to support projects of arbitrary scale, and so on. These tools can add an astonishing amount of baggage to the final code.

This real-life example from the Association for Computing Machinery clearly shows the effects of bloat: A simple "Hello, World" program written in assembly comprises just 408 bytes. But the same "Hello, World" program written in Visual C++ takes fully 10,369 bytes--that's 25 times as much code! (For many more examples, see http://www.latech.edu/~acm/HelloWorld.shtml. Or, for a more humorous but less-accurate look at the same phenomenon, see http://www.infiltec.com/j-h-wrld.htm. And, if you want to dive into Assembly language programming in any depth, you'll find this list of links helpful.)

Human skill also affects bloat. Programming is wonderfully open-ended, with a multitude of ways to accomplish any given task. All the programming solutions may work, but some are far more efficient than others. A true master programmer may be able to accomplish in a couple lines of Zen-pure code what a less-skillful programmer might take dozens of lines to do. But true master programmers are also few and far between. The result is that code libraries get loaded with routines that work, but are less than optimal. The software produced with these libraries then institutionalizes and propagates these inefficiencies.

You And I Are To Blame, Too!

All the above reasons matter, but I suspect that "featuritis"--the tendency to add feature after feature with each new software release--probably has more to do with code bloat than any other single factor. And it's hard to pin the blame for this entirely on the software vendors.

Take Windows. That lean 5-Mbyte version of Windows 3.0 was small, all right, but it couldn't even play a CD without add-on third-party software. Today's Windows can play data and music CDs, and even burn new ones. Windows 3.0 could only make primitive noises (bleeps and bloops) through the system speaker; today's Windows handles all manner of audio and video with relative ease. Early Windows had no built-in networking support; today's version natively supports a wide range of networking types and protocols. These--and many more built-in tools and capabilities we've come to expect--all help bulk up the operating system.

What's more, as each version of Windows gained new features, we insisted that it also retain compatibility with most of the hardware and software that had gone before. This never-ending aggregation of new code atop old eventually resulted in Windows 98, by far the most generally compatible operating system ever--able to run a huge range of software on a vast array of hardware. But what Windows 98 delivered in utility and compatibility came at the expense of simplicity, efficiency, and stability.

It's not just Windows. No operating system is immune to this kind of featuritis. Take Linux, for example. Although Linux can do more with less hardware than can Windows, a full-blown, general-purpose Linux workstation installation (complete with graphical interface and an array of the same kinds of tools and features that we've come to expect on our desktops) is hardly what you'd call "svelte." The current mainstream Red Hat 7.2 distribution, for example, calls for 64 Mbytes of RAM and 1.5-2 Gbytes of disk space, which also happens to be the rock-bottom minimum requirement for Windows XP. Other Linux distributions ship on as many as seven CDs. That's right: Seven! If that's not rampant featuritis, I don't know what is.

Is The Future Fat Or Lean?

So: Some of what we see in today's huge software packages is indeed simple code bloat, and some of it also is the bundling of the features that we want on our desktops. I don't see the latter changing any time soon. We want the features and conveniences to which we've become accustomed.

But there are signs that we may have reached some kind of plateau with the simpler forms of code bloat. For example, with Windows XP, Microsoft has abandoned portions of its legacy support. With fewer variables to contend with, the result is a more stable, reliable operating system. And over time, with fewer and fewer legacy products to support, there's at least the potential for Windows bloat to slow or even stop.

Linux tends to be self-correcting. If code-bloat becomes an issue within the Linux community, someone will develop some kind of a "skinny penguin" distribution that will pare away the needless code. (Indeed, there already are special-purpose Linux distributions that fit on just a floppy or two.)

While it's way too soon to declare that we've seen the end of code bloat, I believe the signs are hopeful. Maybe, just maybe, the "code fast, make it work, hardware will catch up" mentality will die out, and our hardware can finally get ahead of the curve. Maybe, just maybe, software inefficiency won't consume the next couple orders of magnitude of hardware horsepower.

What's your take? What's the worst example of bloat you know of? Are any companies producing lean, tight code anymore? Do you think code bloat is the result of the forces Fred outlines, or is it more a matter of institutional sloppiness on the part of Microsoft and other software vendors? Do you think code bloat will reach a plateau, or will it continue indefinitely? Join in the discussion!

The Joel on Software Forum - Software Bloat and Moore's Law

Software Bloat and Moore's Law

Regarding a line in the recent interview of Joel - How can Moore's Law justify software bloat?  Software can grow *at least* as fast as hardware can.  So no matter how much better your next computer is, bloated software will *still* run slowly.  And what's worse, you'll have to upgrade *everyone's* workstation to the new model, in order to keep compatible with the new bloated office suite.

When using software you're trying to complete a task, and the cost of completing that task is worker time, the hardware, and any software development or licensing.  When the software adds 100 useless "features", and ends up needing a system with a  processor 18 months newer and 32MB more RAM, that adds to the total cost to use the software.  Bloatware simply costs too much.

When it comes to software that is always incompatible with the previous version (Linux kernel, Microsoft Office), this leads to a perpetual cycle of hardware upgrades.  Why should an organization have to keep buying boatloads of new PCs, when most of the people are trying to complete the same tasks?

Neil Stevens
Wednesday, December 05, 2001

I definitely respect this point of view, that we're going nuts with new features.  However, the consumers have spoken, and while they say they want speed, they pay for bloat.

Things may change though, since it's only been a decade or two that we've had PCs.  It is very hard to find meaningful long-term patterns in such a small time period.  Perhaps the economic slowdown will make things more clear.

Basically, I think that any discussion of bloat requires a discussion of consumers as well as software companies.

forgotten gentleman
Wednesday, December 05, 2001

Software bloats faster than hardware capacity expands. Why? We have all experienced situations where the user demands interconnection between functions which mess up our nice logical deconstruction of the system. If you think of a system as a circle with functions at the edge, the interconnectedness of the system is related to the area. Adding a little extra to the circumference (user functions) inflates the area (interconnectedness) to a much greater degree. Increasing the power of a computer simply allows your computer to run bigger circles of functionality with an exponential increase in complexity.

Ian Sanders
Wednesday, December 05, 2001

On the topic of software bloat, it would help to distinguish between size bloat and processor cycle bloat.  With regards to size, the more features you pile on, the bigger your code footprint will become.  Just as mentioned by Joel, the cheaper disk space becomes, size bloat becomes more and more negligible.  Processor cycle bloat is typically brought about by software layering.  To get something done, you end up calling layers and layers of software.... most of which add little to what you need to do.  I come from a c/c++ background so yes, I did knock VB, Perl, and other scripting languages around for a while with regards to their performance aspect.  However, I am finally recognizing that all the above languages have become so pervasive in the programming world.

In the minuscule world where microseconds matter, size and speed matter. This used to be the perception, and programming acumen used to be measured by how tight you could keep your code. You tell me what the new perception is.

Hoang Do
Wednesday, December 05, 2001

On Size vs Speed, remember that it's not just disk space that is affected by size bloat.  The more code and data your application has to use, the more RAM your application uses up, the more swapping the user has to endure, and the slower the entire system runs as a result.

Yes, disk space is cheap, but disk space isn't a substitute for RAM.

Neil Stevens
Wednesday, December 05, 2001

Yes, but RAM is cheap, too. And so are CPU cycles.

Besides, hardware capacity hasn't just grown faster than software has expanded, it's left it in the dust and lapped it a few times. 512 MB of RAM is less than $100. A 1 GHz CPU is less than $100. A 20 GB hard drive is less than $100.

Dave Rothgery
Thursday, December 06, 2001

RAM/HD notwithstanding, it's worth noting that bandwidth for ESD (electronic software distribution) is expensive.  And in large enterprises, the ability to have a clean installation process (e.g., not ripping up registry settings or installing system DLLs - a common trait of bloatware) is also key.

We market a small P2P web server ([plug]BadBlue[/plug]) for file sharing that at one point was so small (161K) that Lucas Gonze of O'Reilly Network entitled his column about it "161K".  I guess journalists are sick of bloatware as well.

But I do think that's one of the things people like about this type of software:  tiny, easy to install and functional for the purpose at hand.

D Ross
Thursday, December 06, 2001

What about all the junk in the background... In the operating systems course I am taking, they teach that the more programs run at once, the more CPU cycles have to be split between programs. This causes overhead that older operating systems had trouble with. Newer operating systems like Windows 2000 and XP don't have that problem, but then again they have a higher overhead regardless.

Phillip Kilby
Sunday, December 16, 2001

[RRE]software bloat [forwarded from Phil Agre] (fwd) Date: Fri, 30 Apr 1999 16:22:48 +0000

From: [address deleted] (RA Downes)  
Subject: The Bloatware Debate

One of the chief hallmarks of early UNIX was how simple, compact programs worked well together.  Brian W. Kernighan's definition of a good program was a program so good and so consistent that it could be used for an entirely different purpose and be expected to work well. UNIX, they said, was a way of thinking more than an operating system. And, with Brian's Software Tools series, it was surely so.

Microsoft Windows is also a way of thinking - or not thinking, to be more exact.  In almost every possible sense it is anathema to the programming community, if that community still abides by and adheres to the solid thinking delineated by Brian so many years ago.

MS Windows programming is considered too difficult to attempt head on. Where we come from most major corporations, financial institutions and the like promised a smooth transition from UNIX or DOS to Windows 3.1x within a matter of weeks.  Management talking of course.  When they found this would not work they decided to invest heavily in 16-bit Visual Basic applications.  Operative word "heavily".  These bloatware masters sunk almost any machine made.  Clearly this was not the answer either.

People looked to Kahn.  Borland, with its Turbo C, saw the opening and released Borland C, and shortly thereafter Scott Randell who a year earlier had toured with MSC 7.0 (which admittedly never worked) was out rocking again, this time with Visual C++.  The environment was unbelievable; the executables were extremely bloated; but still and all it was Microsoft talking, and still and all they were smaller than the corresponding Borland images.  COBOL programmers everywhere were suddenly encouraged to learn C++, develop code browsing skills, learn about preprocessors, assembly language, CodeView and subsequent debuggers, and the world entered into a tailspin.

What originally started as a rather feeble but lucky attempt to get on the OO bandwagon, the MFC soon became something you'd like to see Steve McQueen kill.  Patches and work-arounds and bugs and more bugs, and bloat and more bloat.  The current splash screen module is a case in point: Microsoft includes a 16-color bitmap which weighs in at nearly 200KB for you.  This bitmap can be compressed with RLE encoding to less than half that size.  The idea of banging a 100KB splash bitmap in an application is still, however, sickening.  Yet Microsoft gladly gives you the bitmap at 200KB, happy if you don't understand what you are doing by using it.  Your application will be more sluggish than their own bloatware, and people will be less inclined to complain about what they themselves do.
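Run-length encoding, which the author says would halve that splash bitmap, replaces runs of identical bytes with (count, value) pairs; flat-color artwork like a splash screen is almost all runs, so the savings are large. A toy byte-level RLE sketch (real BMP RLE4/RLE8 encoding has a per-scanline escape format; this shows only the principle):

```python
def rle_encode(data: bytes) -> bytes:
    """Toy RLE: emit (run_length, byte) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

# A flat-color "scanline": 1000 identical pixels shrink to 8 bytes
# (three runs of 255 plus one run of 235).
row = b"\x07" * 1000
packed = rle_encode(row)
print(len(row), "->", len(packed))  # 1000 -> 8
```

Noisy, dithered images compress poorly under RLE, which is exactly why the technique suits the flat-color splash screens discussed here.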

Microsoft's RegClean, a popular product for fixing corruptions in the MS Windows Registry, is another case in point.  When this application was originally introduced I downloaded it and wondered about its size.  It weighed in then at nearly a megabyte.  Similar applications out there were 20KB and hardly more.  What was inside this monster?  I opened it and looked inside.

Remember all those stories about how surgeons in the old days just threw their rubber gloves inside the patient's stomach before sewing them back up again?  Well here you had it.  There were humungoid bitmaps never used.  There were dozens of icons never referenced. There were tens of kilobytes of entries in the string table that had no meaning for the application whatsoever.

I honed the app down and came to the conclusion that the actual size of RegClean should be about 45KB.  That as compared to its distribution size of nearly one megabyte.  Clearly bloat is not only a question of adding features almost no one wants.  Bloat is a condition of the mind, permeating software houses everywhere.

Clearly again the distribution of RegClean was highly irresponsible. But remember, MS Windows is not just an operating system - it is a way of thinking, or not thinking as you may have it.  And it has permeated the entire industry today.  Our hats off to Microsoft.

In conclusion: there are few application domains even today that require executables of over 100KB, and most ordinary tasks can be adequately managed by executables in the 20KB range.  This is simply a fact.

There are no excuses.  Either we think or we don't.  There is no in between.

RA Downes  Radsoft Laboratories <http://www.radsoft.net> ------------------------------


Date: Tue, 4 May 1999 09:06:59 -0700 (PDT)

From: risks at csl.sri.com

RISKS-LIST: Risks-Forum Digest  Tuesday 4 May 1999  Volume 20 : Issue 37

FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)     ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Date: Sun, 02 May 1999 16:12:13 +0000
From: [address deleted] (RA Downes)
Subject: Re: Bloatware Debate (Downes, RISKS-20.35)

A certain "Johnny" has written to me from Microsoft because of my posting in RISKS-20.35 about MS bloat.  The tone was a thinly disguised threat.  In his opening, "Johnny" stated that the "bloat" of MS RegClean was due no doubt to having static links.  Discussing the sweeping ramifications of such a statement is unnecessary here. The mind boggles, it is sufficient to state.  The MSVC runtime is a mere 250,000 bytes and in fact is not statically linked anyway to MS RegClean, AFAIK [as far as I know].  MS RegClean is an MFC app and will by default use the dynamically linked MFC libraries.  And even if its static code links were an overhead here they would add but a small fraction of the total bloat, say 40KB at most.

For whatever reason, I decided to download the latest version of MS RegClean from BHS again and pluck it apart.  This is what I found. I have tried - and it has been difficult - to keep subjective comments out of this report.

Current Status of RegClean Version 4.1a Build 7364.1

====================================================

Image Size (Unzipped and ready to run): 837,632 bytes (818KB)

=============================================================

(Subjective comment removed.)

Import Tables

The import section in the PE header.  This gives an indication of just how (in)effective the use of Bjarne's C++ has been.  In this case, the verdict is: "pretty horrible".  A walloping 7,680 bytes are used for the names of the relocatable Win32 imports.  These are the actual names of the functions (supposedly) called.  MS RegClean does not call most of these functions - they remain because an MFC template was originally used, most likely borrowed from another application, and it was never "cleaned".  This is corroborated by what is found among the "Windows resources": over half a dozen standard menus, assorted graphic images, print preview resources, etc. that have nothing to do with the application at hand.

Resources

Please understand that resources not only bloat an executable with their own size but also with additional reference data; in other words, the bloat factor of an unused or bad resource is always somewhat larger than the size of the resource itself.

Accelerators

Sixteen (16) unused accelerators from an MFC template were found: Copy, New, Open, Print, Save, Paste, "Old Undo", "Old Cut", Help, Context Help, "Old Copy", "Old Insert", Cut, Undo, Page Up, Page Down. MS RegClean uses only one accelerator itself, not listed here.

Bitmaps

This was a particularly sorry lot.  The main bloat here was a splash screen bitmap weighing in (no RLE compression of course) at over 150KB.  Further, Ctl32 static library bitmaps were found, meaning MS RegClean is still linking with the old Ctl32v2 static library which was obsolete five years ago and which automatically adds another 41KB to the image size.

Cursors

Six (6) cursors were found, none of which have anything to do with this application.

Dialogs

A very messy chapter indeed.  MS RegClean walks around with eighteen (18) hidden dialogs, of which only one or at the most two are ever used.  The others are just - you took the words out of my mouth - junk.  The findings (read it and weep):

*) Eleven (11) empty dialogs with the caption "My Page" and the static text "Todo", all identical, all empty, and of course all unused.  This is a wonder in and of itself.

*) The main "wizard" dialog actually used by the application is left with comment fields to help the programmers reference the right controls in their code (subjective comment removed).

*) A "RegClean Options" dialog which AFAIK is never used.

*) A "New (Resource)" dialog, probably a part of the development process, just stuffed in the stomach at sew-up time and left there for posterity.

*) A "Printing in Progress" dialog.

*) A "Print Preview" control bar dialog.

Icons

MS RegClean has three icons, all with images of 48x48 in 256 colors (of course).  The funniest thing here is that the authors of MS RegClean have extracted the default desktop icon from shell32.dll, which is available at runtime as a resident resource anyway and at no image bloat overhead at all, and included it in toto in their executable.

Menus

MS RegClean has eight (8) menus; at least half of these are simply junk left around by the MFC template.  Another menu indicates that the authors of RegClean have in fact worked from an internal Microsoft Registry tool - rather bloated in itself, it seems.

String Table(s)

Actually it need only be one string table, but Microsoft itself has never learned this.  The findings here were atrocious.  And you must remember that strings stored in a string table are stored in Unicode, which means that their bloat automatically doubles.  Further, MS's way of indexing strings in a string table means a 512 byte header block must be created for every string grouping, and strings are grouped according to the high 12 bits of their numerical identifiers (yes they are 16-bit WORD identifiers).  Meaning indiscriminate or random numbering of string table entries will make an otherwise innocent application literally explode.

347 (three hundred forty seven, yep, your video driver is not playing tricks on you) string table entries were found in MS RegClean, including 16 identical string entries with the MS classic "Open this document" as well as archaic MFC template toggle keys texts which are not used here (or almost anywhere else today).  Most of these strings have - of course - nothing to do with the application at hand.
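The arithmetic behind that warning can be sketched directly. Win32 string tables bundle entries in groups of 16, keyed by the high 12 bits of the 16-bit identifier (i.e. `id >> 4`); taking the text's figure of a 512-byte fixed cost per group, 347 sequentially numbered strings need about 22 groups, while scattered IDs can need one group per string. A hypothetical illustration of just that group-count overhead (real resource layout is more involved; the per-group cost is taken from the text above):

```python
def stringtable_block_overhead(ids, block_cost=512):
    """Fixed-cost overhead per the text: one block per distinct group.

    Win32 string tables group entries by the high 12 bits of their
    16-bit identifier, so up to 16 strings share one block (id >> 4).
    """
    return block_cost * len({i >> 4 for i in ids})

sequential = range(1, 348)            # IDs 1..347, densely packed
scattered  = range(1, 347 * 16, 16)   # every ID lands in its own block

print(stringtable_block_overhead(sequential))  # 22 blocks * 512 = 11264
print(stringtable_block_overhead(scattered))   # 347 blocks * 512 = 177664
```

The same 347 strings cost roughly sixteen times more in block overhead when numbered randomly, which is the "explosion" the author describes; the Unicode storage (2 bytes per character) comes on top of that in either case.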

Toolbars

Toolbars are a funny MS way of looking at glyph bitmaps for use in toolbar controls.  MS RegClean has two - one which may be used by the application, and one which was part of the original MFC template and never removed.

Total Accountable Resource Bloat

The total accountable (i.e. what can be directly calculated at this stage) resource bloat of MS RegClean 4.1a Build 7364.1 is over 360,000 bytes (350KB).

Total Accountable Code Bloat

Harder to estimate, but considering that most of the code is never used, only part of an MFC template that the authors of MS RegClean lack the wherewithal to remove, the original estimate of a total necessary image size of 45KB for the entire application must still stand.

In Conclusion

Bloat is not a technical issue, but verily a way of thinking, a "state of mind".  Its cure is a simple refusal to accept, and a well directed, resounding "clean up your act and clean up your code!"

PS. Send feedback on RegClean to regclean at microsoft.com

RA Downes, Radsoft Laboratories  http://www.radsoft.net

RE: software bloat (was: RE: Linux on a 486DX-33)




> who is guilty of software bloat??????

Most software programs are getting larger because consumers want more and more features.  Software companies need to add more and more features to create reasons for people to upgrade.  On top of that, the market today demands that software be released earlier and earlier; software companies cannot spend months optimizing code.  In addition, the benefits of code optimization are shrinking because of how fast computers are and how much storage we have available.

Just the other day I was looking through some old Commodore magazines (circa 1984) and was chuckling at how much time they spent optimizing the little BASIC programs.  One article even talked about the future of gaming and mentioned something like.... "some day we may have computers so powerful that we don't have to worry about how much storage we can use".

For the most part I don't think software is all that bloated.  There are a few programs that seem needlessly slow, but most operate quite quickly. Even MS Office 2000 apps seem to start quickly and run without much hesitation at all.

And my argument when someone complains about software bloat is usually "use an older version!"  You don't HAVE to upgrade!

I think consumers and the media are to blame.  They harass software manufacturers when they're "late".  This forces software manufacturers to reduce time spent debugging or optimizing code so they can get a program out the door.

It always cracks me up... the same people who complain when software is late will complain if it's buggy.  IMO I'd rather have software be a little "late" so that it can be of higher quality.


Recommended Links

Softpanorama Top Visited

Softpanorama Recommended

Wikipedia

Strategy Letter IV Bloatware and the 80-20 Myth - Joel on Software

software bloat Software bloat is an instance of Parkinson's Law: resource requirements expand to consume the resources available.

Manton Reece: Smart software bloat. One way is to differentiate between visible and hidden bloat. For example, Microsoft products used to have a tendency to take every major bullet point on the side of the box and make a toolbar icon for it. Even if the user only uses 5% of those features, they have easy access to far too many of them, and they needlessly have access to them all at once.

The Old Joel on Software Forum: Part 3 (of 5) - Define bloat: by ...

IBM developerWorks Blogs building tools to support software development teams

Software bloat, beware the server-managed rich client!

Bob Zurek: Software Bloat! Ever get the feeling that you've had enough of the upgrade after upgrade and the sometimes continuous stream of hundreds of new features coming in the software you use? How many of these new features do you truly need? In fact, it seems like some of these "new features" are not really new; they are just things that needed to be fixed or improved that are now called "new features". New features also sometimes mean more complexity, more disks, longer downloads, bigger help systems, more documents, more trips to the bookstore, etc. (more)

Bob, I agree with you in principle, but it's not such an open and shut case.

Random Findings

Welcome to Simplify iT Innovations - It's that simple
Simplify iT Innovations, 100% Australian owned and operated, located in Perth, Western Australia. As our name implies, we provide leading edge innovative ...

Processor Editorial Article - Simplify IT Network Management & Monitoring.

HP Feature Story: Simplify IT: Smart Office, Smart advice ...

Whether you're a small business with limited information technology (IT) support or a midsize company trying to get the most from your IT infrastructure, ...



Copyright © 1996-2014 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Site uses AdSense so you need to be aware of Google privacy policy. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine. This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contain some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting hosting of this site with different providers to distribute and speed up access. Currently there are two functional mirrors: softpanorama.info (the fastest) and softpanorama.net.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: February 19, 2014