Nikolai Bezroukov. Portraits of Open Source Pioneers
For readers with high sensitivity to grammar errors, access to this page is not recommended :-)
But in our enthusiasm, we could not resist a radical overhaul of the system, in which all of its major weaknesses have been exposed, analyzed, and replaced with new weaknesses.

-- Bruce Leverett
On the technical side of Linux development, things were also not very encouraging. The Linux kernel became just too complex to be interesting to a large part of its potential volunteer developers, and the existing developers were essentially converted into an outsourcing pool for the Linux vendors, first of all Red Hat. Linus is definitely oversimplifying the situation when he says that he continues development "just for fun". I doubt that there can be much fun in managing 300K lines of source code, especially in view of his letter about patch submissions reproduced below. I would assume substantial ambivalence on his part: a mixture of love for programming and for his acquired status, and hatred of being the prisoner of the Linux kernel:
Date: Wed, 25 Apr 2001 14:31:30 -0700 (PDT)
From: Linus Torvalds <email@example.com>

> > I've got a question... I would like to know where to send my driver patches...
Probably both me and Alan.
[ General rules follow. Too few people seem to have seen them before ]
Most importantly, when sending patches to me:
- specify clearly that you really want to see them in the standard kernel, and why. I occasionally get patches that just say "this is a good idea". I don't apply them. Especially if they are cc'd to somebody else too, in which case I pretty much assume that it's a RFC, not a "real patch".
- do NOT send patches in attachments. Send one patch per mail, in clear-text under your message, so that I can easily see the patch and decide then-and-there whether it looks ok. And if it doesn't look ok, and I do a "reply", the patch gets included in the reply so that I can point out which part of the patch I dislike.
Don't worry about sending me five emails. That's FINE. I much prefer seeing five consecutive emails from the same person with five distinct subject lines and five distinct patches, than seeing one email with five attachments to it.
- if your email system is broken, and you want to send patches as attachments to avoid whitespace damage, then please FIX YOUR EMAIL SYSTEM INSTEAD.
- Don't point to web-sites. If I have to move the mouse outside my email xterm to work on the email, your email just got ignored.
- Make your patches relative to one sub-directory under the source tree you're working on. In short, your patches should look something like

--- clean/fs/inode.c ...
+++ linux/fs/inode.c ...
@@ -179,7 +179,7 @@ ...

so that I can (regardless of where my source tree is) apply them with "patch -p1" from my linux top directory. Then I can just do

cd v2.4/linux
patch -p1 < ~/multiple-emails-with-multiple-accepted-patches

and not have to worry about three patches being based on /usr/src/linux, while two others not having a path at all and being individual filenames in linux/drivers/net.
- and finally: re-send. If I had laser-eye surgery the day you sent the patches, I won't have applied them. If I took a day off and spent it with the kids at the pool instead, I won't have applied them. If I decided that this weekend I'm not going to read email for a change, I won't have applied them.
And when I come back to work a day or two later, I will have several hundred other emails to work through. I never go backwards in my emails.
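For illustration only (none of this is from Linus' mail, and the directory names are made up), the path convention he asks for boils down to diffing from one level above the tree and applying with patch -p1:

```shell
set -e
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
rm -rf clean linux pristine my.patch
mkdir -p clean/fs linux/fs
echo 'old line' > clean/fs/inode.c
echo 'new line' > linux/fs/inode.c

# Diff from one level above the trees, so every path in the patch
# carries exactly one leading component ("clean/" or "linux/").
diff -urN clean linux > my.patch || true   # diff exits 1 when files differ

# Anyone can now apply it from the top of their own tree with -p1,
# which strips that single leading path component.
cp -r clean pristine
cd pristine
patch -p1 < ../my.patch
cat fs/inode.c     # the change is now in place
```

This is why the "regardless of where my source tree is" property holds: -p1 discards whatever the submitter called the top directory.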
At the same time, Linus managed to acquire new friends who generously "clued" him in on his free development process. As Wired reported from the invitation-only summit in San Jose, California, "Linus Torvalds and 60 of his closest friends met in spring 2001 to map out development plans for the Linux kernel 2.5." Most of these new friends seem to be executives from IBM, Oracle and other commercial software heavyweights; the whole project looks suspiciously more like a cooperative effort between several large firms than something run independently by enthusiasts:
Presentations from industry leaders such as Oracle, IBM and API clued developers to the changes that the business world would like to see in the kernel.
Lance Larsh of Oracle outlined Oracle's wish list for Linux -– detailing features and additions to the kernel that would allow Oracle's high-end database servers to perform better when running Linux. Torvalds said that most of the issues brought up by Larsh are already on the "fix" list.
Many of the presenters requested improved error-reporting features in 2.5. Currently, if certain computing operations fail, the kernel doesn't know if the failed disk has a simple glitch such as a bad sector, or if the entire drive is completely trashed.
Kernel 2.5 will most likely provide more detailed information to aid troubleshooting.
More than one conference presenter lobbied for improved journaling features and enhanced control over drives' write-caching procedures.
Some presenters noted the kernel's dismaying tendency to become overwhelmed under major pressure. API's Jamal Hadi Salim said that the standard kernel doesn't do very well when it's subjected to intense network traffic primarily because it tends to do a lot of repetitive handling of each packet of information.
Jamal presented several fixes to this issue, which many felt would improve and streamline the way the kernel deals with information overload.
The ability to name removable devices was another issue under discussion, and dynamic tracking will most likely be a feature in V2.5.
Another likely feature, judging by the enthusiasm and time devoted to discussing it, will be support for hot plugging.
Virtually all new devices that connect to a computer via USB, SCSI, or firewire connections can be hot-plugged: connected to a computer without first having to shut down the system.
The Linux kernel still can't handle hot plugging, though, and it needs to be able to offer the function to stay competitive.
Enhanced power management, with a move toward true implementation of the ACPI (Advanced Configuration and Power Interface), and virtual memory management are also in the works, attendees said. They all sounded very eager to get to work on developing features for the new kernel now that kernel 2.4, released in January and enhanced several times since, is considered stable.
Also, the danger of "The committee for the administration of the structural planning of the Linux kernel" suddenly materialized in August 2000, with the help of good old ESR. Maybe it was his understanding of VA corporate interests, but somehow ESR decided that it was time to teach Linus how to program the kernel, or at least how to organize its development (and please remember that ESR does have Fetchmail and Emacs macros experience under his belt ;-). The results you can see below: a pretty interesting exchange between a member of the VA board, multimillionaire Eric Raymond, and the top Transmeta programmer, multimillionaire Linus Torvalds, in what ESR used to present in CatB as a pretty harmonious community skillfully guided by Linus Torvalds. With such friends, who needs enemies? It may well be that the letter was leaked to LWN by ESR himself (as the name of the file on LWN, a-ESR-sharing, implies; actually it looks like ESR put a lot of effort into writing it, and it was written with subsequent publication in mind; BTW, that's a typical behavior pattern for members of "The committee for the administration of the structural planning of the Linux kernel"):
Date: Tue, 22 Aug 2000 16:00:52 -0400 From: "Eric S. Raymond" <firstname.lastname@example.org> To: Linus Torvalds <email@example.com> Subject: Re: [PATCH] Re: Move of input drivers, some word needed from you Linus Torvalds <firstname.lastname@example.org>: > > > On Tue, 22 Aug 2000, Eric S. Raymond wrote: > > > > Linus Torvalds <email@example.com>: > > > But the "common code helps" thing is WRONG. Face it. It can hurt. A lot. > > > And people shouldn't think it is the God of CS. > > > > I think you're mistaken about this. > > I'll give you a rule of thumb, and I can back it up with historical fact. > You can back up yours with nada.
Yes, if twenty-seven years of engineering experience with complex software in over fourteen languages and across a dozen operating systems at every level from kernel out to applications is nada :-). Now you listen to grandpa for a few minutes. He may be an old fart, but he was programming when you were in diapers and he's learned a few tricks...

> Face it. Every once in a while, you have to start afresh. Tell people that
> "Ok, we can't share this code any more, it's getting to be a major
> disaster".
I'm not arguing that splitting a driver is always wrong -- I can easily imagine making that call myself in your shoes, and for the exact reasons you give. I'm arguing that the perspective from which you approach this issue causes you to underweight the benefits of sharing code, and to not look for ways to do it as carefully and systematically as you ought.
When you were in college, did you ever meet bright kids who graduated top of their class in high-school and then floundered freshman year in college because they had never learned how to study? It's a common trap. A friend of mine calls it "the curse of the gifted" -- a tendency to lean on your native ability too much, because you've always been rewarded for doing that and self-discipline would take actual work.
You are a brilliant implementor, more able than me and possibly (I say this after consideration, and in all seriousness) the best one in the Unix tradition since Ken Thompson himself. As a consequence, you suffer the curse of the gifted programmer -- you lean on your ability so much that you've never learned to value certain kinds of coding self-discipline and design craftsmanship that lesser mortals *must* develop in order to handle the kind of problem complexity you eat for breakfast.
Your tendency to undervalue modularization and code-sharing is one symptom. Another is your refusal to use systematic version-control or release-engineering practices. To you, these things seem mostly like overhead and a way of needlessly complicating your life. And so far, your strategy has worked; your natural if relatively undisciplined ability has proved more than equal to the problems you have set it. That success predisposes you to relatively sloppy tactics like splitting drivers before you ought to and using your inbox as a patch queue.
But you make some of your more senior colleagues nervous. See, we've seen the curse of the gifted before. Some of us were those kids in college. We learned the hard way that the bill always comes due -- the scale of the problems always increases to a point where your native talent alone doesn't cut it any more. The smarter you are, the longer it takes to hit that crunch point -- and the harder the adjustment when you finally do. And we can see that *you*, poor damn genius that you are, are cruising for a serious bruising.
As Linux grows, there will come a time when your raw talent is not enough. What happens then will depend on how much discipline about coding and release practices and fastidiousness about clean design you developed *before* you needed it, back when your talent was sufficient to let you get away without. The code-sharing issue -- more specifically, your tendency to abandon modularization and re-use before you probably ought to -- is part of this. Andy Tanenbaum's charge against you was not entirely without justice.
The larger problem is a chronic topic of face-to-face conversation whenever two or more senior lkml people get together and you aren't around. You're our chosen benevolent dictator and maybe the second coming of Ken, and we respect you and like you, but that doesn't mean we're willing to close our eyes. And when you react to cogent and well-founded arguments like Rogier Wolff's as you have -- well, it makes us more nervous.
I used to worry about what would happen if Linus got hit by a truck. With all respect, I still worry about what will happen if the complexity of the kernel exceeds the scope of your astonishing native talent before you grow up.
After LWN it was featured on Linux Today (LWN ESR to Linus on the curse of the gifted) and thus became a public event. The discussion contained some pro-Linus comments (but surprisingly I found several pro-ESR comments too, which IMHO is a pretty alarming sign, given that the letter was a badly disguised attack on Linus' authority):
RealisticBoy - Subject: how arrogant of esr... ( Aug 28, 2000, 17:18:48 ) to be lecturing Linus on the fine point of OS design!!! esr may have done some good writing, but i don't think he's as much the authority on os design as he thinks he is. Shame on him.
tony stanco - Subject: shut up and show them the code ( Aug 28, 2000, 17:45:47 ) ESR,
You once told RMS to shut up and show them the code (I cringed when I read that). Well, I guess it's time for you to do the same thing.
If you don't like what Linus is doing with Linux, fork it!
[God, this Presidential Committee thing has put me in a foul mood today. I'm going for lunch].
Jamie Redknapp - Subject: bored with raymond ( Aug 28, 2000, 23:09:17 )
Aren't you bored with ESR, his ego, and his politics? (He carries a gun, so he is big and free.)
Saddened and irritated in equal measure, because ESR is rapidly becoming a negative against the Linus/FSf/Gnu/Opensource movement as the political views take over.
This has nothing to do with free software. Increasingly, the pronouncements involve politics beyond Linux and show a total absence of respect for the individuals around the world who have contributed and don't share the politics.
Sorry friend, your politics are not mine.
As for Ego and Genius. I doubt whether Linus Torvalds thinks that he is a genius, and even less that Linus is a "libertarian" in that narrow sense.
It's possible that Linus Torvalds doesn't think that either he or Eric Raymond is a genius! (What is a genius, anyway?) (But evidently ESR is one, according to his mail.)
ESR asks that Linus grow up. We have all had grandpas who have receded to childhood. Perhaps ESR is one of them?
Can we talk sensibly?
In mid-1999, Torvalds said Linux 2.4 would be ready that fall. In spring 2000, he said October 2000 was more likely, and the latest goal was the end of the year, with the unexpected release of the kernel during the first week of January 2001. As with version 2.2, the pressure to release the kernel mounted and became too difficult to bear, and as with version 2.2, it took half of 2001 to stabilize this "production" version. At some point the kernel mailing list became too hot a discussion forum for Linus to postpone the release any further. It became a tradition that at some point Linus abruptly decides that "enough is enough" and releases a new "production" version of the kernel. Version 2.4.0 was released on Jan 4, 2001. Here is his pretty funny announcement (see also Linux Today - Linus Torvalds And oh, btw..2.4.0 is out):
Date: Thu, 4 Jan 2001 16:01:22 -0800 (PST) From: Linus Torvalds firstname.lastname@example.org To: Kernel Mailing List email@example.com Subject: And oh, btw..
In a move unanimously hailed by the trade press and industry analysts as being a sure sign of incipient braindamage, Linus Torvalds (also known as the "father of Linux" or, more commonly, as "mush-for-brains") decided that enough is enough, and that things don't get better from having the same people test it over and over again. In short, 2.4.0 is out there.
Anxiously awaited for the last too many months, 2.4.0 brings to the table many improvements, none of which come to mind to the exhausted release manager right now. "It's better", was the only printable quote. Pressed for details, Linus bared his teeth and hissed at reporters, most of which suddenly remembered that they'd rather cover "Home and Gardening" than the IT industry anyway.
Anyway, have fun. And don't bother reporting any bugs for the next few days. I won't care anyway.
It looks like the exhausted creator of the Linux kernel still has a sound sense of humor. Technically, version 2.4 of the kernel was mainly a polishing effort. But the size of the kernel increased, and in some areas performance decreased. Still, it improved Linux performance in such key areas as the TCP/IP stack and introduced better SMP capabilities. Socially, it was (at least partially) a reaction to the harsh critique of Linux kernel quality by Thompson and to the Mindcraft fiasco. Some people (incorrectly) assume that, as both a product and a phenomenon, Linux has reached a plateau where it is now considered just another mainstream player in the overall Unix market. Despite the kernel's monstrous size, technically that is still not true in some areas; what is true is that the development of Linux is no longer fun. And that means that developers working for companies like Red Hat, IBM, and Caldera are the key to the kernel's future.
As Linus Torvalds rather hypocritically ("I'm not much of a leader") admitted himself, the problems are real; see his eWEEK interview, Growing pains slow Linux cycle:
"[I allowed] too much new code too late. I'm not always as stern ... as I should be, and I end up accepting changes even after the point where I know I shouldn't."
How long this arrangement can be sustained is the subject of heated debate. Some developers have argued that as Linux gains ground in the enterprise, it's essential that the release of new kernels conform to a formal schedule. Others argue that technological innovation and stability cannot be held hostage to release deadlines.
Torvalds said the ultimate responsibility for time-frame control and releases is with vendors interested in selling to enterprises. "I'm not much of a leader," he said. "I see myself more as a guide. ... The only thing I ever personally worry about is the technical side. This is what makes people trust me."
But Torvalds said he's seen what scheduled releases and missed deadlines have done to users beholden to the commercial software industry. "That doesn't mean that my standard kernel should be seen as the only game in town," he said. "If it was, the whole point of open source would be lost. I'd just be another Microsoft [Corp.]."
The number of lines in the kernel has grown exponentially over the last ten years and reached 300K lines (see Linux Benchmark Lines Data), which casts a long shadow on Linus' assertion that he is doing this stuff "just for fun" ;-). 300K lines is just too much for any mortal, even if you limit yourself to pure configuration management of patches.
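Counts like this are usually produced with nothing fancier than find and wc. A throwaway sketch on a synthetic tree (a real measurement would point at an actual kernel source directory instead; all paths here are made up):

```shell
# Build a tiny stand-in "source tree" and count its lines the way
# one would count a real kernel tree (.c and .h files only).
rm -rf /tmp/loc-demo && mkdir -p /tmp/loc-demo/fs /tmp/loc-demo/mm
printf 'int a;\nint b;\n'  > /tmp/loc-demo/fs/inode.c
printf 'int c;\n'          > /tmp/loc-demo/mm/page_alloc.c
printf '/* header */\n'    > /tmp/loc-demo/fs/fs.h

# wc prints one line per file plus a grand total; tail keeps the total.
find /tmp/loc-demo -name '*.[ch]' -print0 | xargs -0 wc -l | tail -1
```

For the real thing, substitute the top of a kernel tree for /tmp/loc-demo; different counting rules (whether to include headers, assembly, docs) explain why published figures for the same kernel vary.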
At this point Linux may have reached a turning point at which it can either address several important challenges or face problems that could limit future adoption. For example, in the IEEE paper Linux At a Turning Point, Neal Leavitt clearly stated that there is no "open source advantage" for a program that is this complex and at the same time still lacks "a high level of functionality including scalability, availability, manageability, and security", and that the dilemma for Linux is either to become a unified mainstream enterprise server OS or to languish in obscurity:
Enterprises may also be reluctant to consider Linux without a full framework of global service and support, vendor commitment, and a high level of functionality including scalability, availability, manageability, and security, said George Weiss, an analyst for market research firm Gartner.
In fact, a survey by the Miller Freeman media company concluded that the largest roadblock to implementing Linux is the perceived lack of commercial support and service, cited by almost a third of respondents.
Nonetheless, pushed in part by the growing demand for commercial implementations, Linux continues to develop and is beginning to address some of these concerns.
In view of what we discussed about Sun's and IBM's positions, only Intel is a real driving force pushing Linux in this direction, and thus Linux's chances are probably around 33%, or even less if we consider the difficult times for Red Hat and other Linux players, and the fragmentation forces within the enterprise Linux community related to increased competition.
Linus' decision to continue to control development of the 2.4 kernel after it was declared "production" was actually a forced one. The kernel was pretty fat, had several known architectural problems, and its memory subsystem was not stable enough for a production server environment. Here is how the experience with 2.4 was summarized later (in Jan 2002) by Moshe Bar:
In fact, if anything the 2.4 kernel series has so far shown a far greater ratio of embarrassing bugs, misbehaviours and sudden changes of direction than any of the previous Linux kernel series. The big VM fiasco in 2.4 really showed how erratic and opaque the decision making process of Linus Torvalds really is. For all his superb engineering capabilities, the question is starting to come up if he is also capable of handling the single most complex distributed project in mankind, Linux.
As a first reaction, many are now maintaining their own boutique kernels. Michael Cohen has his mc tree, Rik has his own tree, then you have the aa (Arcangeli), ac (Cox) and many more trees. Don't know what to do this week-end? Start your own tree!
In his paper "The kernel of pain" Joshua Drake summarized his experience in a way very unusual for the typical loyal "long live Linux" Linux user: "Let's call a spade a spade: For large servers, the 2.4 kernel has been a disaster." He stated that it was more or less OK for desktops (and I can confirm this; I used Mandrake 7.0 for more than a year without major problems), but caused intermittent lockups as a server, lockups that required rebooting the server. Here is his side of the story:
Let's start from the beginning. In July 2001, I was responsible for upgrading a customer's server from Red Hat 6.2 to Mandrake 8.0. The machine was built from scratch, and Mandrake was installed onto a freshly formatted RAID 5 array. We then migrated the Red Hat 6.2 applications to the new machine.
After a little configuration, the machine seemed to run fine. We successfully migrated the entire system in less than five hours. Considering this was a large-scale server, that was quite a feat and was certainly welcomed by our paying customer.
However, after about a month into deployment I started noticing strange problems with the machine. Intermittent lockups were the most common. The lockups appeared physical, and the machine was unrecoverable without a reboot.
While performing research on the problem, I learned there was a serious sync() bug in the 2.4 kernel. This bug exists in all kernel 2.4 versions until 2.4.6. The solution seemed simple: I upgrade the kernel.
About a week later, the machine locks up cold -- again. We considered it a fluke and rebooted. The very next day the machine locked up -- again. We do further research and find that the original 2.4 VM (Virtual Memory) implementation was causing problems. In my frustration and embarrassment, I would be inclined to call it bad design, but I don't know enough about the intricacies of the Linux kernel to say whether it was.
The VM problem was so horribly bad that the kernel team decided to rip out the older implementation and implement a completely new design. These problems continued as the kernel versions worked their way up through 2.4.11, which has a serious symlink bug that could lead to corrupted inodes. As of 2.4.13, things finally seemed to be cleaned up a bit. The kernel seemed to show more stability. Then we hit kernel 2.4.15.
Linux version 2.4.15 contained a bug that was arguably worse than the VM bug. Essentially, if you unmounted a file system via reboot -- or any another common method -- you would get filesystem corruption. A fix, called kernel 2.4.16, was released 24 hours later.
Kernel 2.4.16 now appeared to be the kernel of choice. It seemed as if it was possible that after almost a year of "stable" status that the 2.4 kernel would be usable in a production environment.
We still aren't there yet. Alas, the mire of trouble within the 2.4 series kernels continues. As of kernel 2.4.16, there is a serious bug in the OOM (out-of-memory) code that can cause system lockups. The lock-up bug in 2.4.16 has supposedly been fixed in 2.4.17pre4aa1.
The current kernel release is 2.4.17, and one would hope that it is stable, but a brief review of the changelog will show that the kernel team is still working on fine-tuning the new VM design, and the vast amount of changes that have been made are already making me weary of it.
As I reviewed the archives of late December, I found that the per-user limit support in the 2.4 series kernels is broken. With the limit support broken, any user -- privileged or not -- has the potential to suck up all of the machine's resources, effectively causing an intramural DoS (Denial of Service) attack. They could do this accidentally, and it would cause a great deal of grief for any system administrator.
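For context, the per-process side of such limits is the setrlimit(2) interface, which the shell exposes as ulimit. A minimal sketch of the safety net the broken per-user support was supposed to complement (the values are illustrative, not recommendations):

```shell
# Run work under tightened per-process limits inside a subshell, so
# the caps apply there and to its children but not to our own session.
(
  ulimit -v 1048576   # cap address space at ~1 GB (value is in KB)
  ulimit -n 256       # cap open file descriptors
  ulimit -n           # print the descriptor limit now in force: 256
)
```

Per-process caps like these limit what any single runaway program can grab; the broken piece Drake describes was the accounting that ties such limits to a user as a whole.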
So, what does all of this mean for me? It means that after five months of battling the new, better-than-fresh-butter, enterprise-ready 2.4 kernel, I am moving my customer back to the stodgy, conservative, more-enterprise-ready-than-2.4-has-been-since-its-release-almost-a-year-ago, 2.2 kernel-based Red Hat 6.2.
The 2.2 kernels may not handle large SMP machines as well, they may not handle large amounts of memory well (only 2 gigabytes), and they may have a practical limit of 2 gigabytes on a single file, but the 2.2 kernels don't crash or cause phone calls at 5:00 AM. Moreover, the 2.2 kernels don't make customers unhappy that they chose Linux as their server solution.
Here is an interesting post from Jeremy Zawodny's blog about his experience with running MySQL on 2.4:
I spent a fair amount of time on Friday trying to figure out why our FreeBSD servers running MySQL 4.0.2 were doing so much better than our Linux servers running MySQL 4.0.2. They're all slaves of the same 3.23.51 master and get roughly equal query loads, thanks to our Alteon load-balancers (yes, the ones that occasionally stop working right).
What I noticed while watching each of them with mytop is that the Linux boxes seem to have far more slow queries than the FreeBSD boxes. Now the FreeBSD boxes in question are newer. They're Compaq DL-380s with dual 1.2 GHz CPUs, 2GB of RAM, and 6 36GB SCSI disks. The Linux boxes are a bit older and slower. But the difference was still surprising. Over the last 24 hours, the FreeBSD boxes had each logged 3 slow queries, while the Linux boxes had logged a few thousand of them. Clearly something was up.
So I got on the boxes and noticed something odd. The load average on the Linux machines was higher than I'd expect. Rather than being in the 0.5 - 2.0 range, it was hitting between 7 and 9 during busy times. Odd. I ran top for a while to see if I noticed anything odd. Sure enough, after a few minutes, I found the pattern. The kswapd process was using up a fair amount of CPU time--sometimes as much as 99% of one CPU.
It gets more interesting. Both Linux boxes have swap disabled. It's been that way ever since I got sick of dealing with the 2.4 kernel's brain-dead virtual memory system last year. Why would kswapd even be running on a system with no swap? I have no idea.
But I decided to do some research and see if anyone had seen this before. The closest I got was this message on the linux-kernel mailing list, a complaint by MySQL AB's own Sascha Pachev.
He noted similarly odd behavior and asked that Rik look into it. Unfortunately, I haven't been able to find any follow-up messages.
So I went back to looking at the configuration on the machines in question. Both have 2GB of RAM, roughly half of which is for MySQL. I have the key_buffer set to 512M as well as the innodb_buffer_pool. That leaves 1GB for the OS cache, buffers, and related stuff. It should be more than enough, shouldn't it?
Just for the heck of it, I backed both values down to 384M and restarted MySQL. After an hour or so, things began to look bleak again. Lots of slow queries and the kswapd process (actually a kernel thread) was getting more CPU time than I'd like. It was at this point that I really began to marvel at the situation. The FreeBSD VM subsystem never does stupid things like this. In fact, our MySQL/FreeBSD boxes rarely swap unless I do something really stupid. How can the one in Linux be this much worse? Beats me.
Anyway, even more frustrated, I decided to re-enable swap and reboot the machine. At this point, I had little to lose. Once it came back up and I got MySQL started, things looked okay. kswapd wasn't as busy, and there were fewer slow queries. In fact, after 1 day and 9 hours, the server has only logged 66 slow queries. But according to top there's about 47MB of swap in use. The resident size of mysqld is 736MB, while its overall size is 816MB. Apparently the kernel swapped out part of the buffer pool for InnoDB or the MyISAM key buffer.
I guess that extra gig of memory isn't enough for it.
I fail to understand what it's doing. But the machine seems to perform better with swap enabled. The only theory I've developed so far goes like this: With swap disabled, the kernel (being very stupid), goes looking for pages that it can swap out. It finds them but cannot swap them to disk. Next time around, it repeats this process, never realizing how futile it is. With swap finally enabled, it can swap out some memory and get the breathing room that it thinks it needs.
If anyone has hints on how this can be tuned (like telling the kernel not to bother), I'd LOVE to hear about it.
Linux may have FreeBSD beat when it comes to threading, but it sure could learn a lot from FreeBSD when it comes to virtual memory management.
Update: Thanks to the folks at NewsForge, you can find a teaser for this blog entry here (and it's currently on the home page). They picked this one up quite fast. I'm impressed.
Update #2: Allow me to respond from some feedback that I've seen so far. First off, we've been running 2.4.18 for quite a while now. We started with 2.4.9, tried 2.4.12 and 2.4.16. There's only so much time I can spend switching kernel versions and re-testing. Now that 2.4.19 is out, we'll give it a shot.
A few folks have suggested that since FreeBSD is the best tool for the job, I should just shut up and use it. If only that were the case. I'll post another entry in a few days detailing the problems with running a high-volume MySQL server on FreeBSD. It has issues of its own, mostly related to FreeBSD's poor threads implementation.
Thanks for all the feedback so far. Some of it looks promising. The flames, however, are simply ignored.
Posted by jzawodn at August 04, 2002
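For reference, the memory split Zawodny describes would look roughly like this in my.cnf. The two 512M values are the ones he quotes; the rest of the fragment is an illustrative sketch, not his actual configuration:

```ini
# my.cnf sketch: a 2 GB box giving ~1 GB to MySQL buffers and
# leaving ~1 GB for the OS page cache (per the post above).
[mysqld]
key_buffer              = 512M   # MyISAM index cache
innodb_buffer_pool_size = 512M   # InnoDB data/index cache
```

His complaint, in these terms, is that even with half of RAM left free the 2.4 VM still chose to swap parts of those buffers out.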
All in all, version 2.4 proved to be one of the most problematic Linux kernels. Actually, version 2.2 also had some lockup problems:
7 Jan 2001 (7 posts) Archive Link: Which kernel fixes the VM issues?
Jim Olsen had been a victim of the do_try_to_free_pages problem reported in the recent past. He had some idea from following linux-kernel that the problem had been fixed or at least modified, and asked, "exactly which kernel should I use in order to rid my server of this VM issue? I'm uncomfortable (and always have been) with running pre* kernels on production machines, so i'd like to stick with 2.2.18, but I would like to know if it truly does fix the problem(s) with the VM. If I need to, though, I will (hesitantly) put a 2.2.19pre* kernel on the box." Alan Cox, Rik van Riel, and Ville Herva pointed out that 2.2.19pre2 had the actual fix. As Ville put it, "It's fixed in 2.2.19pre2 (which includes Andrea Arcangeli's vm-global-7 patch that (among other things) fixes this.) You can also apply the vm-global-7 patch to 2.2.18 if you like."
One strange thing about 2.4 development was that Alan Cox was excluded as a maintainer, and Linus himself continued to serve as development coordinator and configuration manager for version 2.4 until the end of 2001. Linus justified his decision to continue developing 2.4 a year after it was declared a "production kernel" in the following way:
Date: Sun, 25 Nov 2001 19:58:41 -0800 (PST) From: Linus Torvalds
Subject: Re: [RFC] 2.5/2.6/2.7 transition [was Re: Linux 2.4.16-pre1]
On Sun, 25 Nov 2001, Mike Fedyk wrote:
> Personally, I think that 2.4 was released too early. It was when the
> Internet hype was going full force, and nobody (including myself) could be
> faulted for getting swept up in the wave that it was.
That's not the problem, I think. 2.4.0 was appropriate for the time.
The problem with _any_ big release is that the people you _really_ want to test it won't test it until it is stable, and you cannot make it stable before you have lots of testers. A basic chicken-and-egg problem, in short.
You find the same thing (to a smaller degree) with the pre-patches, where a lot more people end up testing the non-pre-patches, and inevitably there are more percieved problems with the "real" version than with the pre-patch. Just statistically you should realize that that is not actually true ;)
> 1) Develop 2.5 until it is ready to be 2.6 and immediately give it over to
> a maintainer, and start 2.7.
I'd love to do that, but it doesn't really work very well. Simply because whenever the "stable" fork happens, there are going to be issues that the bleeding-edge guard didn't notice, or didn't realize how they bite people in the real world.
So I could throw a 2.6 directly over the fence, and start a 2.7 series, but that would have two really killer problems
(a) I really don't like giving something bad to whoever gets to be maintainer of the stable kernel. It just doesn't work that way: whoever would be willing to maintain such a stable kernel would be a real sucker and a glutton for punishment.
(b) Even if I found a glutton for punishment that was intelligent enough in other ways to be a good maintainer, the _development_ tree too needs to start off from a "known reasonably good" point. It doesn't have to be perfect, but it needs to be _known_.
For good or for bad, we actually have that now with 2.4.x - the system does look fairly stable, with just some silly problems that have known solutions and aren't a major pain to handle. So the 2.5.x release is off to a good start, which it simply wouldn't have had if I had just cut over from 2.4.0.
The _real_ solution is to make fewer fundamental changes between stable kernels, and that's a real solution that I expect to become more and more realistic as the kernel stabilizes. I already expect 2.5 to have a _lot_ less fundamental changes than the 2.3.x tree ever had - the SMP scaliability efforts and page-cachification between 2.2.x and 2.4.x is really quite a big change.
But you also have to realize that "fewer fundamental changes" is a mark of a system that isn't evolving as quickly, and that is reaching middle age. We are probably not quite there yet ;)
I think that he simply felt that this version of the kernel was too problematic to pass to somebody else. Later, the VM problems led to an unprecedented decision to replace the VM in the production kernel. When the 2.4.0 Linux kernel was released in January 2001, it had a new VM designed by Rik van Riel. Nine months later, with the release of 2.4.10, Linus shockingly ripped it out, replacing it with an older VM developed by Andrea Arcangeli (derived from the one used in 2.2).
Though an impressive feat, that bold decision produced additional tension because there was no consensus about how Linux should do virtual memory management (VM). It was not only that Andrea Arcangeli and Rik van Riel had two competing solutions; it seems there was also discontent between Linus Torvalds and Alan Cox, with the latter supporting the abandoned Rik van Riel solution. Rik continued to develop his VM, with an initially strong but quickly diminishing following. For example, Alan Cox initially included Rik's VM in his branch, which is essentially the Red Hat branch; he also added the rmap VM. In an interview, Rik van Riel explained some of the problems he had with this decision:
Q: With kernel 2.4.10 we have seen that Linus Torvalds preferred Arcangeli's VM to yours. What do you think of his decision? And why did he make it?
A: It was a strange situation: first Linus ignores bugfixes by me and Alan for almost a year, then he complains we "didn't send" him the bugfixes and he replaced the VM, of course. The new VM has better performance than the old VM for typical desktop systems ... but it fails horribly on more systems than the old VM. Redhat, for example, cannot ship the new VM in their distribution because it'll just fall apart for the database servers some of their users run. At least now my code is gone I no longer have to work together with Linus, which is a good thing ;)
Q: Why is it a good thing?
A: With Linus out of the way, I can make a good VM. I no longer have to worry about what Linus likes or doesn't like. This is mostly important for intermediary code, where some of the "ingredients" to a VM are in place and others aren't yet in place. Such code can look ugly or pointless if you don't have the time to look at the design for a few days, so Linus tends to remove it ... even though it is needed to continue with development.
Q: The new VM has drawn criticism from many developers, who have seen in such a radical change the cause of instability. Also your words have been very hard: "Look, the problem is that Linus is being an asshole and integrating conflicting ideas into both the VM and the VFS, without giving anybody prior notice and later blame others." Are you of the same mind?
A: Yes, though I guess I have to add that I have a lot of respect for Linus. He is a very user unfriendly source management system, but he is also very honest about it. One thing to fix this issue for me would be an automatic patch bot, a computer program to automatically resubmit patches to Linus. That way I can use Linus just as a source management program with intelligent feedback and I don't have to worry about having to re-send patches 50 times before they get into the kernel because that will be automatic ;)
Q: It seems that there is no bad blood between you lately, is there?
A: Actually I'm not holding a grudge or anything. The truth of the matter is that I'm just as stubborn as he is and I can't stand the way that "the Linus source control system" works. I suspect that once a patchbot is in place (a TCP-like system on top of the lossy Linus source control system) we'll both be much happier.
Q: I hope that you can solve your problems. Alan Cox, after continuing to use your VM for a while, has now found Arcangeli's VM safe enough to be used. What do you think of that?
A: Judging from the fact that Redhat is busy porting my old VM to 2.4.17 to use in their kernel RPM, I guess he's reversed this decision.
Q: I have been following your development of the OOM killer and your approach to VM with reverse mapping, and I find them very interesting, but perhaps difficult to realize in an efficient way. Could you explain to our readers how they work and why they are needed?
A: OK, I'll start with the OOM killer, as that one is easy to explain.
Basically on a Linux system you have a certain amount of RAM and of swap. RAM is used for kernel data, for caching executables, libraries and data from the disk and for storing the memory programs use. As everybody knows, systems never have enough RAM, so we have to choose what data to keep in RAM and what data to move to swap space. However, in some situations systems do not have enough swap space and the system just doesn't have enough space to run all the programs which were started. In that situation we can either wait until somebody frees memory (not possible if everybody is waiting) or we need to free up space forcefully, for example by killing a process. Of course you do not want a random process to be killed, that could be init, syslogd or some other critical process. The OOM killer is responsible for detecting when the system is out of memory and swap and for selecting the right (read: least bad) process to kill.
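The selection logic Rik describes can be sketched in a few lines of C. This is a toy model only: the real kernel heuristic (badness() in mm/oom_kill.c) also weighs runtime, niceness and privileges, and the structure and field names below are illustrative, not kernel data structures. The core idea survives: never kill essential processes, and among the rest pick the "least bad" victim, here simply the biggest memory user.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of OOM-killer victim selection (illustrative names). */
struct task {
    const char *name;
    unsigned long rss_pages;  /* resident memory, in pages */
    int essential;            /* init, syslogd, ... must never be killed */
};

/* Return the index of the "least bad" victim, or -1 if none is killable.
 * Here "least bad" just means the largest non-essential memory user. */
int pick_oom_victim(const struct task *tasks, size_t n)
{
    int victim = -1;
    unsigned long worst = 0;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].essential)
            continue;                 /* skip critical processes */
        if (tasks[i].rss_pages > worst) {
            worst = tasks[i].rss_pages;
            victim = (int)i;
        }
    }
    return victim;
}
```

In this sketch a runaway memory hog is chosen before a small editor, and init or syslogd are never candidates at all, which is exactly the property Rik emphasizes above.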
The need for the reverse mapping (rmap) VM is much more complex to explain, but basically the old VM is facing a number of problems:
- we have different memory zones and need to keep free pages in each zone.
- pages can be shared by many processes, but we don't know which ones.
- we need to balance between evicting pages from the cache and evicting pages from processes.
These three problems basically boil down to one problem, we don't know how much a particular page in RAM is used or who is using it. In the old VM, for example, you can have the situation where programs are using pages in the 16 MB large ISA DMA zone, but you might need to scan all memory of all processes to free one page in that zone because you do not know who is using those pages.
The reverse mapping VM attempts to solve this problem by keeping track of who is using each page. That way, we can just scan the pages in the DMA zone and free one which isn't used much, meaning we need to scan at most the metadata for 16 MB of pages ... in practice a lot less than that. It also means we can scan processes and cache memory using the same algorithm, so it simplifies that part of the VM a lot.
Currently -rmap is still in development, but in most situations where the normal VM works ok the performance of -rmap is similar. Then there are some situations where the normal VM falls apart due to the problems described above; in those areas -rmap still has good performance.
This means that benchmarks probably won't show any big advantage of -rmap over the standard VM, but -rmap will make sure that your server can survive better if it gets slashdotted. It also restores good behaviour in a number of other situations where the standard VM just falls apart, most notably people have reported a more responsive desktop with -rmap. If you want to try it, you can download the -rmap patch at http://surriel.com/patches/ or access the bitkeeper tree at http://linuxvm.bkbits.net/.
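The zone-scanning problem described above can be illustrated with a minimal sketch. In the real -rmap patch each physical page carries a chain of back-pointers to the page-table entries mapping it (pte_chains); here a simple per-page map counter stands in for that chain, and all names and the flat arrays are illustrative, not kernel interfaces.

```c
#include <assert.h>

/* Toy sketch of the reverse-mapping idea: the VM keeps per-page usage
 * information, so freeing memory in one zone only requires scanning that
 * zone's page metadata, not the page tables of every process. */
#define NPAGES 16

static int mapcount[NPAGES];   /* how many mappings point at each page */

static void map_page(int page)   { mapcount[page]++; }
static void unmap_page(int page) { mapcount[page]--; }

/* Find a free page in zone [lo, hi), e.g. the 16 MB ISA DMA zone.
 * With reverse mapping this is a scan of the zone's metadata only;
 * the old VM had to walk all processes to learn who maps these pages. */
int find_free_page_in_zone(int lo, int hi)
{
    for (int p = lo; p < hi; p++)
        if (mapcount[p] == 0)
            return p;
    return -1;  /* zone exhausted: must evict a little-used page instead */
}
```

The point of the sketch is the cost bound Rik mentions: to free one DMA-zone page you inspect at most the metadata for that zone, regardless of how many processes exist.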
Q: How do you plan to solve the swap-storm problem on machines with little RAM under heavy load?
A: Well, let me begin with the bad news: there is no magic VM. If your programs need more RAM than what is available in the computer and they need it all at the same time, there is nothing the VM can do. However, if they only need a smaller amount of RAM at a time, or if you have multiple programs running and some of them fit in RAM, there are some possible solutions. If you have one program and most of the time it needs less RAM than what the system has, the VM can make the system perform decently by just chosing the right pages to swap out. If you have (for example) 5 programs that each need 30% of RAM, the only possible solution is to not run more than 3 at a time.
Luckily this is something where the VM could help, by just stopping two of the processes for a while and stopping two others later on, so each process gets a chance to run at full speed for part of the time. At the moment the only thing which is implemented in -rmap is the better pageout selection. Temporarily stopping some processes when the load gets too high is something I still need to work on. There is also a third mechanism, which is already partially present. When the system is low on RAM, we just don't do readahead. This means that while our disk IO is slow for a while, at least we don't make the situation worse than it is and make it possible for the pages which are in RAM to stay for a bit longer.
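The "5 programs x 30% of RAM" arithmetic above amounts to simple admission control: admit processes until their combined working sets would overcommit RAM. The function below is a sketch of that reasoning only, not an interface of the -rmap patch; all names are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Toy admission control: given each process's working-set size (in pages)
 * and total RAM (in pages), how many processes can run concurrently
 * without thrashing?  Processes are admitted in array order. */
int max_concurrent(const unsigned long *ws_pages, size_t n,
                   unsigned long ram_pages)
{
    unsigned long used = 0;
    size_t admitted = 0;
    for (size_t i = 0; i < n; i++) {
        if (used + ws_pages[i] > ram_pages)
            break;                    /* next working set no longer fits */
        used += ws_pages[i];
        admitted++;
    }
    return (int)admitted;
}
```

With five processes each needing 30% of RAM, only three fit, matching Rik's example: the VM's only real option is to stop the other two for a while and rotate them in later.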
Q: Someone proposes, as a possible solution for the current VM's problems under high load, switching to local paging when the number of free pages is... say 15% more than freepages.low, making *sure* every process has at least its working set in memory. What is your opinion of it?
A: That solution is very often proposed by people who don't think about the problem. However, I never found anybody capable of explaining exactly how that "solution" is supposed to fix the problem. This is one of the minor annoyances of VM development, hundreds of people come to me with "solutions" of which they don't understand how they are supposed to work or whether they are related to the problem at hand; occasionally people come forward with good ideas, but most of the time people just come with the same idea. If you're interested in discussing such ideas, you're always welcome to drop by in #kernelnewbies on irc.openprojects.net, just don't think you're the first person to think of a particular idea because most likely you aren't ;)
Actually, the replacement failed to fully solve the problems with the VM, as is evident from this post from Kernel Traffic #99:
13 Dec 2000 - 17 Dec 2000 (17 posts) Archive Link: VM problems still in 2.2.18
Mark Symonds [*] reported locked boxes with many "VM: do_try_to_free_pages failed" errors scrolling down the screen on 2.2.18; he added, "Something else I noticed is that the Load average is usually around 0.08, but when I let it idle for a few mins, just tapping the spacebar in a terminal will cause it to stop responding for 10 or so seconds with the load average skyrocketing to over 6. After that the system sometimes recovers and starts responding normally, other times it will die." Alan Cox [*] replied, "Andrea's VM-global patch seems to be a wonder cure for those who have tried it. Give it a shot and let folks know." Mark and someone tried the patch and reported complete success, at which point Alan remarked, "I think Andrea just earned his official God status ;)" Another person asked if Andrea's patch would make it into 2.2.19, and Alan replied:
The question is merely 'in what form' . I wanted to keep them separate from the other large changes in 2.2.18 for obvious reasons.
Andrea - can we have the core VM changes you did without adopting the change in semaphore semantics for file system locking which will give third party fs maintainers headaches and doesnt match 2.4 behaviour either ?
Andrea Arcangeli [*] replied with some technical comments, and he and Alan went back-and-forth for awhile.
That situation spelled trouble for anybody who wanted to use Linux in a production environment. As leading FreeBSD developer Matt Dillon aptly put it in his interview with OSNews (OSNews.com - Exploring the Future of Computing):
7. From the technical point of view, how would you rate the Linux 2.4 kernel compared to BSD's?
I don't know enough about recent linux kernels to be able to rate them, nor would it be P.C. I do follow the VM work being done in Linux and in particular Rik van Riel's work. I think Linux is going through a somewhat painful transition as it moves away from a Wild-West/Darwinist development methodology into something a bit more thoughtful. I will admit to wanting to take a clue-bat to some of the people arguing against Rik's VM work who simply do not understand the difference between optimizing a few nanoseconds out of a routine that is rarely called verses spending a few extra cpu cycles to choose the best pages to recycle in order to avoid disk I/O that would cost tens of millions of cpu cycles later on. It is an attitude I had when I was maybe 16 years old... that every clock cycle matters no matter how its spent. Bull!
Still, scheduler problems continued for several years, and relations between kernel developers sometimes became less than cordial, as the following post attests:
Re:huh? (Score:5, Interesting)
by arvindn (542080) on Saturday March 08, @02:01PM (#5467657)
(http://theory.cs.iitm.ernet.in/~arvindn/ | Last Journal: Monday June 16, @01:39AM)
Kernel Dev's Gone Wild volume 3
Well, here is Linus replying to Molnar's post:
From: Linus Torvalds
Subject: Re: [patch] "HT scheduler", sched-2.5.63-B3
Date: Thu, 6 Mar 2003 09:03:03 -0800 (PST)
On Thu, 6 Mar 2003, Ingo Molnar wrote:
> the whole compilation (gcc tasks) will be rated 'interactive' as well,
> because an 'interactive' make process and/or shell process is waiting on
No. The make that is waiting for it will be woken up _once_ - when the
thing dies. Marking it interactive at that point is absolutely fine.
> I tried something like this before, and it didnt work.
You can't have tried it very hard.
In fact, you haven't apparently tried it hard enough to even bother giving
my patch a look, much less apply it and try it out.
> the xine has been analyzed quite well (which is analogous to the XMMS
> problem), it's not X that makes XMMS skip, it's the other CPU-bound tasks
> on the desktops that cause it to skip occasionally. Increasing the
> priority of xine to just -1 or -2 solves the skipping problem.
Are you _crazy_?
Normal users can't "just increase the priority". You have to be root to do
so. And I already told you why it's only hiding the problem.
In short, you're taking a very NT'ish approach - make certain programs run
in the "foreground", and give them a static boost because they are
magically more important. And you're ignoring the fact that the heuristics
we have now are clearly fundamentally broken in certain circumstances.
I've pointed out the circumstances, I've told you why it happens and when
it happens, and you've not actually even answered that part. You've only
gone "it's not a problem, you can fix it up by renicing every time you
find a problem".
Get your head out of the sand, and stop this "nice" blathering.
Some people started to talk about the benefits of having two VMs, and that is actually a plausible spin :-) : anything that happens in Linux will be painted by zealots as a good thing. I do not see any real benefit in having two VMs in the production kernel: it complicates a lot of things and actually means a fork. But the fact that Rik van Riel decided to continue working on his VM is not only an interesting revolt against Linus' decision; the implicit Red Hat support of a fork is an interesting story too, and for a while it kept both Linus and Andrea under huge pressure. It looks like this event demonstrated that the Linux kernel had reached a point where different trees for different groups became a viable (albeit temporary) method of solving kernel problems. This situation was still present in 2002, as Andrew Morton's February 14, 2002 interview suggests:
Q: What are some of the most outstanding issues that still need to be addressed?
A: One big one is (surprise) the VM. It is still not working adequately. Andrea has a patch which I'm sure will improve things. But this patch is big enough to stun an elephant and needs to be split up, cleaned up and fed into the tree in a way in which we can monitor its effects. By all accounts it is working well for SuSE customers and those who have tested it, but until it's fully integrated, more widely tested and everyone is reasonably happy with it, we have a VM problem.
Apart from that, various reports of machines mysteriously locking up under load. The ia32 APIC handling still seems to be wonky. Very bad disk read latencies when there is a heavy write load. Reports of disappointing filesystem throughput - Andrea's patches may solve most of these, but of course we don't know yet. Andre's big IDE patch needs to be merged in and settled down.
We're getting a number of reports of corruption of some long pointer chains in the kernel. Some of these will be due to bad memory, but not all, I suspect. Something somewhere is stomping on memory. Possibly the problem was introduced around the 2.4.13 timeframe. I'm collecting these reports, trying to discern a pattern.
The "Kernel of Pain" thread at Slashdot was very, very interesting - I think it shows that we just ain't there yet. See
The other problem, well known to the Linux kernel development community, which periodically resurfaced in different forms, is the unreliability of Linus as a configuration manager. There are just too many patches for one person to fully understand and keep up with without a solid configuration management system and supplementary research staff. One interesting aspect of this problem is that several large Linux companies, especially Red Hat and SuSE, put a lot of effort into creating private kernels considerably more reliable than stock "Linus-approved" kernels. Sometimes obvious bugs are fixed; sometimes more substantial changes are offered along the way. The problem is that many of these patches never get incorporated back into the stock kernel. Here is one example from Gentoo Linux -- missing patches:
... While this is certainly not a violation of the GPL on the part of the distro companies (the GPL requires that modifications be made available, but does not require that anyone is informed of these modifications or that important bug fixes are actually applied), it certainly has a negative impact on Linux as a whole. In effect, if you do not use distro X or Y, then you do not benefit from these fixes. Without focusing on assigning blame, we'd like to encourage everyone to do what they can to get good patches flowing to Marcelo so he can incorporate them into the stock kernel. Here's an example of a "no brainer" patch from Mandrake's cooker kernel that was created nearly a year ago (back in the era of 2.4.3-ac4) yet has still not been incorporated into the stock kernel. Was it never submitted? Was it forgotten or lost? I don't particularly care, as long as this general situation gets addressed. No-brainer patch follows...
--- linux/mm/bootmem.c.chmou	Thu Apr 12 04:13:23 2001
+++ linux/mm/bootmem.c	Thu Apr 12 04:13:57 2001
@@ -86,7 +86,8 @@
 	if (end > bdata->node_low_pfn)
 		BUG();
 
-	for (i = sidx; i < eidx; i++)
+	/* subtle: eidx is the last index we need to reserve */
+	for (i = sidx; i <= eidx; i++)
 		if (test_and_set_bit(i, bdata->node_bootmem_map))
 			printk("hm, page %08lx reserved twice.\n", i*PAGE_SIZE);
 }
While this patch may not be very significant, it doesn't take a genius to figure out that 20 or 30 similar missing patches could have a big effect on the quality of the Linux kernel. And of course, many of these ignored fixes are significant. Ignoring bug fixes wastes user and developer time, and makes Linux a not-so-fun place to be. If we're truly concerned about having a quality kernel, happy users and continued growth in the desktop, server and enterprise markets, then we need to treat every single bug fix as a prized possession. A certain number of man hours went in to tracking down and producing each fix, and if they are forgotten or ignored, then those man hours were wasted. And if these fixes only appear in "special" kernels, then the benefit of these fixes is diminished incredibly. It is already difficult enough to investigate a problem a user is having, track it down, and create a patch in order to fix it. In fact, the Linux community is already pretty bad at doing this. So, if already-tracked-down and verified fixes get ignored -- well, we simply can't allow that to happen and expect to have a viable kernel. In order for Linux to gain maximum benefit from its distributed development model, it's absolutely essential that nothing falls through the cracks. We ask that everyone do their part.
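The bug that Mandrake patch fixes is a classic inclusive-bound off-by-one: when eidx is the *last* index that must be reserved, a `<` loop bound silently leaves one page unreserved. The fragment below models the bitmap in isolation to show the difference; the array and function names are illustrative, not the kernel's.

```c
#include <assert.h>

/* Minimal model of the bootmem reservation bitmap (illustrative names). */
#define NBITS 32
static int bitmap[NBITS];

/* Reserve indices sidx..eidx INCLUSIVE; returns how many new bits were
 * set.  Writing "i < eidx" here, as the pre-patch kernel did, would skip
 * the last page of the range and leave it free to be handed out again. */
int reserve_range(int sidx, int eidx)
{
    int set = 0;
    for (int i = sidx; i <= eidx; i++)
        if (!bitmap[i]) { bitmap[i] = 1; set++; }
    return set;
}
```

Reserving the range 4..7 must touch four pages, not three; with the buggy exclusive bound, page 7 would remain unmarked and could later be allocated over reserved data.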
Rob Landley, a computer programmer, writer and Linux evangelist, posted a proposal to the Linux kernel development list calling for a 'Patch Penguin' -- an idea borrowed from the Perl development community, which uses a rotating integration manager responsible for merging fixes. That might help to resolve the myriad of small problems that plague the current kernel:
The proposal came after many developers had grown frustrated with Torvalds for not keeping up with the slew of minor fixes hatched by volunteers, said Landley; a situation that, he added, has become a source of underlying tension in the community.
"Right now, the patch process is manageable, but it's showing stress fractures, and I'm proposing to relieve that stress before an earthquake," said Landley, after his proposal set off a heated discussion on the list between Torvalds and several developers. "If the stress keeps growing, the more and more likely that something catastrophic will happen."
The full description of the changes in configuration management and the introduction of Bitkeeper is beyond the scope of this chapter, which is limited to the first ten years of Linux. Interested readers might benefit from the Business Week story which, of course, should be taken with a grain of salt like everything Business Week writes. Still, it comes close to pinpointing the problem of overcontrol on the part of Linus at a point when the kernel had definitely outgrown its initial developer:
Five years ago, Linus Torvalds faced a mutiny. The reclusive Finn had taken the lead in creating the Linux computer operating system, with help from thousands of volunteer programmers, and the open-source software had become wildly popular for running Web sites during the dot-com boom. But just as Linux was taking off, some programmers rebelled. Torvalds' insistence on manually reviewing everything that went into the software was creating a logjam, they warned. Unless he changed his ways, they might concoct a rival software package -- a threat that could have crippled Linux.
The threat never materialized. Still, it does not look like fun to be the owner of a fat kernel, no matter how pure your motives are :-)
Copyright © 1996-2020 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: March 12, 2019