Softpanorama
May the source be with you, but remember the KISS principle ;-)


Softpanorama Bulletin
Vol 23, No.11 (November, 2011)


Editorial

Softpanorama classification of sysadmin horror stories

"More systems have been wiped out by admins than any hacker could do in a lifetime"

Rick Furniss

“Experience fails to teach where there is no desire to learn.”
"Everything happens to everybody sooner or later if there is time enough."
George Bernard Shaw

“Experience is the most expensive teacher, but a fool will learn from no other.”

Benjamin Franklin

Unix system administration is an interesting and complex craft. It's good if your work demands the use of your technical skills, creativity and judgment. If it doesn't, then you're in the absurd world of Dilbertized cubicle farms. There is a lot of deep elegance in Unix, and a talented sysadmin, like any talented craftsman, is able to expose this hidden beauty through masterful manipulation of complex symbolic objects, to the amazement of observers. You need to improvise on the job to get things done and create your own tools; you can't go "by the manual". Unfortunately, some of these improvisations produce unexpected side effects ;-)

In a way, this craftsmanship includes not only the magic execution of complex sequences of commands. Blunders, and the folklore about them, are a legitimate part of the craft too. It's human to err, after all. And if you are working as root, such an error can easily wipe out a vital part of the system. If you are unlucky, it is a production system. If you are especially unlucky, there is no backup.

Sh*t happens, but there is a system in any madness ;-). That's why it is important to try to classify typical sysadmin mistakes. Regardless of the reason, every mistake should be documented, as it constitutes an important lesson pointing to a whole class of similar possible errors. As the saying goes, "never waste a good crisis". Learning from your own mistakes, as well as the mistakes of others, is an important part of learning the craft. In addition, keeping a personal journal of such SNAFUs (as in the army, the role of incompetent bosses in such SNAFUs is often considerable) and periodically browsing it stimulates the personal growth of a system administrator. It is a must for any aspiring sysadmin.

There are several fundamental reasons for this:

Having a classification system for such errors is one way to increase situational awareness, and periodic reviews of them, like safety training, can help you avoid some of the situations described below. A spectacular blunder is often too valuable to be forgotten, as it tends to repeat itself ;-).

The typical case of loss of situational awareness is performing some critical operation on the wrong server. If you use a Windows desktop to connect to Unix servers, use MSVDM to create multiple desktops and change the background of each one to make typing a command in the wrong terminal window less likely. If you prefer to work as root, switch to root only on the server you are actually working on; use your regular ID and sudo on the others. Other related reasons include:
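
Another cheap defense, assuming a bash login shell, is to bake the hostname into the prompt itself, so every terminal window announces where you are (the colors and placement here are a suggestion, not a standard):

```shell
# Put user and hostname in every prompt (\u and \h are bash prompt
# escapes); give root a red prompt so danger is visible at a glance.
# Add this to /etc/profile or ~/.bashrc on each server.
if [ "$(id -u)" -eq 0 ]; then
    PS1='\[\e[1;31m\]\u@\h:\w #\[\e[0m\] '   # root: bold red
else
    PS1='\u@\h:\w \$ '                       # regular user
fi
```

Combined with per-server desktop backgrounds, this makes "wrong window" mistakes visibly harder to commit.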

In this page we present the "Softpanorama classification of sysadmin horror stories". It is not the first and, hopefully, not the last one. The author is indebted to Anatoly Ivasyuk, who created the original "The Unofficial Unix Administration Horror Story Summary". This list exists in several versions:

The saying is that experience keeps the most expensive school, but fools are unable to learn in any other ;-). To be able to learn from others, we need to classify classic sysadmin horror stories. One such classification, created by the author, is presented below.

The main source is an old, but still quite relevant, list of horror stories created by Anatoly Ivasyuk. There are two versions available:

Here is the author's attempt to reorganize the categories and enhance the material by adding more modern stories.

Softpanorama classification

  1. Missing backup. Please remember that the backup is your last chance to restore the system if something goes terribly wrong. That means that before any dangerous step you need to locate the backup and verify that it exists. Making another backup is also a good idea, so that you have two or more recent copies. At least attempting to browse the backup and see whether the data are intact is a must.
  2. Locking yourself out
  3. Performing an operation on the wrong computer. The naming schemes used by large corporations usually do not put enough distance between names to avoid such blunders. For example, you can type XYZ300 instead of XYZ200. Another common situation is when you have several terminal windows open and, in a hurry, start working on the wrong server. That's why it's important that the shell prompt show the name of the host. If you have both a production and a quality server for some application, it is often wise never to have two terminals opened to them simultaneously. Reopening a terminal is not a big deal, but it can save you from some very unpleasant situations.
  4. Forgetting which directory you are in and executing a command in the wrong directory. This is a common mistake if you work under severe time pressure or are very tired.
  5. Pattern-matching blunders. Novice sysadmins usually do not realize that the shell pattern '.*' also matches '..', often with disastrous consequences if commands like chmod, chown or rm are used recursively or with find.
  6. Filesystem traversal errors and other errors related to find. This is a very common class of errors; it is covered on a separate page, Typical Errors In Using Find.
  7. Side effects of performing operations on home or application directories due to links to system directories. This is a pretty common mistake, and I have committed it myself several times, with various, but always unpleasant, consequences.
  8. Misunderstanding the syntax of an important command and/or not testing a complex command before executing it on a production box. Such errors are often made under time pressure. One such case is using recursive rm, chown, chmod or find commands. Each of them deserves a category of its own.
  9. Ownership-changing blunders. These are common when using chown with find, so you need to test the command first.
  10. Excessive zeal in improving the security of the system ;-). A lot of current security recommendations are either pointless or counterproductive. In the hands of an overly enthusiastic and semi-competent administrator they become a weapon that no hacker can ever match. I think more systems have been destroyed by ill-conceived security measures than by hackers.
  11. Mistakes made under time pressure. Some of them were discussed above, but generally time pressure serves as a powerful catalyst for the most devastating mistakes.
  12. Patching horrors
  13. Unintended consequences of automatic system maintenance scripts
  14. Side effects/unintended consequences of multiple sysadmin working on the same box
  15. Premature or misguided optimization and/or cleanup of the system. Changing settings without fully understanding the consequences of such changes. Misguided attempts to get rid of unwanted files or directories (cleaning the system).
  16. Mistakes made because of differences between various Unix/Linux flavors. For example, in Solaris run level 5 shuts the machine down to a power-off state, while in Linux run level 5 is a running system with networking and X11.
  17. Stupid or preventable mistakes
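
The '.*' pitfall from item 5 above is easy to rehearse safely in a scratch directory. Note one hedge: recent bash versions (5.2 and later, with the default globskipdots option) no longer expand '.' and '..', but many other shells and older systems still do, which is exactly why the recursive commands in items 5 through 9 are dangerous:

```shell
# Rehearse the '.*' pitfall in a throwaway directory.
mkdir -p /tmp/globdemo/sub && cd /tmp/globdemo/sub
touch .hidden
# On many shells this expansion includes '.' and '..' as well as
# .hidden, which is why 'chown -R owner .*' climbs out of the tree.
echo .*
# A safe way to enumerate dotfiles: find never returns '.' or '..'
# when told to start below the current directory.
find . -mindepth 1 -maxdepth 1 -name '.*'
```

Running the echo first, before the real chmod/chown/rm, shows you exactly what the pattern will hit on the shell you are using.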

Some personal experience

Reboots of wrong server

Commands such as reboot or mkinitrd can be pretty devastating when applied to the wrong server. This mishap happens to a lot of administrators, including myself, so it is prudent to take special measures to make it less probable.

This situation is often made more probable by the fault-intolerant naming schemes employed in many corporations, where names of servers differ by one symbol. For example, the scheme serv01, serv02, serv03 and so on is pretty dangerous, as server names differ by only a single digit, and errors like working on the wrong server become much more probable.

Even more complex schemes like Bsn01dls9 or Nyc02sns10, where the first three letters encode the location, followed by a numeric suffix and then the vendor of the hardware and the OS installed, are prone to such errors. My impression is that unless the first letters differ, there is a substantial chance of working on the wrong server. Using names of favorite sports teams is a better strategy; the "formal" names can be kept as aliases.

Inadequate backup

If you try to distill the essence of horror stories, most of them were upgraded from mere errors to horror stories by inadequate backups.

Having a good recent backup is the key feature that distinguishes a mere nuisance from a full-blown disaster. This point is very difficult for novice enterprise administrators to understand. Rephrasing Bernard Shaw, we can say: "Experience keeps the most expensive school, but most sysadmins are unable to learn anywhere else." Please remember that in an enterprise environment you will almost never be rewarded for innovations and contributions, but in many cases you will be severely punished for blunders. In other words, typical enterprise IT is a risk-averse environment, and you had better understand that sooner rather than later...

Rush and absence of planning are probably the second most important reason. In many cases the sysadmin is stressed, and that impairs judgment.

Forgetting to chroot into the affected subtree

Another typical reason is abuse of privileges. If you have access to root, that does not mean that you need to perform all operations as root. For example, a simple operation such as

cd /home/joeuser
chown -R joeuser:joeuser .* 

performed as root causes substantial problems and time lost recovering the ownership of system files, because '.*' matches '..' and the recursion escapes into /home and beyond. Computers are really fast now :-(.

Even with plain user privileges there will be some damage: the command will affect all world-writable files and directories it can reach.

This is the case where chroot can provide tremendous help:

cd /home/joeuser 
chroot /home/joeuser  chown -R joeuser:joeuser .* 
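
If chroot feels heavy, a sketch of the same protection using find avoids the '.*' glob entirely (shown here against a scratch directory standing in for /home/joeuser; the paths are illustrative, and chown targets the current user so the demo runs unprivileged):

```shell
# Scratch directory standing in for a user's home.
demo=/tmp/chown-demo
mkdir -p "$demo/sub" && touch "$demo/.profile"
cd "$demo"
# -mindepth 1 makes find skip '.' itself, and find never emits '..',
# so the ownership change can never climb out of the tree.
find . -mindepth 1 -exec chown "$(id -un):$(id -gn)" {} +
```

On a real system the equivalent one-liner is simply chown -R joeuser:joeuser /home/joeuser, which never involves dot-glob expansion at all.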

Abuse of root privileges

Another typical reason is abuse of root privileges. Using sudo or RBAC (on Solaris) you can avoid some unpleasant surprises. Another good practice is to use screen, with one window for root operations and another for operations that can be performed under your own id.

Many Unix sysadmin horror stories are related to unintended consequences and unanticipated side effects of particular Unix commands, such as find and rm performed with root privileges. Unix is a complex OS, and many intricate details (like the behavior of commands such as rm -r .* or chown -R a:a .*) can easily be forgotten from one encounter to the next, especially if a sysadmin works with several flavors of Unix, or with both Unix and Windows servers.

For example, recursive deletion of files, either via rm -r or via find -exec rm {} \;, has a lot of pitfalls that can destroy a server pretty thoroughly in less than a minute if run without testing.

Some of those pitfalls can be viewed as a deficiency of the rm implementation (it should automatically refuse to delete system directories like /, /etc and so on unless the -f flag is specified; but Unix lacks system attributes for files, although the sticky bit on files can be used instead). It is wise to use a wrapper for rm. There are several more or less usable approaches to writing such a wrapper:
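
One hedged sketch of such a wrapper (the function name saferm and its protected list are inventions of this example, not a standard tool): refuse to touch a short list of critical directories and force interactive deletion for everything else.

```shell
# Refuse to remove critical system directories; everything else goes
# through 'rm -i' so each deletion must be confirmed.
saferm () {
    for target in "$@"; do
        # resolve symlinks so 'saferm /tmp/link-to-etc' is also caught
        resolved=$(readlink -f -- "$target" 2>/dev/null) || resolved=$target
        case "$resolved" in
            /|/etc|/bin|/sbin|/usr|/var|/home|/boot)
                echo "saferm: refusing to remove $target" >&2
                return 1 ;;
        esac
    done
    rm -i -- "$@"
}
saferm /etc || true    # prints the refusal and leaves /etc alone
```

A wrapper like this is a safety net, not a substitute for testing a recursive command on a scratch tree first.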

Another important source of blunders is time pressure. Trying to do something quickly often leads to substantial downtime. "Hurry slowly" is one of the sayings that is very true for sysadmins, but unfortunately very difficult to follow.

Sometimes your emotional state contributes to the problems: you didn't get much sleep, or your mind is distracted by personal problems. On such days it is important to slow down and be extra cautious.

Typos are another common source of serious, sometimes disastrous, errors. One rule should be followed (though as the memory of the last incident fades, this rule, like any safety rule, is usually forgotten :-): if you are working as root and performing dangerous operations, never type the directory path; copy it from the screen.

I once automatically typed /etc instead of etc while trying to delete a directory to free space in a backup directory on a production server (/etc is probably engraved in a sysadmin's head, as it is typed so often, and can be substituted for etc subconsciously). I realized the mistake and cancelled the command, but it was a fast server and one third of /etc was gone. The rest of the day was spoiled... Actually, not completely: I learned quite a bit about the behavior of AIX in this situation and about the structure of the AIX /etc directory that day, so each such disaster is actually a great learning experience, almost like a one-day training course ;-). But it's much less nerve-wracking to get this knowledge from a course...

Another interesting thing is that having a backup was not enough in this case: the backup software stopped working. The same was true for telnet and ssh. And this was a remote server in a datacenter across the country. I restored the directory from another, non-production server (overwriting the /etc directory on that second box with the help of the operations team; tell me about cascading errors and Murphy's law :-). Then netcat helped to transfer the tar file.

If you are working as root and performing dangerous operations, never type a path in a command; copy it from the screen. If you can copy the command from history instead of typing it, do it!

In such cases network services with authentication stop working, and the only way to transfer files is a CD/DVD, a USB drive, or netcat. That's why it is useful to have netcat on servers: netcat is the last-resort file transfer program for when services with authentication, like ftp or scp, stop working. It is especially useful if the datacenter is remote.
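
A loopback rehearsal of that last-resort transfer (in real use the two nc commands run on different hosts; the port number and file names here are arbitrary, and flag syntax differs between netcat builds, some of which want nc -l 9000 instead of nc -l -p 9000):

```shell
# Receiver: write whatever arrives on port 9000 into a tar file.
nc -l -p 9000 > /tmp/etc-copy.tar 2>/dev/null &
sleep 1
# Sender: stream a tar archive straight over the socket; timeout
# guards against netcat variants that never close the connection.
tar cf - -C /etc hostname 2>/dev/null | timeout 5 nc 127.0.0.1 9000 2>/dev/null
wait
# Inspect what arrived on the receiving side:
tar tf /tmp/etc-copy.tar 2>/dev/null || true
```

The receiver must be started first; anything the sender pipes into the socket lands in the file on the other side, with no authentication or encryption whatsoever, so treat it strictly as an emergency tool on a trusted network.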

Dr. Nikolai Bezroukov



Old News ;-)

[Nov 01, 2011] What TCP/IP ports does iLO 3 use?

Aug 10, 2010 | HP Communities

You can look up what ports are used via the iLO 3 web interface. Expand the "Administration" menu on the left, then click on the "Access Settings" link. That screen will tell you the ports used by the various services.

Here are the defaults:
SSH: 22
Web (non-SSL): 80
SSL: 443
IPMI-over-LAN: 623
Remote Console: 17990
Virtual Media: 17988

You might also need to enable other ports if you're using DHCP, DNS, SNTP, SNMP, and/or LDAP from iLO.

[Nov 03, 2011] The Stallman Dialogues

[Nov 05, 2011] Steve Keen Harvard Starts its Own PAECON Against Mankiw

Neoclassicals in economics are hacks (or financial oligarchy prostitutes, if you wish) who “electively pick research to suite their agendas”–and this includes ignoring critical literature generated even when it comes from leading lights within neoclassical economics. Is the situation with teaching CS radically different? I think not...
November 3, 2011 | naked capitalism

aletheia33:

here is the full text of the economics 10 walkout students’ letter to mankiw: ______________________________

Wednesday November 2, 2011

Dear Professor Mankiw—

Today, we are walking out of your class, Economics 10, in order to express our discontent with the bias inherent in this introductory economics course. We are deeply concerned about the way that this bias affects students, the University, and our greater society.

As Harvard undergraduates, we enrolled in Economics 10 hoping to gain a broad and introductory foundation of economic theory that would assist us in our various intellectual pursuits and diverse disciplines, which range from Economics, to Government, to Environmental Sciences and Public Policy, and beyond. Instead, we found a course that espouses a specific—and limited—view of economics that we believe perpetuates problematic and inefficient systems of economic inequality in our society today.

A legitimate academic study of economics must include a critical discussion of both the benefits and flaws of different economic simplifying models. As your class does not include primary sources and rarely features articles from academic journals, we have very little access to alternative approaches to economics. There is no justification for presenting Adam Smith’s economic theories as more fundamental or basic than, for example, Keynesian theory.

Care in presenting an unbiased perspective on economics is particularly important for an introductory course of 700 students that nominally provides a sound foundation for further study in economics. Many Harvard students do not have the ability to opt out of Economics 10. This class is required for Economics and Environmental Science and Public Policy concentrators, while Social Studies concentrators must take an introductory economics course—and the only other eligible class, Professor Steven Margolin’s class Critical Perspectives on Economics, is only offered every other year (and not this year). Many other students simply desire an analytic understanding of economics as part of a quality liberal arts education. Furthermore, Economics 10 makes it difficult for subsequent economics courses to teach effectively as it offers only one heavily skewed perspective rather than a solid grounding on which other courses can expand. Students should not be expected to avoid this class—or the whole discipline of economics—as a method of expressing discontent.

Harvard graduates play major roles in the financial institutions and in shaping public policy around the world. If Harvard fails to equip its students with a broad and critical understanding of economics, their actions are likely to harm the global financial system. The last five years of economic turmoil have been proof enough of this.

We are walking out today to join a Boston-wide march protesting the corporatization of higher education as part of the global Occupy movement. Since the biased nature of Economics 10 contributes to and symbolizes the increasing economic inequality in America, we are walking out of your class today both to protest your inadequate discussion of basic economic theory and to lend our support to a movement that is changing American discourse on economic injustice. Professor Mankiw, we ask that you take our concerns and our walk-out seriously.

Sincerely,

Concerned students of Economics 10

Selected comments

Richard Kline:

So Memory, exactly. Mankiw’s synthesis is political propaganda designed to socialize candidates for the ruling class in what their acceptable worldview is to be, nothing more. Analysis would interfere with that, as would contrast-and-compare exercises; thus, both are omitted. What Mankiw is doing is political indoctrination, his snuffling remark in his response to the walk out that “I leave my politics at the door” when teaching notwithstanding. Maybe he does—since all he shows _is_ a political perspective, he can leave ‘I’ statements out and simply point, disingenuously, at this syllabus. —And it wouldn’t matter who was teaching this class, no; the function is exactly the same. Kind of like catechism, really . . . .

Lloyd Blankstein:

Mankiw is world famous economist. Steve Keen is only a nameless blogger, who teaches economics in his spare time. I want to stay on the side of the titans – Mankiw, Summers, Krugman, Greenspan, Bernanke. The only purpose of economics is to justify and legalize theft. If Steve Keen cannot do that, he is a BAD economist. Why listen to him?

monte_cristo:

I studied 101 economics in 1981, I seem to recall. The analytic component was easy, it’s like arithmetic, maths, logic, you know we have the axioms and we proceed. The ‘descriptive component’ ... basically unions versus management I choked on. I had a sweet lecturer. He actually held to a marxist analysis: roi/s the logic etc etc. It made a lot more sense than the required ‘understanding’ that the course required. Right at this moment I find myself in awesome agreement with the Harvard protesters.

Mankiw, in my opinion (and who am I?), is just a semi-second-rate academic that rode the wave with his textbook. Nearly any fool can do it given the auspicious historical circumstances. That doesn’t make it right, in fact it just makes him cheap, opportunistic, and confused.

Just to expand this criticism I have an interest in clinical psychology. Any idiot that believed Freud after he sold himself out when his m/c//r/c audience kicked up when he told them that their children's problems were the result of their abuse needs to be hanging their heads in shame, as should have Freud. Freud was brilliant but a disgrace to himself.

It is a beautiful analog this moment. You either get this or you don’t. My message is “Screw the rich.” Analogue: Screw the abusers.

Skippy:

@Brito,

I’m still waiting for your reply, how do you model trustworthiness, see below:

http://ageconsearch.umn.edu/handle/104522

Look, if the model can not describe the human condition, then all you are building is a behavioral template, which you then_shove down_upon_humanity. All Mankiw is doing is reinforcing his worldview cough neoliberal see:

David Harvey notes that the system of embedded liberalism began to break down towards the end of the 1960s. The 1970s were defined by an increased accumulation of capital, unemployment, inflation (or stagflation as it was dubbed), and a variety of fiscal crises. He notes that “the embedded liberalism that had delivered high rates of growth to at least the advanced capitalist countries after 1945 was clearly exhausted and no longer working.”[10] A number of theories concerning new systems began to develop, which led to extensive debate between those who advocated “social democracy and central planning on the one hand” and those “concerned with liberating corporate and business power and re-establishing market freedoms” on the other.

Harvey notes that, by 1980, the latter group had emerged as the leader, advocating and creating a global economic system that would become known as neoliberalism.[11]

———-

Skippy…Humanity is the horse pulling the cart. Neoliberalism is nothing more than a glazed apathetic leash to ones own chosen addictions (see economic menu), to justify egregious wealth and power concentration. I find this state deeply wrong. And like Mr. Kline pointed out, one day you_may_wake up and can’t pull your head out of the bucket, drowning in recognition of past deeds or even worse, look in the mirror and see Dick Cheney and be OK with it.

http://video.google.com/videoplay?docid=-925270800873130790

Steve Keen:

“Steve Keen is a hack who selectively picks research to suite his agenda”.

Cute Brito! From my reading of economics, most neoclassicals are themselves hacks who “electively pick research to suite their agendas”–and this includes ignoring critical literature generated even when it comes from leading lights within neoclassical economics.

Here’s a few “selectively picked research papers” on IS-LM and DSGE modelling that I’d like to see you prove are wrong and should be ignored:

Hicks, J.R., (1980). ‘IS-LM: an explanation’, Journal of Post Keynesian Economics, 3 (2): 139–54:

“I accordingly conclude that the only way in which IS-LM analysis usefully survives – as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate”

Solow, R. M. (2001). From Neoclassical Growth Theory to New Classical Macroeconomics. Advances in Macroeconomic Theory. J. H. Drèze. New York, Palgrave.

[N]ow … if you pick up an article today with the words ‘business cycle’ in the title, there is a fairly high probability that its basic theoretical orientation will be what is called ‘real business cycle theory’ and the underlying model will be … a slightly dressed up version of the neoclasssical growth model. The question I want to circle around is: how did that happen? (Solow 2001, p. 19)

Solow, R. M. (2003). Dumb and Dumber in Macroeconomics. Festschrift for Joe Stiglitz. Columbia University.

The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages. How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up? My impression is that this approach (which seems now to be the mainstream, and certainly dominates the journals, if not the workaday world of macroeconomics) has had no empirical success; but that is not the point here. I start from the presumption that we want macroeconomics to account for the occasional aggregative pathologies that beset modern capitalist economies, like recessions, intervals of stagnation, inflation, “stagflation,” not to mention negative pathologies like unusually good times. A model that rules out pathologies by definition is unlikely to help. (Solow 2003, p. 1)

Solow, R. M. (2007). “The last 50 years in growth theory and the next 10.” Oxford Review of Economic Policy 23(1): 3–14.

the main argument for this modeling strategy has been a more aesthetic one: its virtue is said to be that it is compatible with general equilibrium theory, and thus it is superior to ad hoc descriptive models that are not related to ‘deep’ structural parameters. The preferred nickname for this class of models is ‘DSGE’ (dynamic stochastic general equilibrium). I think that this argument is fundamentally misconceived… The cover story about ‘microfoundations’ can in no way justify recourse to the narrow representative-agent construct…

The nature of the sleight-of-hand involved here can be made plain by an analogy. I tell you that I eat nothing but cabbage. You ask me why, and I reply portentously: I am a vegetarian! But vegetarianism is reason for a meatless diet; it cannot justify my extreme and unappetizing choice. Even in growth theory (let alone in short-run macroeconomics), reasonable ‘microfoundations’ do not demand implausibility; indeed, they should exclude implausibility. (Solow 2007, p. 8)

Lefty:

You hate the word neoliberal because of how much of a failure neoliberal policies have been. Yes, there is some disagreements amongst neoliberal economists, but there are many, many commonalities as well. Same goes with words like socialism. There is, or was, social democracy in Western Europe that was different than the socialism of the Nordic countries, which was different than Yugoslavian socialism which was different than Cuban socialism which was different than Maoist China. They were all different, sometimes radically different, but shared certain characteristics which made them “socialist”. You are running away from performance of neoliberal economics, not the label.

“further exposing how completely untrue your lies are about economists being rigidly right wing”

I have a degree in economics, which I mentioned. I am taking classes this fall. Never once had a non-neoclassical teacher. Not once. I wish I had a Yves Smith or a Steve Keen teaching me. I had to work hard to find economists like them. I DID hear lots of hostility to unions, regulation, non-market based solutions to environmental issues. Lots of pretending things about perfectly competitive markets, perfect information, all information being encoded in prices, preferences of all market participants being identical. Was taught how great “free trade” was by teachers who obviously never read Ricardo, never read the assumptions Ricardo articulated in defense of free trade that are radically different than the world we live in today. Again, neoclassical economics HAS dominated the profession and we know the types of ideas and theories neoclassical professors regurgitate.

One last time, the heads of institutions like the ECB, IMF, the US Treasury, the World Bank, the BIS, they all were taught this type of economics. They all have roughly the same ideas, even after their policies have caused economic collapse. I see no evidence they’ve learned any lessons at all. They’re willing to force countries to socialize the losses of their financial puppet masters in order to save their pet theories. Sorry, but its your type of mentality that makes me embarrassed to tell people close to me that I am an economist in training.

They, understandably, are skeptical. Many of my friends are politically astute and progressive, or whatever the term is. They ask, “aren’t you economists ruining Europe?” “Didn’t you economists deindustrialize the country, financialize the economy, cause wealth inequality to explode, private and governmental debt to balloon?”. Yep, and I have to explain that there are many schools of thought in economics and…by that point they have heard enough. They respect me, but not what I am studying. Economics wasn’t always such a joke. It used to have meaning, it was a powerful tool to make the world a better place. Neoclassical economists have ruined a wonderful field of study and they’ve caused lots of harm to people the world over. Go to bed!

Philip Pilkington:

“Nevertheless, economists are constantly disagreeing with each other, it is not a rigid cult, get over it.”

Here’s the deal, buddy. The accusation we make is this: a training in mainstream economics narrows a person’s perspectives so much that they simply cannot see outside the counterfactual and empirically unjustifiable claims the whole discipline (barring a small few) make.

You’re caught up in this too. That’s why what you think of as dissent is, to the rest of us, just pointless nonsense.

You say Marxism. There are different strains but most believe in the labour theory of value. If I debunk the LTV the WHOLE OF MARXISM GOES WITH IT. Likewise for neoclassical economics. There may be different currents, but if I debunk, say, equilibrium analysis when applied to complex systems constantly subject to entropy THE WHOLE OF NEOCLASSICAL ANALYSIS GOES WITH IT.

You’re already trapped, I’m afraid. You’ve swallowed the party line and you’ll never see beyond it. When people raise critiques you’ll react as you did above (“But dur-hur, then my ideas about inflation [which are derived from the same system of knowledge being criticised] don’t work… therefore you must be wrong”). Do you see the trappings here? It’s like a cult.

If I said to an evangelical that evolution is true, they’d respond by saying something like this (“But then my theory of creation doesn’t work… therefore you must be wrong”). It’s the same thing. Your ‘education’ has trapped you in a closed, self-referential system of knowledge. This is what the Harvard students protested. AND THEY ARE DAMN RIGHT.

Lefty:

“I don’t want to get into a debate about some boring discussion on some technical crap”

I do. I am saying that the assumptions that get mentioned continuously throughout an undergraduate and graduate education have no basis in reality. If you can’t prove they do then what exactly are you arguing?

“I’m absolutely certain that you look at certain models and reject the ones that do not fit into your ideology and come up with pseudo-scientific ad hoc justifications for doing so, I simply have no interest in that.”

Except when you’re the one doing it. You dismissed Dr. Keen’s work out of hand. You haven’t shown that you have even a passing knowledge of what he’s written. I DO have an ideology, as do you. I can at least admit it. I read those I disagree with however and try to keep an open mind. Neoclassical economics is very ideologically rigid and you know it. We both also know how non-neoclassical economists are treated by economics schools, and it has nothing to do with the soundness of their ideas.

“when it can almost always be shown empirically that it’s actually a result from massive levels of corruption and corporate capture, leading to policies certainly counter to what not only modern economics would suggest but even common sense.”

All of the empirical evidence shows this, huh? The US, New Zealand, Australia, most countries in Europe, they’ve all moved to the right on economic policy in recent decades. They have privatized services & resources, lowered individual and corporate taxes, deregulated finance, liberalized trade. Left wing, right wing and centrist parties have implemented these policies, in high-income, middle-income and lower-income countries alike. All are in horrible shape as a result, and it is all because of “corruption”? The ECB’s massive policy failures, too? All the empirical evidence shows this? Same with most countries in Latin America. Have many countries in the region seen growth increase, inequality decrease, access to basic services vastly improve, happiness with democracy increase (take a look at Latinobarometro polls during and after neoliberal governments took power), because they tackled corruption? Is it just a coincidence that they have largely turned their backs on neoliberal economic policies?

“just because you knowledge of postgraduate economics does not mean you will be able to solve the worlds problems”

Never said that, wasn’t my point. Economists, once again, almost without exception, have studied neoclassical economics. It has been their job to craft economic policy. Not to solve the world’s problems, to craft economic policy. Their policies have been miserable failures, for decades. In the developed, developing and underdeveloped world. Their job has been to draw up and implement economic policy and they have done a horrible job. We can pretend that something other than neoclassical and neoliberal economics is to blame if you’d like.

[Nov 06, 2011] Alienware M11x Gaming Laptop Details Dell

An interesting alternative to netbooks: same form factor, 8-hour battery life, 4.4 pounds.

[Nov 08, 2011] Some useful (or at least not harmful ;-) lecture notes devoted to quicksort


Paper submitted for the Linux200.nl conference, 9-10 Oct 2000 in Ede

The continuing story of Vim
by Bram Moolenaar

The development of Vim (Vi IMproved) started in 1988 as a small program for the Amiga, used by one person. It is now included with every Linux distribution and has been given an award for the best open-source text editor. This article will discuss the current status of Vim, placing it in the context of the past and the future.

Vi compatible

Vim started as a replacement for Vi on the Amiga. Being used to Vi on Unix systems, the author wanted to use this powerful editor on his newly obtained Amiga too. There was a program called "Stevie", which lacked many commands and contained bugs; but since the source code was available, it was possible to enhance the program. Gradually more Vi commands were added and problems fixed. Then new useful commands were added that Vi didn't have: multi-level undo, text formatting, multiple windows, etc. At that point it was renamed from "Vi IMitation" to "Vi IMproved".

But Vim still tries to be very Vi compatible, if that is what you want. For most commands you will not notice any difference between Vi and Vim. But some Vi commands work in a clumsy way and some may be considered a leftover from the old days of slow computers and teletypes. Here Vim gives the user a choice of doing it the old Vi way, or doing it in an improved Vim way.

For example, in Vi the "u" command toggles the text between the situation before and after a change. Vim offers multi-level undo. What commands to use to get to the multiple levels? One way would be to use the "." command to repeat the "u" command. Nvi follows this approach. But this is not Vi compatible: Typing "xxu." in Vi deletes two characters: The "u" undoes one "x" and the "." repeats one "x" again. In Nvi the "." repeats the undo, thus both "x" commands are undone and you end up with no change.

The author of Vim doesn't like these unexpected and obscure incompatibilities. Another solution would be to use another command to repeat the undo or redo. In Vim this is CTRL-R, "R" for "repeat". Thus "xxu^R" is used to undo both "x" commands. This is both Vi compatible and offers the multi-level undo feature. Still, typing CTRL-R requires using two fingers. Since undo is an often used function, it should be easy to type.

Many people prefer to repeat the "u" command to undo more. Then CTRL-R is used to redo the undone commands. Thus "u" goes backwards in time and CTRL-R forward again. Since this is not compatible with Vi, it has to be switched on with an option.

What a user prefers often depends on his previous experience. If he has used Vi for many years, his fingers are trained to hit "u" and expect the last change to be toggled. But people who start with Vim find it strange that "u" toggles and prefer it to always perform an undo. For these matters of user preference Vim offers an option.

In Vim you can set "nocompatible", to make Vim behave more nicely, but in a not fully Vi compatible way. If you want, you can carefully tune each compatibility aspect by adding flags to the 'cpoptions' option. This is a typical Vim choice: offer a good default to make most people happy, and add options to allow tuning the behavior for personal preferences.
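Put into a .vimrc, the options mentioned above look like this (a minimal sketch; the option names are Vim's own, the chosen values are just one possible preference):

```vim
" Use the improved Vim defaults rather than strict Vi behaviour.
set nocompatible
" But re-enable one Vi-compatible aspect: make "u" toggle the last change.
set cpoptions+=u
```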

The current version of Vim is very much Vi compatible. One of the last things that has been added was the Ex mode and the "Q" command. Not many people use Ex mode, it was added to be able to execute old Vi scripts. One thing that's still missing is the open mode. Since this is really only useful when you are using a very primitive terminal, hardly anyone will miss this - most Vi users don't even know what it is.

There are still a number of small incompatibilities to be solved - you could call these bugs. Work on these continues, but it's very likely that Vim already contains fewer bugs for Vi commands than Vi itself.

Programmer's aid

Many of the features that have been added to Vim over time are for programmers. That's not unexpected, since Vim is often used to edit programming languages and similar structured text, and the author himself does a lot of programming.

One of the first programming aids to be added was the "quickfix" feature. This was actually present in the Vi-like editor "Z" that came with the Amiga C compiler from Manx. Since it was very useful, it was added to Vim, using the "Z" editor as an example.

The "quickfix" feature allows the programmer to compile his code from within Vim and quickly fix the reported errors. Instead of making the user write down the line numbers and messages from the compiler, Vim parses the compiler output and takes the user directly to the location of the error, putting the cursor at the position where the error was reported. You can fix the problem and compile again with just a few commands. There are commands to jump to the next error, list all errors, etc. An option is used to specify how to parse the messages from the compiler, so that it can work with many different compilers.

"quickfix" also works with a command like "grep", which outputs lines with a file name and line number. This can be used to search for a variable and all places where it's used in the code, comments and documentation. Not only does this reduce the time needed to make changes, it also minimizes the risk of missing a location.

When editing programs you will type text once and read it many times. Therefore it is very important to easily recognize the structure of the code. Since everybody uses his own style of coding, and not everyone pays enough attention to the layout, the code may take time to understand. Highlighting items in the text can be of help here. For example, by making all comments blue it is easy to spot a short statement in between long comments. It's also easier to recognize the structure of the file when quickly paging through it.

Highlighting keywords can help spot errors. The author has a tendency to type "#defined" instead of "#define". With highlighting switched on, the first one stays black, while the second one becomes brown. That makes it easy to see this mistake when it happens. Unmatched closing parentheses can be marked as an error, with a red background. That is a great help when changing a complicated "if" statement or Lisp code.

Highlighting is also useful for the last used search pattern. For example, when searching for the "idx" variable in a function, all its occurrences in the code will get a yellow background. That makes it very easy to see where it is used and check where its value is changed.

The syntax highlighting is completely adjustable. Everybody can add his own language if it's not already included with the Vim distribution. But since there are syntax files for about 200 languages now, mostly you just have to switch the syntax highlighting on and your files are coloured. Thanks to all the people who have submitted and are maintaining syntax files for everybody to use.

Folding

In 1998 Vim 5.0 was released. The question was: What next? A survey was held to ask Vim users which features they would like to see added to Vim. This is the resulting top ten:

  1. add folding (display only a selected part of the text) (*)
  2. vertically split windows (side-by-side) (*)
  3. add configurable auto-indenting for many languages (like 'cindent') (*)
  4. fix all problems, big and small; make Vim more robust (+)
  5. add Perl compatible search pattern
  6. search patterns that cross line boundaries (*)
  7. improve syntax highlighting speed (*)
  8. improve syntax highlighting functionality (+)
  9. add a menu that lists all buffers (*)
  10. improve the overall performance (+)

The goal for Vim 6.0 was to implement a number of these items - at least the top three - and this has actually taken place. The items marked with (*) have been implemented in Vim 6.0. The items marked with (+) are on-going activities. The number one requested feature deserves more explanation.

Folding means that a range of lines is displayed as one line, with a text like "34 lines folded: get_arguments()", but the actual text is still there and can be seen by opening the fold. It is as if the text were on a long roll of paper, which can be folded to hide the contents of each chapter, so that you only see the chapter titles. This gives a very good overview of what a file contains. A number of large functions, occupying thousands of lines, can be viewed as a list of function names, one per line. You can move to the function you want to see and open the fold.

Adding folding was a lot of work. It has impact on all parts of Vim. The displaying of text is different, and many commands work differently when the cursor is in a fold, but it mostly works now. And there are actually several ways to use folding:

  1. Manually: use commands to create and delete folds. This can also be used in a function to fold your text in any way you like.
  2. On syntax: use the syntax highlighting mechanism to recognize different items in the text and define folds with it.
  3. On indent: the further a line is indented the deeper it will be folded.
  4. By expression: define an expression that tells how deep a line is folded.
  5. By markers: put markers in the text to specify where a fold starts and ends.

Why so many different ways? Well, because the preferred way of folding depends on both the file you are editing and the desires of the user. Folding with markers is very nice to precisely define what a fold contains. For example, when defining a fold for each function, a comment in between functions could belong to the previous or the next function. But if you edit files in a version-controlled project, you are probably not allowed to add markers. Then you can use syntax folding, because it works for any file in a certain language, but doesn't allow you to change where a fold starts or ends. A compromise is to first define folds from the syntax and then manually adjust them. But that's more work.
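The five ways map onto values of Vim's 'foldmethod' option; a config sketch (only one value is active at a time):

```vim
set foldmethod=manual   " 1. define folds with commands (zf, zd, ...)
set foldmethod=syntax   " 2. derive folds from syntax highlighting items
set foldmethod=indent   " 3. fold depth follows the indent
set foldmethod=expr     " 4. fold level computed by 'foldexpr'
set foldmethod=marker   " 5. folds between {{{ and }}} markers in the text
```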

Thus the way folding was implemented in Vim is very flexible, to be able to adjust to the desires of the user. Some related items have not been implemented yet, like storing the folding state of a file. This is planned to be added soon, before version 6.0 goes into beta testing.

Indenting

Giving a line the right indent can be a lot of work if you do it manually. The first step in doing this automatically is by setting the 'autoindent' option. Vi already had it. It simply indents a line by the same amount as the previous line. This still requires that you add space below an "if" statement and reduce space for the "else" statement.

Vim's 'cindent' option does a lot more. For C programs it indents just as you expect. And it can be tuned to follow many different indenting styles. It works so well that you can select a whole file and have Vim reindent it. The only place where manual correction is sometimes desired is for continuation lines and large "if" statements. But this only works for C code and similar languages like C++ and Java.

Only recently a new, flexible way of indenting has been added. It works by calling a user defined function that returns the preferred indent. The function can be as simple or as complex as the language requires. Since this feature is still new, only indenting for Vim scripts is currently included in the distribution. Hopefully people will write indent functions for many languages and submit them to be included in the distribution, so that you can indent your file without having to write an indent function yourself.

A disadvantage of using a user defined function is that it is interpreted, which can be a bit slow. A next step would be to compile the function in some way to be able to execute it faster. But since computers keep getting faster, and indenting manually is slow too, the current interpreted functions are very useful already.
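The three stages of indent support described above correspond to these options (the function name used with 'indentexpr' is hypothetical):

```vim
set autoindent                     " copy the indent of the previous line
set cindent                        " full C-style indenting, tunable via 'cinoptions'
setlocal indentexpr=GetMyIndent()  " hypothetical user-defined indent function
```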

[Nov 10, 2011] Adding and Removing SAN Disks from SUSE Device Manager

January 23, 2009 | Novell User Communities

Remove a disk

      echo 1 > /sys/block/sdX/device/delete
      echo 1 > /sys/block/sdY/device/delete	  

You should now have all traces removed, you can run multipath -ll and cat /proc/scsi/scsi to cross check. You can now remove the mapping from the SAN and delete the logical volume if required.
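A hedged sketch of scripting the per-path removal: generate the sysfs delete command for each underlying path device of the LUN being removed (sdr and sdav are placeholder names), review the output, and only then run it as root on the live system:

```shell
# Print, rather than execute, the delete command for each SAN path device.
# Device names here are placeholders; take the real ones from multipath -ll.
for dev in sdr sdav; do
  printf 'echo 1 > /sys/block/%s/device/delete\n' "$dev"
done
# echo 1 > /sys/block/sdr/device/delete
# echo 1 > /sys/block/sdav/device/delete
```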

[Nov 10, 2011] How to setup - use multipathing on SLES

SLES 9 information -- outdated

The boot scripts will only detect MPIO devices if the modules for the respective controllers are loaded at boot time. To achieve this, simply add the needed driver module to the variable INITRD_MODULES within the file /etc/sysconfig/kernel.

Example:

Your system contains a RAID controller that is accessed by the cciss driver and you are using ReiserFS as a filesystem. The MPIO devices will be connected to a Qlogic controller accessed by the driver qla2xxx, which is not yet configured to be used on this system. The mentioned entry within /etc/sysconfig/kernel will then probably look like this:

INITRD_MODULES="cciss reiserfs"

Using an editor, you would now change this entry:

INITRD_MODULES="cciss reiserfs qla2xxx"

When you have applied this change, you will need to recreate the INITRD on your system to reflect it. Simply run this command:

mkinitrd

When you are using GRUB as a boot manager, you do not need to make any further changes. Upon the next reboot the needed driver will be loaded within the INITRD. If you are using LILO as a boot manager, please remember to run it once to update the boot record.
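The INITRD_MODULES edit can be scripted; a sketch that works on a local copy of the file (on a real system you would edit /etc/sysconfig/kernel itself and then run mkinitrd):

```shell
# Work on a local copy so nothing on the system is touched.
printf 'INITRD_MODULES="cciss reiserfs"\n' > kernel.copy
# Append qla2xxx to the quoted module list.
sed -i 's/^INITRD_MODULES="\(.*\)"$/INITRD_MODULES="\1 qla2xxx"/' kernel.copy
cat kernel.copy
# INITRD_MODULES="cciss reiserfs qla2xxx"
```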

  • Configuring multipath-tools

    If your system is one of those listed above, no further configuration should be required.

    You might otherwise have to create /etc/multipath.conf (see the examples under /usr/share/doc/packages/multipath-tools/) and add an appropriate devices entry for your storage subsystem.

    One particularly interesting option in /etc/multipath.conf is "polling_interval", which defines how frequently the paths are checked.

    Alternatively, you might choose to blacklist certain devices which you do not want multipath-tools to scan.

    You can then run:

    multipath -v2 -d

    to perform a 'dry-run' with this configuration. This will only scan the devices and print what the setup would look like.

    The output will look similar to:

    3600601607cf30e00184589a37a31d911
    [size=127 GB][features="0"][hwhandler="1 emc"]
    \_ round-robin 0 [first]
      \_ 1:0:1:2 sdav 66:240  [ready ]
      \_ 0:0:1:2 sdr  65:16   [ready ]
    \_ round-robin 0
      \_ 1:0:0:2 sdag 66:0    [ready ]
      \_ 0:0:0:2 sdc  8:32    [ready ]
    

    showing you the name of the MPIO device, its size, the features and hardware handlers involved, as well as the (in this case, two) priority groups (PG). For each PG, it shows whether it is the first (highest priority) one, the scheduling policy used to balance IO within the group, and the paths contained within the PG. For each path, its physical address (host:bus:target:lun), device nodename and major:minor number is shown, and of course whether the path is currently active or not.
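The per-path lines can also be picked out mechanically; a sketch that extracts the device node names from saved output (the sample map above, written to a file here), identifying path lines by the host:bus:target:lun address in the second field:

```shell
# Save sample `multipath -v2 -d`-style output to a file.
cat > mp.out <<'EOF'
3600601607cf30e00184589a37a31d911
[size=127 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [first]
  \_ 1:0:1:2 sdav 66:240  [ready ]
  \_ 0:0:1:2 sdr  65:16   [ready ]
\_ round-robin 0
  \_ 1:0:0:2 sdag 66:0    [ready ]
  \_ 0:0:0:2 sdc  8:32    [ready ]
EOF
# Print the device node (third field) of every path line; path lines are
# recognized by the H:B:T:L address in the second field.
awk '$2 ~ /^[0-9]+:[0-9]+:[0-9]+:[0-9]+$/ { print $3 }' mp.out
# sdav sdr sdag sdc, one per line
```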

    Paths are grouped into priority groups; there's always just one priority group in active use. To model an active/active configuration, all paths end up in the same group; to model active/passive, the paths which should not be active in parallel will be placed in several distinct priority groups. This normally happens completely automatically on device discovery.
     

  • Enabling the MPIO components

    Now run

    /etc/init.d/boot.multipath start
    /etc/init.d/multipathd start

    as user root. The multipath devices should now show up automatically under /dev/disk/by-name/; the default naming will be the WWN of the Logical Unit, which you can override via /etc/multipath.conf to suit your tastes.
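Overriding the WWN-based name is done with an alias entry in /etc/multipath.conf; a config fragment (the WWID is the one from the sample output above, the alias itself is just an example):

```
multipaths {
    multipath {
        wwid   3600601607cf30e00184589a37a31d911
        alias  sanvol0
    }
}
```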

    Run

    insserv boot.multipath multipathd

    to integrate the multipath setup into the boot sequence.

    From now on all access to the devices should go through the MPIO layer.
     

  • Querying MPIO status

    To query the current MPIO status, run

    multipath -l

    This will output the current status of the multipath maps in a format similar to the command already explained above:

    3600601607cf30e00184589a37a31d911
    [size=127 GB][features="0"][hwhandler="1 emc"]
    \_ round-robin 0 [active][first]
      \_ 1:0:1:2 sdav 66:240  [ready ][active]
      \_ 0:0:1:2 sdr  65:16   [ready ][active]
    \_ round-robin 0 [enabled]
      \_ 1:0:0:2 sdag 66:0    [ready ][active]
      \_ 0:0:0:2 sdc  8:32    [ready ][active]

    However, it includes additional information about which priority group is active, disabled or enabled, as well as for each path whether it is currently active or not.
     

  • Tuning the fail-over with specific HBAs

    HBA timeouts are typically setup for non-MPIO environments, where longer timeouts make sense - as the only alternative would be to error out the IO and propagate the error to the application. However, with MPIO, some faults (like cable failures) should be propagated upwards as fast as possible so that the MPIO layer can quickly take action and redirect the IO to another, healthy path.

    For the QLogic 2xxx family of HBAs, the following setting in /etc/modprobe.conf.local is thus recommended:

    options qla2xxx qlport_down_retry=1 ql2xfailover=0 ql2xretrycount=5
  • Managing IO in error situations

    In certain scenarios, where the driver, the HBA or the fabric experiences spurious errors, it is advisable to configure DM MPIO to queue all IO in case of errors leading to the loss of all paths, and never to propagate errors upwards.

    This can be achieved by setting

    defaults {
    		default_features "1 queue_if_no_path"
    }

    in /etc/multipath.conf.

    As this will lead to IO being queued forever, unless a path is reinstated, make sure that multipathd is running and works for your scenario. Otherwise, IO might be stalled forever on the affected MPIO device, until reboot or until you manually issue a

    dmsetup message 3600601607cf30e00184589a37a31d911 0 fail_if_no_path				

    (substituting the correct map name), which will immediately cause all queued IO to fail. You can reactivate the queue_if_no_path feature by issuing

    dmsetup message 3600601607cf30e00184589a37a31d911 0 queue_if_no_path
    

    You can also use these two commands to switch between both modes for testing, before committing the command to your /etc/multipath.conf.


  • [Nov 11, 2011] The Dangers that Lurk Behind Shadow IT by George Spafford

    "the threat of critical information systems that have been created and are maintained outside of the formal IT organization."
    February 4, 2004 | Datamation

    While IT managers and even business leaders may worry about what's going on, and what's not going on, in the IT department, when you move that work outside of the building, it's enough to give most managers fits. Most public companies these days are very concerned about meeting the requirements of the Sarbanes-Oxley Act of 2002 (SOX). Section 404 of the act mandates that management have effective internal controls and requires external auditors to attest to the effectiveness of the controls.

    This, of course, is creating an abundance of antacid moments in boardrooms all over the United States and those fears are being transferred to the IT groups as well because the financial systems and key operating systems that run the businesses are all under the spotlight. As a result, groups are hiring consultants by the bus load to come in and help put appropriate policies and procedures in place.

    The problem is, however, that some groups are overlooking the threat of critical information systems that have been created and are maintained outside of the formal IT organization.

    The term ''Shadow IT'' refers to groups providing information technology solutions outside of the formal IT organization. They exist because business groups think they can do things cheaper and/or better than the formal IT group, because the formal group can't meet their service requirements, or because the formal group is forced to develop generic applications that try to meet everyone's needs while controlling costs, rather than customizing applications to the needs of individual business units.

    Whatever the exact reason, the basic premise is the same -- the business units aren't satisfied with what is being given to them. For example, if manufacturing is under extreme pressure to lower costs while increasing throughput, they may have need for special RFID software. But when they approach the formal IT group and it turns out there are no plans to develop the necessary software, then that may force the business unit to write software outside of IT, for example, or source it from a third party without IT.

    In this day and age, there are some very significant issues facing companies that choose to allow Shadow IT groups to exist.

    One of the first issues to recognize is poor resource utilization. By having unofficial IT resources scattered through business units, there can't be a cohesive effort to prioritize and schedule work across all of them. That means Bob in accounting may have a four-month backlog on IT related requests for his group, while Sharon in sales may have capacity but be unaware, or for political reasons, be unable to help Bob.

    Another issue is lack of proper processes. Sometimes the Shadow IT people have formal IT process training. Many times they do not. As needs popped up, they figured out how to respond through trial and error, turning to friends or leafing through manuals. As a result, proper requirements definition, documentation, testing and change control are lacking. Even IT professionals have been known to let proper processes slip due to pressures from business, let alone when managed by people who may fail to see the value of the processes.

    Lack of controls is yet another problem. Proper security and operational controls are crucial now. It's one thing to implement proper controls over formal IT systems and personnel. It's far, far harder to try and retrofit controls over systems that were ill-designed to begin with. It's far better to design quality, security and controls into a system than to try and inspect them in or add the necessary functionality later. Sometimes, it is virtually impossible to do it without a ground-up redesign of the software or system.

    And then there's the simple matter of mistakes.

    People may have the best intentions in the world when they write a critical application or design a key system. However, simple mistakes can and do happen to everyone. Unless proper design, testing and monitoring processes are in place, the total risks to the organization increase.

    To illustrate, I recall a very capable gentleman outside of IT who wrote a reporting application for billing. He thought the SQL command captured all of the billing data. However, since he wasn't a SQL expert and did not methodically test the application, it turned out later that the vital report missed the first and last day of every month.
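The boundary mistake in that anecdote is easy to reproduce outside SQL; a hedged illustration with made-up data, contrasting strict and inclusive date comparisons:

```shell
# Three billing rows: first day, mid-month, last day (made-up data).
printf '2004-01-01 100\n2004-01-15 200\n2004-01-31 300\n' > billing.txt
# Buggy filter: strict < and > silently drop the first and last day.
awk '$1 > "2004-01-01" && $1 < "2004-01-31" { s += $2 } END { print s }' billing.txt
# prints 200
# Correct filter: inclusive comparisons keep the whole month.
awk '$1 >= "2004-01-01" && $1 <= "2004-01-31" { s += $2 } END { print s }' billing.txt
# prints 600
```

ISO dates sort lexicographically, which is why plain string comparison works here; the same off-by-a-day trap exists in SQL BETWEEN-style filters built with the wrong operators.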

    What Lurks in the Shadows

    It is naive to think that an official edict can stop Shadow IT work.

    As soon as budgets are cut or resources are constrained, forcing executives to look at alternatives to the in-house IT group, the environment becomes fertile for Shadow IT groups to appear.

    To address this, what is needed is a close relationship with the business units. IT must sit down with them and spend the time to learn about their troubles and what direction they want to take the business in. Addressing pain gets you in the door and helping them with strategic direction keeps you there.

    The intent is to use this information to develop IT plans, budgets and resourcing strategies necessary to achieve these goals. This alignment is essential for senior management to understand how monies invested in IT relate to the financial performance of the organization. However, don't stop with the planning. Be sure to regularly communicate with the business owners about what is going on, as well as communicating achievements, risks and opportunities to senior management.

    The existence of Shadow IT within an organization is symptomatic of a lack of alignment between business units and IT and, possibly, even senior management and IT. Shadow IT is, at best, a shortsighted strategy that may work well for a given business unit, but be detrimental for the organization overall.

    It is vital that IT continuously work with the business units and senior management to ensure that the formal IT team is capable of supporting business requirements and that there be clear understanding of the risks associated with bypassing the formal IT organization.

    While the phrase ''Align IT with business'' may be overused in the management literature almost to the point of cliché, the concepts that underlie it are timeless, and IT must ensure that alignment exists.

    [Nov 11, 2011] The Rise of Shadow IT By Hank Marquis

    Sep 19, 2006 | CIO Update

    The loss of competitive advantage from IT may not be entirely due to its commoditization. It is starting to become clear that at least some of the responsibility lies with business activities taking place outside of the control of IT. Today, business users and knowledge-workers create and modify their IT infrastructures using “plug-and-play” IT products. These commodity IT products are now so easy to use, cheap, and powerful that business users themselves can and do perform much of the work traditionally done by IT.

    But without the planning and the wider view into the ramifications of their actions that IT provides, this often has disastrous consequences. Forrester Research found 73% of respondents reported incidents and outages due to unplanned infrastructure modifications.

    Welcome to the gritty reality of commodity IT. Aside from the opportunity costs and operational losses resulting from this uncontrolled plug-and-play free-for-all, many companies are missing out on the competitive advantage potential that harnessing commodity IT delivers.

    Within this disturbing new reality lie both the seeds of competitive advantage and a viable model for 21st century IT. In the Summer 2006 issue of MIT Sloan Management Review, I proposed in “Finishing Off IT” that even though IT is now a commodity it can and does enable significant competitive advantage. Resource dependency creates complex relationships between consumers and providers.

    These interdependent relationships in turn produce organizational problems that require organizational solutions. Offered as a solution was the notion that management and organizational structure, not technology, hold the promise of sustainable competitive advantage from IT, and that manufacturing process control techniques hold a viable model for the future of IT.

    21st Century IT

    To visualize how a 21st century IT organization could look, it helps to consider the production and consumption of IT services as a manufacturing micro-economy.

    IT manufactures information processing, communication, and collaboration products that underpin nearly all business operations. Knowledge-workers consume these IT products in pursuit of business objectives using everything from simple emails to more complicated core activities like forecasts and audits.

    A deeper exploration of what actually occurs within the IT micro-economy helps to further clarify the issue. Based on real events I documented between December 2005 and July 2006, the following dramatization presents a composite of the experiences reported by a number of mid-to-senior IT managers.

    On the way to the office your Blackberry vibrates. It’s a message from your staff. Users on the east side have been tech-swapping again. You know how it goes: “I’ll trade you this color printer for your wide screen monitor.” You know this is going to raise flags with the auditors.

    You get to your office and there is a note from the service desk about that system outage on the west side. It turns out the system went down because its users bought some high-resolution scanners and connected them to the system themselves.

    You didn’t even know they had scanners until they called demanding support.

    Downtown, a group of users decided that to improve performance they needed to regularly transfer gigabytes of video from the main conference room uptown to a storage area network (SAN) they built on their own. As you suspected, these transfers were responsible for slowing down a business-critical application that has managers all over the company grumbling.

    An email from the PMO informs you of a new project that will require extra support staffing starting in two weeks; first you've heard of that. You look at the calendar and sigh—budget and staff reductions, increasing user counts, more audits, increased legal regulations, major new and unplanned applications, connectivity and collaboration requirements, and very powerful and unhappy customers to placate.

    So much for delivering the IT projects you did know about on-time and on-budget.

    This “bad behavior” by the business amplifies the already accelerating velocity of change facing IT whether in-sourced or out-sourced.

    The true nature of today's average IT environment is not pretty, and it’s not something most senior executives have fully grasped. It may also turn out to be a critical factor in obtaining competitive advantage from commodity IT.

    Rise of the Knowledge-Worker

    IT commoditization changes the balance of power between IT and the business, and within the business itself. Within the IT micro-economy of plug-and-play commodity IT, the consumer/supplier exchange relationship has shifted. This requires dramatic changes in thinking and management.

    Traditional wisdom holds that the consumer for IT services is a functional business unit—sales, marketing, and so on—but, today, the real consumers of IT services are ad-hoc teams of knowledge-workers spanning multiple locations, and crossing business unit and corporate boundaries.

    This shift in the exchange relationship has profound implications for the business and IT.

    The underlying cause is the unstoppable commoditization of IT as advances accelerate productivity: the ubiquitous availability of information and internet technology is enabling knowledge-workers to cross geographic and political boundaries, and now functional barriers as well.

    Called “Shadow IT,” they are the millions of knowledge-workers leaping traditional barriers and asserting themselves in ways that challenge traditional IT departments.

    Knowledge workers perform vital business functions like numerical analysis, reporting, data mining, collaboration, and research. They use databases, spreadsheets, software, off-the-shelf hardware, and other tools to build and manage sophisticated corporate information systems outside of the auspices and control of traditional IT.

    By creating and modifying IT functionality, knowledge-workers are in effect supplanting the traditional role of corporate IT. However, they do so in a management and process control vacuum.

    While the business can do these things thanks to the commoditization of IT, few executives ask whether they should, and fewer say they must not. Virtually none realize the impact or import. Instead, to the dismay of IT staff, most senior executives and most CIOs condone virtually any demand the business makes.

    This lack of control is responsible for many of the problems associated with IT today.

    While the IT center of gravity has irrefutably shifted to the knowledge-worker, knowledge-workers do not have the long-term vision or the awareness of dependencies and planning that IT traditionally provides.

    The business wonders why IT doesn’t get "it" and ponders outsourcing when instead they should be taking responsibility for their own IT usage. No product IT can buy, and no outsourced IT utility, can handle these and similar issues encountered in ever-increasing numbers by real IT organizations.

    Yet it is precisely this consumer/supplier shift, increasing dependence upon IT, and the product-oriented nature of commodity IT that gives companies the opportunity to leverage IT for competitive advantage. However, many senior executives have so far turned a blind eye to Shadow IT, implicitly condoning the bad behaviors previously described, and they are throwing away any advantage that IT can provide.

    New World Order

    This lack of management control over business IT consumption has a tremendous cost. It is partly responsible for loss of the competitive advantage that IT can and does deliver, and is directly responsible for many lost opportunities, increased costs, and service outages.

    Over time the erosion of perceived IT quality usually leads to outsourcing, which is increasingly seen as an incomplete solution at best, and a disaster at worst.

    In order to recover and expand upon the advantages promised by commodity IT, senior executives have to change their concepts of an IT department, the role of centralized control, and how knowledge workers should contribute. The issue is fundamentally one of management philosophy.

    The Nordstrom way promotes a customer-first management philosophy: management’s first commitment is to the customer, and the customer is always right. This accurately reflects the hands-off position taken by many senior executives with regard to out-of-control Shadow IT practices and bad business behavior.

    A better management philosophy for commoditized IT is the ‘Southwest’ way. In the Southwest way, the worker comes first. The customer is not always right, and Southwest has been known to ask misbehaving customers to fly another airline.

    Management’s first concern is the worker, because they know that workers following sound processes hold the keys to customer satisfaction, and in turn, competitive advantage.

    Making the Southwest model work for 21st century IT requires a more comprehensive view of what constitutes an IT organization, a view that extends well past the borders of what most leaders consider IT.

    Shifting Demographics

    The rising sophistication and expectations of knowledge workers results in divergence in perceived operational goals between IT and the business—an indicator of task-uncertainty and a key contingency within structural contingency theory.

    These changing demographics give new urgency to the need for coordination of knowledge-workers and IT, yet management is trying to centralize IT spend and control via the CIO role.

    Instead of embracing Shadow IT, CIOs are trying to shut it down. Consider instant messaging (IM), an application many knowledge workers consider critical. IT's approach to IM is reminiscent of the early days of the Internet.

    Instead of realizing the job of IT is to support the needs of knowledge-workers, most IT organizations are trying to stamp out IM—just as they tried to restrict and eliminate Internet access. How will traditional IT respond to Wikis and blogs as corporate IT tools in the future?

    The Corporate Executive Board projects that the percentage of IT spend under central control will grow from 50% in 2002 to 95% in 2006, but this does not take into account the knowledge-workers of Shadow IT.

    A study by Booz Allen Hamilton found that shadow IT personnel can equal as much as 80% of the official IT staff. Clearly, despite the best efforts of senior leaders and IT, the business stubbornly refuses to succumb to centralized IT control.

    The problem with the current direction of the CIO role is that it typically has responsibility to support the business without the authority to control the business: a classic management mistake, and one that leads to the aforementioned dilemmas.

    The lure of commodity IT is great. Since shadow IT is a direct result of commoditized IT and resource dependency, it also demonstrates that neither corporate IT nor IT utilities are delivering the services knowledge workers require.

    However, most IT leaders do not understand the strategic contingencies within the commoditized IT micro-economy. They don’t know their marketplace, and they don’t know who their customer is. In effect, IT is manufacturing the wrong products for the wrong market. IT doesn’t get it either.

    [Nov 11, 2011] YouTube video

    [Nov 12, 2011] The Tetris God - CollegeHumor Video

    [Nov 13, 2011] Quicksort killer

    What is the time complexity of quicksort? The answer that first pops into my head is O(N log N). That answer is only partly right: the worst case is in fact O(N²). However, since very few inputs take anywhere near that long, a reasonable quicksort implementation will almost never encounter the quadratic case in real life.

    I came across a very cool paper that describes how to easily defeat just about any quicksort implementation. The paper describes a simple comparer that decides ordering of elements lazily as the sort executes, and arranges the order so that the sort takes quadratic time. This works even if the quicksort is randomized! Furthermore, if the quicksort is deterministic (not randomized), this algorithm also reveals the input which reliably triggers quadratic behavior for this particular quicksort implementation.
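The adversary the paper describes can be sketched in a few lines. This is a rough reimplementation of McIlroy's "killer adversary" idea, not the paper's own code, and all names here (`build_killer`, `quicksort`, etc.) are mine; since Python's built-in sort is not a quicksort, a deliberately naive quicksort is included just to demonstrate the blow-up.

```python
def build_killer(n):
    """Adversarial comparator: keys start undecided ("gas") and are
    frozen to concrete values only when a comparison forces a commitment."""
    gas = n                      # acts as +infinity for undecided keys
    val = [gas] * n              # lazily revealed keys; we sort indices 0..n-1
    state = {"solid": 0, "candidate": 0, "ncmp": 0}

    def freeze(i):
        val[i] = state["solid"]
        state["solid"] += 1

    def cmp(a, b):
        state["ncmp"] += 1
        if val[a] == gas and val[b] == gas:
            # both undecided: freeze whichever looks like the current pivot
            freeze(a if a == state["candidate"] else b)
        if val[a] == gas:
            state["candidate"] = a
        elif val[b] == gas:
            state["candidate"] = b
        return val[a] - val[b]

    return cmp, val, state

def quicksort(xs, cmp):
    """Plain first-element-pivot quicksort, for demonstration only."""
    if len(xs) <= 1:
        return xs
    pivot, left, right = xs[0], [], []
    for x in xs[1:]:
        (left if cmp(x, pivot) < 0 else right).append(x)
    return quicksort(left, cmp) + [pivot] + quicksort(right, cmp)

n = 64
cmp, val, state = build_killer(n)
order = quicksort(list(range(n)), cmp)
# the adversary drove the sort quadratic: far more than the ~380
# comparisons an O(n log n) sort would need for 64 elements
print(state["ncmp"])
```

After the sort finishes, `val` holds a concrete input that reliably triggers the quadratic behavior for this particular quicksort, which is exactly the second property the paper mentions.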

    [Nov 13, 2011] Sorting Algorithms - QuickSort Tutorial, Example, and Java code

    [Nov 16, 2011] DDR3 1600 CAS 9 vs. DDR3 1333 CAS 7

    tomshardware.com

    2133 MHz on any Sandy Bridge is hit or miss, and by miss I mean BSODs. Yes, you can raise the CAS timings to improve stability, e.g. 9-11-9 -> 11-11-11, which is a real JEDEC standard, but that defeats the frequency gain. I find RAM manufacturers try to hit the magical CAS 9, but the CAS/frequency problems warrant a careful look at their motivation. IMO - don't run 2133 MHz on SB; maybe 1866 MHz with a kit that offers low, tight CAS timings.

    I generally recommend 4GB density/stick 1600 MHz CAS 8-8-8, and in the beginning I was an 1866 'pusher' but I was getting some blow-back.

    Another nice Article -> http://www.tomshardware.com/review [...] 778-8.html

    The 'Wall': 99% of these posts are about gaming, so there are a few things to consider:

    1. Stability
    2. Performance


    Most people aren't multi-tasking freaks; they're gamers, and sometimes renderers, who don't want a BSOD 2 hours into their games or CAD/rendering work. 1600 MHz CAS 8/9 is the wall for the gamers, and GBs of RAM for the renderers. There is more than one article to substantiate the benchmarks.

    I run 2000MHz on my OC 980X {BCLK + CPU Multiplier} where I can adjust my BCLK to achieve greater stability, those days are gone with Intel, I'd guess for good.
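The frequency-vs-CAS tradeoff in the thread's title can be put into numbers. The sketch below is my own back-of-the-envelope arithmetic, not part of the original post: absolute CAS latency in nanoseconds is the CAS cycle count divided by the memory clock, where the memory clock is half the DDR transfer rate.

```python
def cas_latency_ns(transfer_rate_mts, cas):
    """Absolute CAS latency in nanoseconds.
    transfer_rate_mts: DDR transfer rate in MT/s (e.g. 1600 for DDR3-1600);
    the actual memory clock is half of that."""
    clock_mhz = transfer_rate_mts / 2.0
    return cas / clock_mhz * 1000.0

print(cas_latency_ns(1600, 9))   # DDR3-1600 CL9
print(cas_latency_ns(1333, 7))   # DDR3-1333 CL7
```

By this measure the two kits in the title are close: DDR3-1600 CL9 works out to 11.25 ns and DDR3-1333 CL7 to about 10.5 ns, which is why a lower-clocked kit with tight timings can keep up with a higher-clocked one.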

    [Nov 16, 2011] iBUYPOWER Gamer Power 913i Desktop PC Intel Core i3 2120(3.30GHz) 8GB DDR3 500GB HDD Capacity Intel HD Graphics 2000 Windows 7 Home Premium 64-Bit

    Pretty amazing price ($399) on Black Friday. It can't be replicated if you assemble the same desktop from parts.
    Newegg.com


    [Nov 16, 2011] Intel Core i5-2500 3.3 GHz 6 MB Cache Socket LGA1155

    $209 as of Nov 16, 2011
    Amazon.com


    [Nov 16, 2011] Kingston HyperX DDR3 3GB 2GHz Memory Kit KHX16000D3K3-3GX

    When Intel introduced the Core i7 processor they raised the memory stakes by implementing triple-channel memory capability. Instead of two matched pairs of memory sticks, the new X58 motherboards can use matched memory sticks in sets of three. Kingston, known for their acclaimed HyperX memory line, have released the Kingston HyperX DDR3 2GHz triple-channel memory kit to provide performance memory for the Intel Core i7 platform.

    One of the performance requirements for Core i7 machines is that system memory should not run higher than 1.65V, to prevent possible damage to the CPU. The kit we are looking at today is one of the fastest models available and runs at 2GHz / PC3-16000. The Kingston HyperX PC3-16000 kit runs at 1.5V by default, and set to 1.65V these sticks easily run at CL9-9-9-27 timings.

    Some X58 motherboards feature Intel Extreme Memory Profiles (XMP), a high-performance DDR3 memory overclocking tool. Users can take advantage of this tool by making simple adjustments in the BIOS. Even novice overclockers can take their Core i7 to the next level in no time with the Kingston HyperX PC3-16000 kit and an XMP-supported motherboard.

    If you have purchased Kingston memory in the past, then the Kingston HyperX PC3-16000 kit packaging will be very familiar. The memory sticks are nestled in a clamshell package with the modules sitting in three separate slots and covered by a plastic cover. Sealing the package is a Kingston sticker which lists the model number, memory size, timings and speed.

    [Nov 16, 2011] FrontPage options have a setting for tag autocompletion.

    In many cases this is unnecessary and can be unchecked.

    [Nov 16, 2011] PassMark Intel vs AMD CPU Benchmarks - High End

    Intel Core i7-2700K @ 3.50GHz

    Price and performance details for the Intel Core i7-2700K @ 3.50GHz can be found below. This is made using thousands of PerformanceTest benchmark results and is updated daily.

    [Nov 16, 2011] SpecCPU results

    The difference in SpecCPU results is minimal, while the difference in price is about 20%: the i5-2500 is $209 at Newegg vs. a best price of $249 for the i7-2700.
    Fujitsu CELSIUS W410 results (columns: Parallel, Cores, Chips, Cores/Chip, Threads/Core, Base, Peak):

    Intel Core i3-2100:  Yes  2  1  2  1  36.8  38.1
    Intel Core i5-2500:  Yes  4  1  4  1  42.7  44.5
    Intel Core i7-2600:  Yes  4  1  4  1  44.6  46.4
    Intel Core i5-2400:  Yes  4  1  4  1  40.6  42.2

    [Nov 16, 2011] Ten years of Windows XP how longevity became a curse by Peter Bright

    Not sure about the curse, but XP really has had a tremendous ride... 
    arstechnica.com

    Windows XP's retail release was October 25, 2001, ten years ago today. Though no longer readily available to buy, it continues to cast a long shadow over the PC industry: even now, a slim majority of desktop users are still using the operating system.

    ...For home users using Windows 95-family operating systems, Windows XP had much more to offer, thanks to its substantially greater stability and security, especially once Service Pack 2 was released.

    ...Over the course of its life, Microsoft made Windows XP a much better operating system. Service Pack 2, released in 2004, was a major overhaul of the operating system. It made the software better able to handle modern systems, with improved WiFi support and a native Bluetooth stack, and made it far more secure. The firewall was enabled by default, the bundled Internet Explorer 6 gained the "gold bar" popup blocker and ActiveX security feature, and for hardware that supported it, Data Execution Prevention made it more difficult to exploit software flaws.

    ...Ten years is a good run for any operating system, but it really is time to move on. Windows 7 is more than just a solid replacement: it is a better piece of software, and it's a much better match for the software and hardware of today.

    [Nov 16, 2011] The 40th birthday of the first microprocessor, the Intel 4004

    Forty years ago today, electronics and semiconductor trade newspaper Electronic News ran an advertisement for a new kind of chip. The Intel 4004, a $60 chip in a 16-pin dual in-line package, was an entire CPU packed onto a single integrated circuit (IC).

    At a bare minimum, a CPU is an instruction decoder and an arithmetic logic unit (ALU); the decoder reads instructions from memory and directs the ALU to perform appropriate arithmetic. Prior CPUs were made up of multiple small ICs of a few dozen or hundred transistors (and before that, individual transistors or valves) wired up together to form a complete "CPU." The 4004 integrated the different CPU components into one 2,300-transistor chip.
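To make that "decoder plus ALU" bare minimum concrete, here is a toy fetch-decode-execute loop. The opcodes and memory layout are invented purely for illustration and bear no relation to the 4004's actual instruction set.

```python
def run(program):
    """Toy CPU: fetch an (opcode, argument) pair from memory, decode the
    opcode, and let a tiny 'ALU' do the arithmetic on an accumulator."""
    acc, pc, mem = 0, 0, list(program)
    while True:
        op, arg = mem[pc], mem[pc + 1]   # fetch
        pc += 2
        if op == "LOAD":                  # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg                    # the ALU part: arithmetic
        elif op == "SUB":
            acc -= arg
        elif op == "HALT":
            return acc

print(run(["LOAD", 6, "ADD", 3, "SUB", 2, "HALT", 0]))  # 7
```

Everything beyond this loop (registers, caches, pipelines) is elaboration; the 4004's achievement was squeezing exactly this core onto a single chip.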

    4004 wasn't just a new direction for the computer industry; it was also a new direction for Intel. Since its founding in 1968, Intel had been a memory company, making various kinds of RAM and boasting some of the fastest and highest-density memory in the industry. It wasn't in the business of making CPUs or logic chips. Nonetheless, Japanese electronic calculator company Busicom approached Intel in 1969, asking the memory company to build a new set of logic chips for its calculators.

    Busicom proposed a fixed-purpose design requiring around a dozen chips. Busicom had designed the logic itself, and even verified that it was correct; it wanted Intel to build the things. Ted Hoff, manager of Intel's Application Department, realized that the design could be simplified and improved by using a general-purpose CPU instead of the specialized calculator logic that Busicom proposed. Hoff managed to convince both Intel and Busicom management that his approach was the right one.

    Work started six months later when Intel hired Federico Faggin in April 1970 to work on the project. Faggin had to design and validate the logic of the CPU. This was a challenge for Intel. As a memory company, it didn't have methodologies for designing or validating logic circuits. Intel's processes were geared towards the production of simple, regular repeating structures, rather than the highly varied logic that a CPU requires.

    Faggin's job was also made more complex by the use of silicon gate transistors. At the time, aluminum gates were standard, and while silicon eventually won out, its early development was difficult; silicon gates needed different design approaches than aluminum, and those approaches hadn't been invented yet.

    Nonetheless, Faggin was successful, and by March 1971 had completed the development work of a family of four different chips. There was a 2048-bit ROM, the 4001; a 40-byte RAM, the 4002; an I/O chip, the 4003; and finally, the CPU itself, 4004. Intel paid Busicom for the rights to the design, allowing the firm to sell and market the chip family. Branded as MCS-4, the chips started production in June 1971, before being advertised to the commercial markets 40 years ago today.

    Clumsy and cutting-edge

    The 4004 itself was a peculiar mix of cutting-edge technology and conservative cost-cutting. As an integrated CPU it was a landmark, but the design itself was clumsy even for 1970. Intel management insisted that the chip use a 16-pin DIP, even though larger, 40-pin packages were becoming mainstream at the time. This means that the chip's external bus was only four bits wide, and this single 4-bit bus had to transport 12-bit memory addresses, 8- and 16-bit instructions, and the 4-bit integers that the CPU operated on. Reading a single 16-bit instruction thus took four separate read operations. The chip itself had a 740 kHz clock, using 8 clock cycles per instruction. It was capable of 92,600 instructions per second, but with the narrow multipurpose bus, achieving this in practice was difficult.

    In 1972, Intel produced the 8-bit 8008. As with the 4004, this was built for a third party—this time terminal manufacturer Datapoint—with Datapoint contributing much of the design of the instruction set, but Intel using its 4004 experience to actually design the CPU. In 1974, the company released the 8080, a reworked 8008 that used a 40-pin DIP instead of 8008's 18-pin package. Federico Faggin did much of the design work for the 8008 and 8080.

    In spite of these pioneering products, Intel's management still regarded Intel as a memory company, albeit a memory company with a sideline in processors. Faggin left Intel in 1974, founding his own processor company, Zilog. Zilog's most famous product was the Z80, a faster, more powerful, software-compatible derivative of the 8080 that powered early home computers including the Radio Shack TRS-80 and the Sinclair ZX80, ZX81, and ZX Spectrum—systems that were many people's first introduction into the world of computing.

    Faggin's decision to leave Intel and go into business for himself caused some bad feeling, with Intel for many years glossing over his contribution. Nonetheless, he left an indelible mark on Intel and the industry as a whole, not least due to his decision to sign his initials, FF, on the 4004 die.

    The 8080 instruction set was then extended to 16 bits, with Intel's first 16-bit processor, the 20,000 transistor 8086, released in 1978. This was the processor that first heralded Intel's transition from a memory company that also produced processors into the world's leading processor company. In 1981, IBM picked the Intel 8088—an 8086 with the external bus cut to 8-bit instead of 16-bit—to power its IBM PC, the computer by which all others would come to be measured. But it wasn't until 1983, with memory revenue being destroyed by cheap Asian competitors, that Intel made microprocessors its core product.

    The processors of today continue to owe much of their design (or at least, the design of their instructions) to the 8086. They're unimaginably more complex, with the latest Sandy Bridge E CPUs using 2.2 billion transistors, a million-fold increase on the 4004 and a 100,000-fold increase on the 8086, yet the basic design elements are more than 30 years old.

    While the 4004 is widely regarded as the first microprocessor, and is certainly the best known, it arguably isn't actually the first. There are two other contenders.

    Texas Instruments' TMS 1000 first hit the market in calculators in 1974, but TI claimed it was invented in 1971, before the 4004. Moreover, TI was awarded a patent in 1973 for the microprocessor. Intel subsequently licensed this patent.

    Earlier than both of these was a processor called AL1. AL1 was built by a company named Four-Phase Systems. Four-Phase demonstrated systems built using AL1 in 1970, with several machines sold by early 1971. This puts them ahead of both TI and Intel. However, at the time AL1 was not used as a true standalone CPU; instead, three AL1s were used, together with three further logic chips and some ROM chips.

    Intel and Cyrix came to blows in a patent dispute in 1990, with TI's patent being one of the contentious ones. To prove that TI's patent should not have been granted, Four-Phase Systems founder Lee Boysel took a single AL1 and assembled it together with RAM, ROM, and I/O chips—but no other AL1s or logic chips—to prove that it was, in fact, a microprocessor, and hence that it was prior art that invalidated TI's claim. As such, although it wasn't used this way, and wasn't sold standalone, the AL1 can retrospectively claim to have been the first microprocessor.

    The 4004 is, however, still the first commercial microprocessor, and it's the first microprocessor recognized and used at the time as a microprocessor. Simple and awkward though its design may have been, it started a revolution. Ted Hoff, for convincing Busicom and Intel alike to produce a CPU, Federico Faggin, for designing the CPU, and Intel's management, particularly founders Gordon Moore and Robert Noyce, for buying the rights and backing the project, together changed the world.

    Photograph by Rostislav Lisovy

    [Nov 17, 2011] How smaller higher RPM hard drives can rip you off by George Ou

    September 19, 2006 | ZDNet

    Now let’s take a look at a 300 GB 10000 RPM hard drive that costs slightly more than the 147 GB 15000 RPM hard drive. This 10K RPM drive has an average rotational latency of 3 milliseconds which is 50% higher than the 15K RPM drive. It has an average seek time of 4.3 ms which is half a millisecond slower than the 15K RPM drive. Therefore the 10K RPM drive has an average access time of 7.3 milliseconds which means it can do a maximum of 137 IOPS for zero-size files. For 36 KB files, it would take up roughly 10% of the IOPS performance which means we should expect to see around 124 IOPS. Looking at the Storage Review performance database again, we see the actual benchmarked value is 124 IOPS.

    So we have an obvious performance winner, right? After all, 159 IOPS is better than 124 IOPS. Not so fast! Remember that the 15K RPM drive is less than 1/2 the size of the 10K RPM drive. This means we could partial stroke the hard drive (this is official storage terminology) and get much better performance levels at the same storage capacity. The top 150 GB portion of the 10K drive could be used for performance while the second 150 GB portion of the 10K drive could be used for off-peak archival and data mirroring. Because we’re partial stroking the drive using data partitions, we can effectively cut the average seek time in half to 2.15 ms. This means the average access time of the hard drive is cut to 5.15 ms which is actually better than the 15K RPM hard drive! The partial stroked 10K RPM drive would produce a maximum of 194 IOPS which is much better than 175 IOPS of the 15K RPM drive. So not only do we get an extra 150 GB archival drive for slightly more money, the active 150 GB portion of the drive is actually a better performer than the entire 147 GB 15K RPM drive.

    But this is a comparison of server drive components, and we can see a more dramatic effect in the desktop storage market. In that market, you will actually pay DOUBLE for a quarter of the capacity with 73 GB 10K RPM SATA drives compared to typical 300 GB 7200 RPM SATA hard drives. Now the speed difference is more significant, since the 7200 RPM drives have typical average seek times in the 8.9 millisecond range, and you have to add 4.17 milliseconds of average rotational latency for a relatively pathetic access time of 13.07 milliseconds. The 10K RPM SATA drive designed for the enthusiast performance desktop market has an average access time of 7.7 milliseconds. But since the 300 GB 7200 RPM drive is 4 times bigger than the 73 GB 10K drive, we can actually use quarter stroking and end up with a high-performance 75 GB partition along with a 225 GB partition we can use for large file archival such as a DVD collection.

    By quarter stroking the 300 GB drive, we can actually shave 6.68 ms off the seek time which means we’ll actually end up with an average access time of 6.4 milliseconds which is significantly faster than the 10K RPM "performance" drive. This means that PC enthusiasts are paying twice the money for a slower hard drive with a quarter of the storage capacity!
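The arithmetic in the last few paragraphs follows a simple model, which can be sketched as follows (this restatement is mine, using the article's figures): average access time is average seek plus average rotational latency, rotational latency averages half a revolution, and short stroking scales down the seek term.

```python
def rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return 0.5 * 60000.0 / rpm

def access_time_ms(seek_ms, rpm, stroke_fraction=1.0):
    """Average access time; stroke_fraction scales the seek term
    (0.5 = half-stroked, 0.25 = quarter-stroked)."""
    return seek_ms * stroke_fraction + rotational_latency_ms(rpm)

def max_iops(access_ms):
    """Best-case random IOPS for zero-size transfers."""
    return 1000.0 / access_ms

# The article's 300 GB 10K RPM drive: 4.3 ms seek, 3 ms rotational latency
print(max_iops(access_time_ms(4.3, 10000)))        # full stroke: ~137 IOPS
print(max_iops(access_time_ms(4.3, 10000, 0.5)))   # half stroke: ~194 IOPS

# The 7200 RPM desktop drive: 8.9 ms seek, ~4.17 ms latency (13.07 ms total)
print(access_time_ms(8.9, 7200, 0.25))             # quarter stroke: ~6.4 ms
```

Plugging in the article's numbers reproduces its figures: 137 and 194 IOPS for the full- and half-stroked 10K drive, and roughly 6.4 ms access time for the quarter-stroked 7200 RPM drive, beating the 7.7 ms of the 10K "performance" SATA drive.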

    Backing Up And Restoring Your Dedicated Server With SystemImager

    HowtoForge - Linux Howtos and Tutorials

    Version 1.0
    Author: Falko Timme <ft [at] falkotimme [dot] com>
    Last edited 06/17/2005

    This tutorial is based on the tutorial "Creating Images Of Your Linux System With SystemImager" (http://www.falkotimme.com/howtos/systemimager/index.php and http://www.howtoforge.com/howto_linux_systemimager) where you can find the basics about how to use SystemImager.

    Now let's assume you have a dedicated Linux server (rented or co-located) in some provider's data center, normally a few hundred kilometers away from your office or home. You want to make an image of that system so that you have a backup in case your server crashes, you accidentally delete all your customers' web sites, etc. (I'm sure you can imagine enough horror scenarios yourself...). Creating such an image is no problem, even on a remote system in a data center; it is all described in the "Creating Images Of Your Linux System With SystemImager" tutorial.

    But how do you restore such an image? That's the crucial point. The methods described in the "Creating Images Of Your Linux System With SystemImager" tutorial all require that you have physical access to your server and that your server has a floppy drive or a CD-ROM drive. But your server is a few hundred kilometers away, and nowadays only few servers have a floppy or CD-ROM drive.

    There is a solution; the only requirement is that your dedicated server has some kind of Linux rescue system, a feature that normally comes with dedicated servers offered by the big hosting companies. It basically works like this: your hosting company gives you the login to some kind of control panel where you can see a lot of information about your server, e.g. traffic consumption in the last few months, documentation, passwords, billing information, etc. There will also be a page that lets you select the boot mode of your server, i.e. normal system boot or rescue system. If you select the rescue system, the server will boot into it, and you can use it to repair your normal system. It is similar to using a Linux live CD (e.g. Knoppix) to repair a Linux machine in your office or at home.

    In this tutorial I will demonstrate how to restore an image on a dedicated server, using a server that the German hosting company Strato gave me for three months, free of charge, so that I could write this howto. Many thanks to Strato for their co-operation!

    If you have successfully tried the methods described here on other hosters' dedicated servers please let me know! I will mention it here.

    This howto is meant as a practical guide; it does not cover the theoretical background, which is treated in a lot of other documents on the web.

    This document comes without warranty of any kind!

    C H A P T E R 4 - Installing SUSE Linux Enterprise Server 10

    Installation Task (Goal): Relevant Procedure(s) or Source(s)

    - Run the Sun Installation Assistant: "How to Use the Sun Installation Assistant"
    - Install SLES 10 from a local or remote CD/DVD drive: "Installing SLES 10 From Distribution Media"
    - Install SLES 10 from a local or remote CD/DVD drive or PXE server: SUSE Linux Enterprise Server 10 Installation Manual
    - Install SLES 10 from an image stored on a networked system: "Creating a SLES 10 PXE Install Image on the PXE Server"
    - Install SLES 10 from a PXE server: "Installing SLES 10 From a PXE Server"
    - Update SLES 10 software: "Updating the SLES 10 Operating System"

    Choosing a Disk-Imaging Program

    Microsoft does not provide disk-imaging software. You must purchase a third-party disk-imaging program to create a disk image of a master computer’s hard disk.

    Not all disk-imaging programs are compatible with Windows Server 2003 and Windows XP Professional. When you evaluate disk-imaging programs, make sure you choose a program that supports the following Windows Server 2003 and Windows XP Professional features:

    In addition to these required features, consider choosing a disk-imaging program that supports the following optional features:

    Some disk-imaging programs can create, resize, or extend a partition before you copy a disk image onto a destination computer. Although these features might be useful, not all disk-imaging programs can perform these tasks: in fact, some programs might cause a STOP 0x7B error (INACCESSIBLE_BOOT_DEVICE). If you want to create a partition on a destination computer’s hard disk before you perform an image-based installation, you need to be sure the disk-imaging program is compatible with the file systems used by Windows Server 2003 and Windows XP Professional. If you want to resize or extend a partition before you copy a disk image onto a destination computer, use the ExtendOemPartition parameter in the Sysprep.inf file.

    For more information about Stop 0x7B errors, see article 257813, "Using Sysprep May Result in ‘Stop 0x7B (Inaccessible Boot Device)’ on Some Computers," in the Microsoft Knowledge Base. To find this article, see the Microsoft Knowledge Base link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources. For more information about using the ExtendOemPartition parameter, see "Automating Tasks Before Mini-Setup" later in this chapter.

    Note: If you are deploying a 64-bit edition of Windows XP or a 64-bit version of the Windows Server 2003 family, you must use a 64-bit disk-imaging program.

    [Nov 18, 2011] Cool Solutions Setting Up a SUSE PXE Installation Server in an Existing NetWare Environment

    30 Aug 2006

    I have a "NetWare shop" with no existing PXE boot services (read: ZENworks PXE enabled). I'm starting down the road of OES Linux, SLES, and SLED, and wanted to set up an installation server so that I can easily install machines without media. Though there are a number of good docs helping you set up an installation server, it was difficult to determine how to get PXE boot to work with my existing NetWare 6 DHCP services. Documents refer to configuring the SLES server as a DHCP server, but I didn't want to do that and potentially interfere with my existing, working-just-fine DHCP services on NetWare. What follows is a recipe with the specific steps for getting a working PXE-based installation server for your SUSE deployments in your existing NetWare environment. The example I use here is for a SLED 10 installation source, though it would be pretty much the same for OES or SLES 10.

    === SLES CONFIGURATION ===

    On your installation server (SLES):

    - go to Yast, and search for tftp, nfs, and syslinux and install these packages if they aren't already installed.

    - configure the TFTP server
    Yast > Network Services > TFTP
    Select Enable
    select a path for your tftp directory (eg: /tftpboot)
    select Finish

    - configure PXE boot files
    copy the pxelinux.0 file to the tftpboot directory:

    cp /usr/share/syslinux/pxelinux.0 /tftpboot

    copy the kernel and initrd files from your first SLED installation CD to the tftp directory:
    cp /(path to media)/boot/i386/loader/linux /tftpboot/sled10.krnl
    cp /(path to media)/boot/i386/loader/initrd /tftpboot/sled10.ird
    (I choose to copy these files to a renamed destination filename that references what they are. This way, I can also copy additional kernel and initrd files as additional installation choices in my PXE boot menu)

    create a pxelinux.cfg subdirectory under the tftp directory:

    mkdir /tftpboot/pxelinux.cfg

    copy the isolinux.cfg file from the first SLED installation CD to this subdirectory renaming it to default:
    cp /(path to media)/boot/i386/loader/isolinux.cfg /tftpboot/pxelinux.cfg/default

    edit the default file to point to your SLES installation server and replace this:

    # install
    label linux
    kernel linux
    append initrd=initrd splash=silent showopts

    with this:

    # SLED 10
    label SLED10
    kernel sled10.krnl
    append initrd=sled10.ird ramdisk_size=65536 install=nfs://172.16.0.99/sources/SLED10 splash=silent showopts

    (the kernel/initrd file names and the NFS server address and path are the details specific to this example; substitute your own)
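The hand edit above can also be scripted with sed. A minimal sketch, demonstrated on a scratch copy of the default file rather than the real /tftpboot/pxelinux.cfg/default (it assumes the stanza appears exactly as in the stock file; adjust the patterns, names, and NFS address for your environment):

```shell
# Build a scratch copy of the stock "install" stanza to edit.
CFG=$(mktemp)
printf '%s\n' '# install' 'label linux' 'kernel linux' \
  'append initrd=initrd splash=silent showopts' > "$CFG"

# Rewrite label, kernel, and append line in place, keeping the
# trailing "splash=silent showopts" options intact.
sed -i \
  -e 's/^label linux$/label SLED10/' \
  -e 's/^kernel linux$/kernel sled10.krnl/' \
  -e 's|^append initrd=initrd |append initrd=sled10.ird ramdisk_size=65536 install=nfs://172.16.0.99/sources/SLED10 |' \
  "$CFG"

cat "$CFG"
```

On the real server you would point CFG at /tftpboot/pxelinux.cfg/default instead of a temp file.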

    copy the message file from the first installation CD to the pxelinux.cfg directory:

    cp /(path to media)/boot/i386/loader/message /tftpboot/pxelinux.cfg

    edit the linux menu item in the message file (the "PXE menu") for the SLED10 entry you added in the default file above:
    change: linux - Installation
    to: SLED10 - SLED 10 Installation

    (NOTE: when typing this boot option at the PXE boot menu, it *IS* case sensitive, so you'll need to type SLED10 in uppercase in this example)

    - copy the contents of your installation DVD to a directory on your SLES server (eg: /sources/SLED10)

    mkdir -p /sources/SLED10
    cp -R /(path to media)/* /sources/SLED10

    If you have CD sources (as opposed to a DVD source), refer to this article for copying the CD contents into an installation source directory.

    - configure your NFS to export the directory containing your installation files:
    Yast > Network Services > NFS Server
    check to Start the NFS server, and open port in firewall (if firewall is enabled), then select Next
    click the Add Directory button, and select the directory containing your installation sources (eg: /sources), click Finish

    - restart the xinetd service

    rcxinetd restart

    So now your SLES server is ready to go, acting as a PXE server and providing the Installation source media.

    === NETWARE DHCP CONFIGURATION ===

    Now, you need to configure your NetWare DHCP server to correctly direct your PXE boot clients to your SLES server:

    - In your DNS/DHCP Management Console application:

    Select the DHCP Service tab

    Click on the subnet where your SLES box exists, select the Subnet Options tab, check on the Set Boot Parameter Option, and enter your Server Address of your SLES server and the Boot File Name to be pxelinux.0

    - If your NetWare DHCP server does not already have PDHCP.NLM and PDHCP.INI, find them on one of your ZENworks servers or search for a Novell download containing these files, and copy them to your DHCP server.

    - Make the following entries in the PDHCP.INI:
    TFTP_SERVER_IP=172.16.0.99
    USE_DHCP_PORT=0
    USE_BINL_PORT=1

    - Load PDHCP on your DHCP server (and add to your AUTOEXEC.NCF to start up).

    - Restart your DHCP services

    === BADA BING, BADA BOOM ===

    Boot up your PXE machine, you should get the default PXE menu!

    Now with PXE working, this PXE menu can be much more than just a menu for installing a new OS. You can add additional options to load up a number of "support disks" for diagnostics, wiping the disk, or booting into the DOS-based imaging solution you have. You know, all those great support floppies you thought you had to get rid of because the computers you buy now no longer have floppy drives in them <grin>.

    But, that'll be another article...

    Creating a custom Red Hat installation DVD

    October 2005

    How to create a single DVD for fast and easy customized installation.

    Setting up the build directory:

    The first thing to do is to copy all the cdrom ISOs to one location:

    mkdir -p /mnt/disk{1,2,3,4}
    mount -o loop RHEL4-U1-i386-AS-disc1.iso /mnt/disk1
    mount -o loop RHEL4-U1-i386-AS-disc2.iso /mnt/disk2
    mount -o loop RHEL4-U1-i386-AS-disc3.iso /mnt/disk3
    mount -o loop RHEL4-U1-i386-AS-disc4.iso /mnt/disk4
    We now copy all the files from the directories to a single directory:

    mkdir -p /data/isobuild
    rsync -rv /mnt/disk{4,3,2,1}/* /data/isobuild/

    We also need to copy across the .discinfo file, which is hidden and therefore not caught by our *:

    cp /mnt/disk1/.discinfo /data/isobuild/

    The .discinfo file identifies the disc as a correct Red Hat installer disc and is checked by anaconda at the start of the install.

    We could now build the DVD as it is but we really should have a fiddle first :-)

    Adding more software to the DVD

    We could add some of our own rpms to /data/isobuild/RedHat/RPMS; however, just copying them there does not make them available at install time. There is an XML file that is read to ensure that the packages are installed in the correct order.

    So let us throw a few random packages into the mix:

    Add some java:

    cp jre-1_5_0_03-linux-i586.rpm /data/isobuild/RedHat/RPMS/
    
    Some encryption for GAIM:
    cp gaim-encryption-2.36-3.rf.i386.rpm /data/isobuild/RedHat/RPMS/
    Updating the comps.xml file

    We need to ensure that the host computer has anaconda and anaconda-runtime installed:

    up2date anaconda anaconda-runtime

    Before we update the XML dependency file we need to sort out package orders. If you have added a lot of new packages you may need to remove some old packages that you have replaced with newer versions to stop conflicts.

    So the first command is:

    PYTHONPATH=/usr/lib/anaconda /usr/lib/anaconda-runtime/pkgorder \
        /data/isobuild/ i386 > /data/isobuild/xander-pkgorder

    This creates a list of files in the order it needs to install them in the file /data/isobuild/xander-pkgorder. Sometimes an occasional RPM will not provide the information anaconda needs. You can edit the file manually and insert your RPMs at the end.

    Next we need to generate the dependency file:

    /usr/lib/anaconda-runtime/genhdlist --fileorder /data/isobuild/xander-pkgorder \
        /data/isobuild/

    You will probably have a few hiccoughs the first time you run these commands. Most may be resolved by adding the missing entries to the pkgorder file or deleting duplicate packages.
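Spotting those duplicate packages can be scripted. A minimal sketch that lists package names present in more than one version, so superseded rpms can be removed before running pkgorder; it assumes standard name-version-release.arch.rpm filenames and is demonstrated here on a scratch directory standing in for /data/isobuild/RedHat/RPMS:

```shell
# Scratch directory with two versions of "foo" and one other package.
RPMS=$(mktemp -d)
touch "$RPMS"/foo-1.0-1.i386.rpm "$RPMS"/foo-1.2-3.i386.rpm \
      "$RPMS"/gaim-encryption-2.36-3.rf.i386.rpm

# Strip the "-version-release.arch.rpm" suffix, then report names
# that occur more than once.
DUPES=$(ls "$RPMS" \
  | sed -E 's/-[0-9][^-]*-[^-]+\.[^.]+\.rpm$//' \
  | sort | uniq -d)
echo "$DUPES"    # prints "foo"
```

Non-conforming filenames (such as jre-1_5_0_03-linux-i586.rpm) are left untouched by the sed pattern, so they never produce false duplicates, but they also won't be detected if you have two of them.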

    Creating an automated installer

    We could now, if we wanted, build our DVD; however, we can first turn it into an automated installer.

    So crack open system-config-kickstart and create a kickstart file with all the packages and partitioning etc you need for your systems.

    copy the resulting file to /data/isobuild/ks.cfg

    we can now edit the file /data/isobuild/isolinux/isolinux.cfg

    copy or change the three lines:

    label linux
      kernel vmlinuz
      append initrd=initrd.img ramdisk_size=8192
    to
    label xander
      kernel vmlinuz
      append initrd=initrd.img ramdisk_size=8192 ks=cdrom:/ks.cfg
    Then change the default at the top of the file to xander. This means that the default action is to install directly from the DVD using your kickstart file.
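The default-label change can also be scripted. A sketch, run here on a scratch copy rather than the real /data/isobuild/isolinux/isolinux.cfg, and assuming the file has a stock "default linux" line:

```shell
# Scratch copy of a minimal isolinux.cfg header.
ISOCFG=$(mktemp)
printf 'default linux\nprompt 1\ntimeout 600\n' > "$ISOCFG"

# Point the default boot entry at the kickstart-driven label.
sed -i 's/^default .*/default xander/' "$ISOCFG"
head -n1 "$ISOCFG"    # prints "default xander"
```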

    Building the DVD iso

    Now to build the iso:

    cd /data/isobuild
    
    chmod a+w isolinux/isolinux.bin
    
    mkisofs -r -T -J -V "Custom RHEL4 Build" -b isolinux/isolinux.bin \
     -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 \
    -boot-info-table -o /data/custom-RHEL4-dvd.iso  /data/isobuild/
    
    Burning the DVD

    Now we can burn the image to a DVD. I assume the DVD writer is already set up on your system. We use cdrecord below, but you can use other programs as well. The command is invoked as:

    cdrecord -v speed=4 dev=0,0,0 /data/custom-RHEL4-dvd.iso
    
    The speed and dev options depend on your system. The device for the dev argument can be determined by using the -scanbus option to cdrecord:
    cdrecord -scanbus
    
    Using the DVD

    Once the image is burned onto the DVD, insert the DVD into the target machine and boot the machine. You should get the custom message that you created earlier. At this point, you can either press Enter at the boot prompt or let it timeout. When it times out it uses the default label, which we specified as xander (our kickstart-driven install).

    Free-Codecs.com Burn4Free 2.0.0.0 -- Spyware installer

    Absolutely avoid this; the program simply can't be advertised as freeware...

    Burn4Free includes adware: NavHelper and NavExcel Search Toolbar free software. "NavHelper and NavExcel Search Toolbar software resolve domain name errors and enable you to search from anywhere on the web. The Search Toolbar also includes a free POPUP BLOCKER !"

    Adware (and spyware) software may record your surfing habits, deliver advertising, collect private information, or modify your system settings. Pay close attention to the End User License Agreement ("EULA") and installation options! We also recommend running Microsoft AntiSpyware or other antispyware software after the installation process.
    - Check tested CD burners list before downloading Burn4Free.

    ISO Commander

    ISO Commander is a CD/DVD image management utility. Its main features include creating and modifying bootable CD/DVD images, direct image editing, conversion from BIN/DAO/TAO/NRG images into standard ISO files, and much more.

    [Nov 18, 2011] ISO Master

    See also ISO Master - Wikipedia, the free encyclopedia

    Use ISO Master to:

    - Create or customise CD/DVD images

    - Make Bootable CDs/DVDs

    ISO images are great for distributing data files, software, and demos online

    [Nov 18, 2011] ISO Master - GUI Tool to edit ISO Images in openSUSE SUSE & openSUSE

    ISO Master, which is claimed to be the best ISO editing tool, is a graphical editor for ISO images. It is useful for extracting, deleting, or adding files and directories to or from an ISO image. ISO Master can read .ISO files (ISO9660, Joliet, RockRidge, and El Torito), most .NRG files, and some single-track .MDF files, but can save only as .ISO.

    The supported operations include adding/deleting files and directories in the ISO image, modifying/deleting the boot records, extracting files from the ISO, etc.

    Install ISO Master

    Packman as always hosts a 1-click install Yast Metapackage for ISO Master. This installer is supported on openSUSE 11.0, openSUSE 10.3, openSUSE 10.2, and SUSE 10.1 & 10.0

    [Nov 18, 2011] ISO Recorder v 3.1 Windows 7

    isorecorder.alexfeinman.com

    ISO Recorder is a tool (power toy) for Windows XP, 2003 and now Windows Vista that allows you (depending on the Windows version) to burn CD and DVD images, copy disks, make images of existing data CDs and DVDs, and create ISO images from the content of a disk folder.

    ISO Recorder was conceived during the Windows XP beta program, when Microsoft for the first time started distributing new OS builds as ISO images. Even though the new OS had CD-burning support (by Roxio), it did not have the ability to record an image. ISO Recorder filled this need and has been one of the popular Windows downloads ever since.

    With the advent of Windows XP SP2 and Windows Server 2003, version 2 of ISO Recorder was released, introducing some new features including ISO creation and support for non-admin users.

    Finally, in Windows Vista it became possible to address another long-standing request and provide DVD burning capability.

    Since the very beginning ISO Recorder has been a free tool (for personal use). It is recommended by the MSDN download site along with Easy CD and Nero and is used by a number of companies around the world.

    Download

    [Nov 18, 2011] Mount an ISO image in Windows 7, Windows 8 or Vista

    The freeware utility from Microsoft to mount ISO Images doesn’t work in Windows 7 or Vista. Thankfully there’s another utility that does.

    The utility that we will use is called Virtual Clone Drive. This utility will let you mount .ISO, .CCD, .DVD, .IMG, .UDF and .BIN files.

    [Nov 18, 2011] openSUSE Lizards

    See also Creating modified installation system.

    I have started maintaining yast2-repair, and the first bug I tackled is that repair from the DVD menu doesn't work the same way as from the installation menu. Finding where the problem is, and testing whether the fix is correct, is not trivial. Below I describe how to modify inst-sys on a DVD or the software on a LiveCD.

    I found useful information on our wiki – Creating modified installation system. But I didn't want to install from the network, as it doesn't show the same menu as the DVD. I tried mkisofs, but it is not easy to set the same boot sector that the original ISO image has. Then I found good software – isomaster. It allows you to replace a file in an ISO image and remembers where the original image keeps its boot sector. The resulting ISO can easily be tested in, e.g., VirtualBox, so I could verify that my fix works before we release the first DVD of openSUSE 11.2. Just a side note – linuxrc shows a warning that your inst-sys doesn't match the checksum.

    The same way it is possible to edit a whole LiveCD. Simply mount and copy the content of the live system to your disk, and with zypper --root you can change the software in your LiveCD. Then create a squashfs (I use mksquashfs live_root openSUSE-kde-11.1-read-only.i686-2.7.0 -no-duplicates -noappend), and with isomaster replace the openSUSE-* file, and you have your modified LiveCD.

    [Nov 20, 2011] Acer ICONIA Tab W500-BZ467 32 GB - Win 7 Home Premium 1 GHz - Gray

    $490 online. Probably too early for tablets. We may be looking at a mid-2012 release of Windows 8. See also ACER ICONIA W500-BZ467 TABLET - YouTube

    ...The screen is beautiful, on par with ipad 2. Speaker is much better. The SD slot & Hdmi/usb ports are definitely great.

    ... Download the new touch firmware from Acer and that should fix the problem. A stylus will also help with using touch on Win7, making clicking on small check boxes and the like much easier.

    ... I've had my W500 for about a month, done all the clean-outs (have you?), installed all the updates, etc. & use this extensively with W7 & full Office products & have absolutely NO issues with Windows 7 touch capabilities. I get 6+ hours between recharges on the W500 which is enough for me on any given day.

    ...With the full-size dockable keyboard complete with Ethernet port for fast Internet connections, a USB port for external devices, and the integrated Acer FineTrack pointing device with two buttons for effortless navigation

    [Nov 20, 2011] ASUS Eee PC 1015PX-SU17-BK 10.1-Inch Netbook Electronics

    This is a Linux-compatible netbook. $379.36 (Amazon price as of Nov 20, 2011)
    Amazon.com

    [Nov 20, 2011] ASUS UL20FT-B1 12-Inch Laptop (Silver)

    I bought this as a replacement for a Sony Vaio 13.3" (severe overheating issues) laptop for my fiancée to write her thesis. At first I was going to buy her a netbook because they are cheaper, but after some intense searching I stumbled upon this model on Newegg; it happened to be sold out, so I clicked on it, looked at the specs, and was immediately impressed for the advertised price. It was sold out because of the combination of a rebate and a super low price on Newegg, but even for $75 more on Amazon, it couldn't be beat.

    In use now, I got TONS of bonus points for this surprise.

    Pros:

    Cons:

    [Nov 20, 2011] Lenovo Z370 10252EU 13.3-Inch Laptop 

    [Nov 20, 2011] Some real deals

    See specs Essential G470 14 laptop

    [Nov 21, 2011] Best Windows freeware

    Multimedia

    PDF Utilties

    Productivity

    Security Folder

    Utility Folder

    [Nov 23, 2011] Linux Multipath Focusing on Linux Device Mapper -

    The HP Blog Hub

    Enterprise computing requires consistency, especially when it comes to mapping storage LUNs to unique devices.

    In the past several months, I have encountered multiple situations where customers have lost data due to catastrophic corruption within environments utilizing device mapper (multipath) on Linux. Read further to determine if you are at risk of suffering a similar fate.

    Background:

    Does it matter if a SCSI disk device file, or the physical disk which the device file references, changes? While the system is up, it would be problematic for a device file to change; however, the kernel will not change an open file descriptor. Therefore, as long as the device is open (i.e. mounted, an activated LVM volume, etc.) the device structure in the kernel will not change. The issue is what happens when the device is closed.

    With QLogic or Emulex HBAs, every boot can result in SCSI disk re-enumeration, because Linux enumerates devices based on scan order. Device Mapper “solves” this condition by providing persistent devices based on a device's inquiry string (GUID).

    Do persistent devices matter? Examples:

    Linux LVM – NO

    Veritas Volume Manager – NO

    LABEL – NO

    Oracle OCR – YES
    RAW devices – YES

    /dev/sd# in /etc/fstab -- YES

    I recently experienced an Oracle RAC issue where the OCR disks changed on a system while it was booted but before the cluster services were started.

    First, RAW devices are mapped using a configuration file:

    # /etc/sysconfig/rawdevices

    /dev/raw/raw1 /dev/mpath/mpath1

    /dev/raw/raw2 /dev/mpath/mpath2

    /dev/raw/raw3 /dev/mpath/mpath3

    /dev/raw/raw4 /dev/mpath/mpath4

    The problem occurred when device mapper’s mapping of mpath1 to its SCSI LUN changed to a different LUN after reboot. OCR started and wrote its header to ASM disks. Needless to say, this is BAD.

    Device mapper configuration

    Device mapper persistence is configured via a bindings file defined in multipath.conf.

    Default location of bindings

    either

    /etc/multipath/bindings

    or

    /var/lib/multipath/bindings

    The problem arises when the bindings file is not available to dm_mod at boot time. In the above scenario, the bindings file was located in a filesystem which had not been mounted yet; therefore, device mapper had no reference to use when creating the mappings to the SCSI LUNs presented. It was pure luck that the mappings had not changed earlier than they did.

    The multipath.conf file pointed the bindings file at /var/lib/multipath/bindings. The system booted, eventually mounting /var and covering up the bindings file. The multipath command was run at some later date, when more storage was added to the production system; it had to rebuild the bindings file to record the new LUNs, and since the file had no entries, all LUNs were re-enumerated. The cluster services were down on this node, but the OCR disks were remapped to other SCSI LUNs, so when Oracle services were started and this node attempted to join the cluster, the damage was done.

    What to look for:

    LOGS

    Feb 23 13:00:05 protect_the_innocent multipathd: remove mpath15 devmap

    Feb 23 13:00:05 protect_the_innocent multipathd: remove mpath28 devmap

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath226 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath227 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath228 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath229 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath230 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath231 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath232 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath233 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath234 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath235 removed

    Feb 23 13:00:07 protect_the_innocent multipathd: dm map mpath236 removed

    Multipath.conf

    Confirm that the multipath.conf is in the root filesystem.

    There is a RHEL article on this:

    https://bugzilla.redhat.com/show_bug.cgi?id=409741

    Example:

    Boot time

    mpath236 (360060e8005709a000000709a000000b3)

    the multipath command was run, followed by multipath -l:

    mpath212 (360060e8005709a000000709a000000b3)

    Checking the bindings file:

    /var/lib/multipath/bindings:

    mpath212 360060e8005709a000000709a000000b3

    Solution:

    1. Confirm the bindings file is available at boot time. If /var is a separate filesystem, configure the location of the bindings file in multipath.conf so that it is /etc/multipath/bindings.
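A sketch of the relevant multipath.conf fragment (bindings_file is the option name in the device-mapper-multipath releases of this era; check your version's man page):

```
defaults {
        user_friendly_names yes
        bindings_file       /etc/multipath/bindings
}
```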

    2. Create a startup script which records a historical map of the devices at boot time, so that your team can see how the device map changes over time.

    Example command:

    # multipath -l | grep -e mpath -e sd | sed -e :a -e '$!N; s/\n/ /; ta'| sed 's/mpath/\nmpath/g' | sed 's/\\_//g'

    The above command makes it easy to map mpath device files to their unique SCSI LUNs.
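The sed chain above can be written more readably in awk. A sketch, run here against captured sample output (on a live system you would pipe "multipath -l" into the awk program instead); note the exact output format varies between device-mapper-multipath versions, so treat the patterns as assumptions:

```shell
# Sample "multipath -l" output as seen on RHEL-era multipath tools.
SAMPLE='mpath1 (360060e8005709a000000709a000000b3) dm-0 HP,OPEN-V
[size=10G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:1 sda 8:0  [active][ready]
 \_ 1:0:0:1 sdc 8:32 [active][ready]'

# Start a line per mpath device; append every sdX field found on
# the path lines that follow it.
MAP=$(printf '%s\n' "$SAMPLE" | awk '
  /^mpath/ { printf "%s%s:", n++ ? "\n" : "", $1 }
  { for (i = 1; i <= NF; i++) if ($i ~ /^sd[a-z]+$/) printf " %s", $i }
  END { print "" }')
echo "$MAP"    # prints "mpath1: sda sdc"
```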

    [Nov 24, 2011] ASUS EB1007-B0410 EeeBox Mini Desktop PC Computers & Accessories

    Size of a paperback: 7.0 x 8.74 x 1.06. ~2.58 lb. ~$220.
    Amazon.com

    Processor type: Intel, 1.66GHz clock speed. Hard drive: 250GB. RAM: 1GB. Multimedia drive: yes. Operating system: Linux. Monitor size: N/A. Graphics: Intel integrated. Speakers: no. Primary color: black. Model no.: EB1007B0410. Shipping weight: 6.05 lb. Product dimensions (L x W x H): 18.0 x 10.0 x 3.8 cm.

    [Nov 24, 2011] Asus EB1012PB0320 Eeebox Eb1012p B0320 Bb Sff Fcbga559 65w Epeat Gold Desktop (Black)

    Amazon.com

    [Nov 24, 2011] Lenovo ThinkPad X120e

    $359 direct. Gigabit Ethernet port.

    [Nov 24, 2011] National Testing Push Yielded Few Learning Advances Report (VIDEO)

    NEW YORK -- Education policies pushing more tests haven't necessarily led to more learning, according to a new National Research Council report.

    "We went ahead, implementing this incredibly expensive and elaborate strategy for changing the education system without creating enough ways to test whether what we are doing is useful or not," said Dan Ariely, a professor of behavioral economics at the Massachusetts Institute of Technology and member of the committee that produced the report.

    Heavily testing students and relying on their scores in order to hold schools -- and in some cases teachers -- accountable has become the norm in education policy. The No Child Left Behind Act, the largest piece of education legislation on the federal level, for example, uses performance on math and reading exams to gauge whether schools are failing or succeeding -- and which schools are closed or phased out.

    "Incentives are powerful, which means they don't always do what they want them to do," said Kevin Lang, a committee member who also chairs Boston University's economics department. "As applied so far, they have not registered the type of improvements that everyone has hoped for despite the fact that it's been a major thrust of education reform for the last 40 years."

    The tests educators rely on are often too narrow to measure student progress, according to the study. The testing system also failed to adequately safeguard itself, the study added, providing ways for teachers and students to produce results that seemed to reflect performance without actually teaching much.

    "We're relying on some primitive intuition about how to structure the education system without thinking deeply about it," Ariely said.

    Increasing test scores do not always correlate to more learning or achievement, the study authors said. For example, Lang mentioned that high school exit test scores have been found to rise while high school graduation rates stagnate.

    "None of the studies that we looked at found large effects on learning, anything approaching the rhetoric of being at the top of the international scale," Lang said. He added that the most successful effects the report calculated showed that NCLB programs moved student performance by eight hundredths of the standard deviation, or from the 50th to the 53rd percentile.

    The report, released Thursday and sponsored by the Carnegie Corporation of New York and the William and Flora Hewlett Foundation, recommends more rigorous testing of reforms before their implementation. "Before we did welfare reform, we did a lot of experiments at the state level," Lang said.

    "We tried different ways of doing it and we learned a lot, instead of deciding that on the basis of rather casual theorizing that one set of reforms was obviously the way to go," Lang added. "There has not at this point been as much experimentation at the state level in education."

    The 17-member committee responsible for the study, according to Education Week, is a "veritable who's who of national experts in education law, economics and sciences." The National Academies -- a group of four institutions chartered by Congress to consult on various issues -- launched the committee in 2002, and since then, it has tracked the effects of 15 programs that use tests as teaching incentives.

    The report comes as Congress works to reauthorize and overhaul No Child Left Behind, and as states countrywide pass laws that link the hiring and firing of teachers to their students' performance on standardized tests.

    "It raises a red flag for education," Ariely said. "These policies are treating humans like rats in a maze. We keep thinking about how to reorganize the cheese to get the rats to do what we want. People do so much more than that."

    This reductive thinking, Ariely said, is also responsible for spreading the notion that teachers are in the profession for the money. "That's one of the worst ideas out there," he said. "In the process of creating No Child Left Behind, as people thought about these strategies and rewards, they actually undermined teachers' motivations. They got teachers to care less, rather than more," he added, because "they took away a sense of personal achievement and autonomy."

    The report's findings have implications for developing teacher evaluations, said Henry Braun, a committee member who teaches education and public policy at Boston College. When "we're thinking about using test-based accountability for teachers, the particular tests we're using are important," he said. "But just as important is the way it's embedded into the broader framework. The system as a whole, as it plays out, determines whether we end up with better teachers."

    WATCH: Daniel Koretz, a professor at Harvard's school of education who sat on the committee that produced the report, discusses skills needed in the 21st century.

    [Nov 24, 2011] Stanford Economist Rebuts Much-Cited Report That Debunks Test-Based Education

    The 112-page-long NRC study came at a critical point during the NCLB discussion -- and it read as a manifesto against the use of testing as a tool to promote learning, Hanushek claims. The report found NCLB to be the most effective test-based policy, but even then, it found that the law's programs moved student performance by eight hundredths of the standard deviation, or from the 50th to the 53rd percentile. Other more low-stakes tests were found to show "effectively zero" effects on achievement. According to the NRC report:

    Test-based incentive programs, as designed and implemented in the programs that have been carefully studied, have not increased student achievement enough to bring the United States close to the levels of the highest achieving countries.

    "This is an extraordinarily serious and contentious policy issue," Hanushek told The Huffington Post Monday. "I am quite taken aback by people who read the report and said that testing policies don't produce learning. The evidence that they provide indicates that accountability has provided significant positive impacts."

    In response to the report, Hanushek titled his article, "Grinding the Antitesting Ax: More bias than evidence behind the NRC panel's conclusions," and jazzed up its first page with a man in overalls, well, grinding an ax. Hanushek concludes:

    The NRC has an unmistakable opinion: its report concludes that current test-based incentive programs that hold schools and students accountable should be abandoned. The report committee then offers three recommendations: more research, more research, and more research. But if one looks at the evidence and science behind the NRC conclusions, it becomes clear that the nation would be ill advised to give credence to the implications for either NCLB or high-school exit exams that are highlighted in the press release issued along with this report.

    The committee that produced the NRC report formed about a decade ago, in the wake of the implementation of NCLB, the strongest federal test-based accountability law ever passed. The National Academies -- a group of four institutions chartered by Congress to consult on various issues -- launched the committee in 2002, and since then, it tracked the effects of 15 programs that use tests as teaching incentives. According to the report, its members were chosen to represent a balanced mix of view points due, in part, to the "tension between the economics and educational measurement literatures about the potential of test-based accountability to improve student achievement."

    Its 17 members included economists such as Duke's Dan Ariely and Boston University's Kevin Lang, educational experts like Harvard's Dan Koretz and Stanford's Susanna Loeb, in addition to a former superintendent, a psychologist, a sociologist and a political scientist. The committee also saw presentations from various experts, including Hanushek himself.

    According to Hanushek's analysis, the panel's thorough examination of multiple studies is not evident in its conclusions.

    "Instead of weighing the full evidence before it in the neutral manner expected of an NRC committee, the panel selectively uses available evidence and then twists it into bizarre, one might say biased, conclusions," Hanushek wrote.

    The anti-testing bias, he says, comes from the fact that "nobody in the schools wants people looking over their shoulders."

    Hanushek, an economist, claims that the .08 standard deviation increase in student learning is not as insignificant as the report makes it sound. According to his calculations, the benefits of such gains outweigh the costs: that amount of learning, he claims, translates to a value of $14 trillion. He notes that if testing is expanded at the expense of $100 per student, the rate of return on that investment is 9,189 percent. Hanushek criticized the report for not giving enough attention to the benefits NCLB provided disadvantaged students.

    The report, Hanushek said, hid that evidence.

    "They had that in their report, but it's buried behind a line of discussion that's led everybody who's ever read it to conclude that test-based accountability is a bad idea," he said. Hanushek reacted strongly, he said, because of the "complacency of many policymakers" who say education should be improved but that there are no effective options.

    But Lang, a member of the committee who produced the report, said Hanushek's critique is misguided. "His objection is that he feels that we said stop test-based accountability," he said. "We very clearly did not say that."

    Rather, Lang said, the report showed that test-based policies don't produce the effects claimed by their proponents. "What we said was test-based accountability is not having the kind of effect that the rhetoric suggests," Lang continued. "The rhetoric behind test-based accountability is the major force for education reform."

    But Paul Hill, a research professor and director of the University of Washington's Center on Reinventing Public Education who also sat on the NRC committee, saw merit in Hanushek's critique. "The conclusions were more negative about the contributions of test-based accountability than his review of the evidence would suggest," Hill said. "That's well worth considering."

    Hill said he was slightly concerned with the report itself, and that its tone was a product of a committee comprised of experts with mixed views on testing. "It said that test-based accountability alone won't raise achievement," he said. "I believe that. Test-based accountability, though, with reasonable supplementary policies … is a good idea."

    The apparent anti-testing bias, Hill said, came from those on the committee with backgrounds in education.

    "This is not a group of wackos," Hill said. "Inside the education profession, there's a lot of resentment against the use of tests."

    [Nov 25, 2011] Black Friday Antidote: George Carlin on Advertising and Consumerism

    I've always had the impression that corporate HR and IT departments are managed by former Soviet bureaucrats. There is not a more honesty-enforcing device in modern life than a compiler and the attendant run-time system, nor a greater intellectual joy than the art and science that can be created with it. But IT departments are generally managed by people who failed programming.
    naked capitalism

    Americans roll from a holiday that has come to be about overeating to a day where merchants hope to seduce customers into an orgy of overspending.

    In an interesting bout of synchronicity, Michael Thomas just sent me a link to this George Carlin video. It may help steel the will of Black Friday conscientious objectors. I’m also looking forward to Carlin’s characteristic crudeness offending the Proper Discourse police (this clip is tame compared to The Aristocrats).

  • postmodernprimate says: November 25, 2011 at 2:02 am

    Bill Hicks is another comic genius who would be having a field day with the obscenely target-rich satire bounty modern America has become.

    Bill Hicks on Marketing

    “By the way if anyone here is in advertising or marketing… kill yourself. I’m just trying to plant seeds. Maybe one day, they’ll take root – I don’t know. You try, you do what you can. Seriously though, if you are, do. I know all the marketing people are going, “he’s doing a joke…” there’s no joke here whatsoever. Suck a tail-pipe, hang yourself, borrow a gun – I don’t care how you do it. Rid the world of your evil machinations. I know what all the marketing people are thinking right now too, “Oh, you know what Bill’s doing, he’s going for that anti-marketing dollar. That’s a good market, he’s very smart…”

  • Sock Puppet says: November 25, 2011 at 2:35 am

    George Carlin on the American Dream: http://www.youtube.com/watch?v=acLW1vFO-2Q

  •  Finance Addict says: November 25, 2011 at 2:41 am

    Also consider this on Black Friday: a research paper with a claim of hard evidence that television led to increased debt in the U.S.

    http://financeaddict.com/2011/11/black-friday-television-and-debt/

    [Nov 25, 2011] CAS latency - Wikipedia, the free encyclopedia

    Column Address Strobe (CAS) latency, or CL, is the delay time between the moment a memory controller tells the memory module to access a particular memory column on a RAM memory module, and the moment the data from given array location is available on the module's output pins. In general, the lower the CAS latency, the better.
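As a worked illustration of what the CL number means in absolute terms (the DDR3-1600 figures below are assumptions for the example, not from the article): the absolute delay is the CL cycle count divided by the I/O bus clock frequency.

```python
# Sketch: convert CAS latency (CL, counted in clock cycles) into absolute time.
# The DDR3-1600 figures are illustrative assumptions, not from the article.

def cas_latency_ns(cl_cycles, io_bus_clock_mhz):
    """Absolute CAS delay in nanoseconds: cycles divided by clock frequency."""
    return cl_cycles * 1000.0 / io_bus_clock_mhz

# DDR3-1600 clocks its I/O bus at 800 MHz (data moves on both clock edges,
# hence 1600 MT/s), so CL must always be read against the module's clock.
print(cas_latency_ns(9, 800))    # CL9  at 800 MHz -> 11.25 ns
print(cas_latency_ns(11, 800))   # CL11 at 800 MHz -> 13.75 ns
```

The practical consequence: when comparing modules, the cycle count alone is not enough; a lower-clocked module with a small CL can have the same absolute latency as a faster module with a larger CL.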

    [Nov 25, 2011] Linux Enterprise Server 11 SP2 Storage Administration Guide

    Publication Date 25 Nov 2011

    [Nov 25, 2011] Linux syslog may be on way out By Brian Proffitt

    It is naive to link a binary format with better security. Hacking tools to "sanitize" binary logs in sophisticated ways will appear almost instantly, so instead of an improvement in security we will simply get the next round of the intruders-vs-defenders war. Bob Simpson, in his comment below, made an important point: the authors of the proposal think locally rather than globally, and the key to log security is thinking globally. Still, it is true that the current syslog implementation, which traces its lineage to Sendmail, is outdated and can be improved. But it can be improved without abandoning the text format and throwing the baby out with the bathwater. Remote syslog is the only real security-enhancing mechanism for syslog, but "chain-signing" records with a certificate can also be useful at the local level and complicates tampering. The effort spent on syslog should probably be extended to shell history, which can also benefit from anti-tampering mechanisms. One easy step is to make the history file append-only ("unshrinkable") via filesystem attributes such as chattr +a.
    November 22, 2011, 1:42 PM ITworld

    In an effort to foil crackers' attempts to cover their tracks by altering text-based syslogs, as well as to improve the syslog process as a whole, two Red Hat developers are proposing a new binary-based tool called The Journal that could replace the syslog daemon as early as the Fedora 17 release.

    And believe you me, some people are less than enthused by the proposed solution.

    Developers Lennart Poettering and Kay Sievers argue that the current 30-year-old syslog system is inefficient and too easy to misread and hack to properly perform even its most basic function: storing a log of system events on a given Linux box.

    This is largely due to the free-form nature of the syslog, which basically accepts text strings in whatever form the application or daemon on the Linux system chooses to send. So, one daemon may send information about an event in one way, and another daemon in a completely different way, leaving it up to the human reader to parse the information in a useful manner. Automated log analyzer tools can help with this, but in a detailed description of The Journal, Poettering and Sievers wrote:

    "The data logged is very free-form. Automated log-analyzers need to parse human language strings to a) identify message types, and b) parse parameters from them. This results in regex horrors, and a steady need to play catch-up with upstream developers who might tweak the human language log strings in new versions of their software. Effectively, in a way, in order not to break user-applied regular expressions all log messages become ABI of the software generating them, which is usually not intended by the developer."
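The "regex horrors" point is easy to sketch with a toy parser. The log line, the regular expression, and the reworded message below are all invented for illustration; the point is only that a parser bound to exact human-language wording breaks silently when upstream wording changes:

```python
import re

# A parser tied to the exact wording of one daemon's log messages
# (message strings here are hypothetical examples).
pattern = re.compile(r"Accepted (\w+) for (\S+) from (\S+) port (\d+)")

old = "Accepted publickey for alice from 10.0.0.5 port 51022"
new = "Authenticated alice via publickey from 10.0.0.5:51022"  # hypothetical upstream rewording

print(bool(pattern.search(old)))  # True  -- the regex matches today's wording
print(bool(pattern.search(new)))  # False -- the reworded message is silently unparsed
```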

    That's just one of 14 points the two developers have highlighted as problems with the current syslog system. Others include:

    And so on. Poettering and Sievers highlighted one very topical problem with the syslog system to drive their points about a needed change to syslog home:

    "For example, the recent, much discussed kernel.org intrusion involved log file manipulation which was only detected by chance."

    With these factors in mind, Sievers and Poettering have come up with The Journal daemon, which will store data from system events in binary--not text--form as a list of key-value pairs that includes hashing for additional security.

    This is not the first time these two developers have proposed such sweeping changes to the Linux system infrastructure. Poettering is the developer who invented the systemd daemon that replaced the System V init daemon on Linux, as well as invented the PulseAudio sound server. Sievers was most recently one of the Fedora Project team members who proposed to move all executable files into the /usr/bin directory and their libraries into /usr/lib or /usr/lib64, as needed.

    With this binary implementation, The Journal daemon can enable the addition of metadata to each system event, such as the process ID and name of the sender, user and group IDs, and other key system data.

    "Inspired by udev events, journal entries resemble environment blocks. A number of key/value fields, separated by line breaks, with uppercase variable names. In comparison to udev device events and real environment blocks there's one major difference: while the focus is definitely on ASCII formatted strings, binary blobs as values are also supported--something which may be used to attach binary meta data such as ATA SMART health data, SCSI sense data, coredumps or firmware dumps. The code generating a journal entry can attach as many fields to an entry as he likes, which can be well-known ones, or service/subsystem/driver specific ones."
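The environment-block style described in that quote can be sketched as follows. The field names follow the proposal's conventions (uppercase keys; an underscore prefix for trusted metadata added by the daemon itself), but the specific names and values here are illustrative, not taken from the article:

```
PRIORITY=6
MESSAGE=Accepted publickey for alice from 10.0.0.5
_PID=4711
_UID=0
_COMM=sshd
_HOSTNAME=server1
_MACHINE_ID=8e21a5c4d7b94f0e9a31c6d2e8f70b11
```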

    If all of this seems a bit familiar to developers, see if this rings a bell: a lot of the effort here by Poettering and Sievers was inspired by the key/value, hash, and metadata provided to developers who use the git version control system.

    Not only will implementing The Journal make a Linux system more secure (as unauthorized log entries or unexpected data field entries will immediately be flagged by the journal daemon), its inventors hope to actually reduce the footprint of the logging system on Linux by unifying all log systems on a Linux machine and efficiently restructuring the data.

    "It is designed in a way that log data is only attached at the end (in order to ensure robustness and atomicity with mmap()-based access), with some meta data changes in the header to reference the new additions. The fields an entry consists of are stored as individual objects in the journal file, which are then referenced by all entries that need them. This saves substantial disk space since journal entries are usually highly repetitive (think: every local message will include the same _HOSTNAME= and _MACHINE_ID= field). Data fields are compressed in order to save disk space. The net effect is that even though substantially more meta data is logged by the journal than by classic syslog the disk footprint does not immediately reflect that."

    But not everyone is thrilled with the proposal. Poettering and Sievers anticipated that many developers and system admins would be unhappy with The Journal's use of UUIDs to identify messages--as evidenced by their tongue-in-cheek attention to the issue in the FAQ section of their proposal document.

    But many of the objections voiced on Linux Weekly News, where the proposal was first highlighted, lament the replacement of a simple text-based system with a binary data format that will rely on one tool--The Journal--which in turn will only be available with the systemd daemon.

    Several commenters picked up on this entry in The Journal proposal FAQ:

    "Will the journal file format be standardized? Where can I find an explanation of the on-disk data structures?

    "At this point we have no intention to standardize the format and we take the liberty to alter it as we see fit. We might document the on-disk format eventually, but at this point we don't want any other software to read, write or manipulate our journal files directly. The access is granted by a shared library and a command line tool. (But then again, it's Free Software, so you can always read the source code!)"

    That entry, more than any other in the proposal document, generated a lot of controversy, as many LWN commenters objected to the idea of using a non-standard format for The Journal's data. Backwards compatibility was also a big point of concern.

    "It's a shame that we will lose the simplicity of the plain-text syslog format. But syslogs are usually compressed using gzip anyway. So essentially for me, all this means is that I use <magic-lennart-tool> instead of gzcat as the first part of my shell command," wrote commenter C. McCabe. "The big issue that I see is that a lot of system administrators will treat this as magic security dust, and not realize that they need to periodically save those hashes to a remote (and secure!) system in order to get any security benefit."

    "I also hope Lennart and co. realize the absolute necessity of backwards compatibility for the on-disk format," McCabe added. "It would really embitter a lot of system administrators if their old logs became unreadable after upgrading to the shiniest new version. But assuming this is managed well, I don't see any reason why this couldn't be a good idea."

    How this plays out in the broader Linux community will be interesting, to be sure. I personally find it notable that Fedora (and its commercial parent Red Hat) now seems to be the project where many internal infrastructure changes to the Linux operating system are getting implemented, even as distros like Ubuntu focus on the interface and user space.

    This is not a judgmental statement, but rather an observation. Linux is clearly undergoing some significant evolutionary changes and shedding some of its UNIX legacy. What remains to be seen is how these changes will affect Linux as it moves ahead.

    Selected comments

    alevesely_tw273908198

    I'm not sure why the proposed design would be definitely better than upgrading syslog() to cover recent standardizations such as:

    * RFC 5424: The Syslog Protocol
    * RFC 5674: Alarms in Syslog
    * RFC 5675: Mapping Simple Network Management Protocol (SNMP) Notifications to SYSLOG Messages
    * RFC 5676: Definitions of Managed Objects for Mapping SYSLOG Messages to Simple Network Management Protocol (SNMP) Notifications
    * RFC 5848: Signed Syslog Messages

    The proposer of an enhancement should take the trouble to standardize it, besides coding, experimenting with, and evangelizing it.

    BobSimpson

    Is this a joke? Or is it someone just trying to push their ideology of what they think should be done to the rest of the world to make their idea a standard?

    Doing something like this would be a sure way for Linux to shoot itself in the foot. For evidence, one only needs to look as far as Microsoft, who insists on doing it their special way and expecting everyone else to do what they deem as "good". The concept of syslog messages is that they are meant to be 'open' so disparate systems can read the data. How do you propose to integrate with large syslog reporting/analysis tools like LogZilla (http://www.logzilla.pro)?

    The authors are correct that a format needs to be written so that parsing is easier. But how is their solution any "easier"? Instead, there is a much more effective solution available known as CEE (http://cee.mitre.org/) that proposes to include fields in the text.

    Syslog data is not authenticated.

    > If you need that, then use TLS/certificates when logging to a centralized host.

    Syslog is only one of many logging systems on a Linux machine.

    > Surely you're aware of syslog-ng and rsyslog.

    Access control to the syslogs is non-existent.

    > To locally stored logs? Maybe (if you don't chown them to root?)

    > But, if you are using syslog-ng or rsyslog and sending to a centralized host., then what is "local" to the system becomes irrelevant.

    Disk usage limits are only applied at fixed intervals, leaving systems vulnerable to DDoS attacks.

    > Again, a moot point if admins are doing it correctly by centralizing with tools like syslog-ng, rsyslog and LogZilla.

    >"For example, the recent, much discussed kernel.org intrusion involved log file manipulation which was only detected by chance." Oh, you mean they weren't managing their syslog properly so they got screwed and blamed their lack of management on the protocol itself. Ok, yeah, that makes sense.

    They also noted in their paper that "In a later version we plan to extend the journal minimally to support live remote logging, in both PUSH and PULL modes, always using a local journal as buffer for a store-and-forward logic." I can't understand how this could be an afterthought. They are clearly thinking "locally" rather than globally. Plus, if it is eventually to be able to send, what format will it use? Text? Ok, now they are back to their original complaint.

    All of this really just makes me cringe. If RH/Fedora do this, there is no way for people that manage large system infrastructures to include those systems in their management. I am responsible for managing over 8,000 Cisco devices on top of several hundred Linux systems. Am I supposed to log on to each Linux server to get log information?

    BlackSteven_YahHVZXTW

    1. It's RedHat, so if they actually go with it, it will be forced on the vast majority of folks who work with Linux in a corporate environment.

    2. It's RedHat so it doesn't need to be grep-able. They cater to current and former Windows users who call themselves "geeks" because they use an Operating System that is harder to use. (Linux doesn't have to be hard to use -- but don't tell that to RedHat.)

    3. The guys behind this have been behind a number of really hideous ideas. One of them actually proposed dropping /usr. (Functionally dropping /usr by dropping /bin /sbin and /lib.) If I can't unmount /usr, then it doesn't _functionally_ exist.

    4. It'll only be available with the "systemd" startup mechanism. This makes it totally pointless, as there's no way Debian derived distributions will be forcing folks to use that. It'll just be another piece of RedHat goofiness. Since prior RedHat goofiness kept me away from all RedHat derived distributions, this has little impact for me personally.

    5. Many of the issues he has problems with are either generally useful -- human-readable logs readable with any tool -- or mostly an issue when not using syslog-ng and a log vault. Do you think your system was compromised? What does your log vault say?

    6. If this system doesn't have a log vault facility it is only a matter of time before it is circumvented when root is compromised. When root is compromised anything touchable on the system is suspect. Their use of a shifting non-standard data format does nothing to make the data safer and it breaks in-place upgrades. What it means is that someone else will make a shared library that can read/write all versions of their logs and crackers will use that shared library. The fact that it is open-source and written in C means that having one library which links to multiple implementations of their "The Journal" is trivial. (Hello, C preprocessor!)

    7. Remember that -- since this is RedHat -- they have a long history of not recommending in-place upgrades. If this breaks in-place upgrades or makes long-term archival of logs impossible there is no reason to think that it will stop them.

    8. Portable software will need to support both the new system and continue to support SysLog. There's no way the BSDs would be migrating to this, even if the existing viable commercial Unix world decides to promptly oblige the whims of RedHat. More than that, basic syslog functionality can be easily faked on non-Unix-like environments without a syslog daemon and without breaking the log format. This means -- when using syslog -- the documentation for the product needs to mention the alternate location of the log but the actual documentation for log data is the same. This is not so with this new tool, where there's a different data format and different way to access the data.

    Is it a good idea? Absolutely not. Will it stop them? They're RedHat -- they're too big to fail.

    Here's another idea. What's the difference between forcing this additional data into another tool which we will end up using through a text-based interface (so we can grep it) and actually proposing a standard for how to send log data to syslog which will support enhanced storage and retrieval on the backend? You could even offer a convenient C function to take the arguments supported by "The Journal" and pass the data in the correct format to syslog.

    Brendan Edmonds

    One of the big problems with implementing a binary-based log file system is, as mentioned already, that only one tool can read it correctly. And you lose the security benefit as soon as someone finds out how the data is stored; it's the exact same "problem" as with a text-based log file. The other problem is that a lot has to change for programs to work with the new system, e.g., dmesg needs to be changed.

    Also, I cannot see how you can use the same methods as before to clean the logs and to back them up. For example, if you wanted to view that log file on Windows, how would you do it? From a tooling point of view you need the correct security tokens, etc.

    You also have to remember that routers use the syslog protocol to send messages to a Unix system. How will this be handled?

    I don't like the move; it defeats the whole point of UNIX. Everything in UNIX is a text file. Anyone can add to it and anyone can remove lines from it; it is up to the kernel and the program to control who can do what. The point about the file being text-only is moot, as normally only root can write to the file, and a binary file has the same "problem".

    JoeGiles:

    "I don't like the move, it defeats the whole point of UNIX. Every thing in UNIX is a text file"

    O'Rly?

    When was the last time you took a look at wtmp or btmp or saXX files with vi? Not every file in Linux is a text file!

    I think it's a great idea as long as it's executed properly!

    NickG:

    Personally, I love all the changes coming into Fedora. The filesystem is archaic, log files are outright useless in their current implementation (the kernel.org hack proves this), and systemd is chock full of advanced functionality. My only gripe is the possibility of some logs no longer being backwards-compatible.

    Other than that, if PulseAudio and systemd are any indication, we'll hopefully be seeing these changes filter into the other distros as well soon.

    IFMAP_ORG_tw389617356:

    A better approach would be to use an identity-centric open standard such as IF-MAP which is already used by many security vendors to integrate security information from multi-vendor environments.

    See www.if-map.org

    [Nov 25, 2011] Release Notes for SUSE Linux Enterprise Server 11 Service Pack 2

    [Nov 26, 2011] User Access Control Lists

    Sshd user access control lists (ACLs) can be specified in the server configuration file. No other part of the operating system honors this ACL. You can either specifically allow or deny individual users or groups. The default is to allow access to anyone with a valid account. You can use ACLs to limit access to particular users in NIS environments, without resorting to custom pluggable authentication modules. Use only one of the following four ACL keywords in the server configuration file: AllowGroups, AllowUsers, DenyGroups or DenyUsers.

     # Allow only the sysadmin staff
     AllowGroups staff
     # Prevent unauthorized users.
     DenyUsers cheng atkinson

    [Nov 27, 2011] Improving security of ssh via tuning of configuration file

    ssh security can be improved by modifying /etc/ssh/sshd_config configuration file.

    Securing SSH

    Many network services like telnet, rlogin, and rsh are vulnerable to eavesdropping which is one of several reasons why SSH should be used instead. Red Hat's default configuration for SSH meets the security requirements for many environments. However, there are a few parameters in /etc/ssh/sshd_config that you may want to change on RHEL and other Linux systems.

    The chapter Restricting System Access from Servers and Networks shows how direct logins can be disabled for shared and system accounts including root. But it's prudent to disable direct root logins at the SSH level as well.

    PermitRootLogin no
    Also ensure that privilege separation is enabled, where the daemon is split into two parts: a small part of the code runs as root and the rest runs in a chroot jail environment. Note that on older RHEL systems this feature can break some functionality; for an example see Preventing Accidental Denial of Service.
    UsePrivilegeSeparation yes
    Since SSH protocol version 1 is not as secure, you may want to limit the protocol to version 2 only:
    Protocol 2
    You may also want to prevent SSH from setting up TCP port and X11 forwarding if you don't need it:
    AllowTcpForwarding no
    X11Forwarding no
    Ensure the StrictModes directive is enabled which checks file permissions and ownerships of some important files in the user's home directory like ~/.ssh, ~/.ssh/authorized_keys etc. If any checks fail, the user won't be able to login.
    StrictModes yes
    Ensure that all host-based authentications are disabled. These methods should be avoided as primary authentication.
    IgnoreRhosts yes
    HostbasedAuthentication no
    RhostsRSAAuthentication no
    Disable sftp if it's not needed:
    #Subsystem      sftp    /usr/lib/misc/sftp-server
    After changing any directives make sure to restart the sshd daemon:
    /etc/init.d/sshd restart

    [Nov 27, 2011] insertion sort

    NIST page

    Definition: Sort by repeatedly taking the next item and inserting it into the final data structure in its proper order with respect to items already inserted. Run time is O(n²) because of moves.

    Also known as linear insertion sort.

    Generalization (I am a kind of ...)
    sort.

    Specialization (... is a kind of me.)
    binary insertion sort.

    See also gnome sort.

    Note: Sorting can be done in place by moving the next item into place by repeatedly swapping it with the preceding item until it is in place - a linear search and move combined. This implementation is given in C. J. Shaw and T. N. Trimble, Algorithm 175 Shuttle Sort, CACM, 6(6):312-313, June 1963.

    If comparing items is very expensive, use binary search to reduce the number of comparisons needed to find where the item should be inserted, then open a space by moving all later items down one. However a binary search is likely to make this not a stable sort.
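The swap-based method in the note above, and the binary-search variant just described, can be sketched as follows (an illustrative sketch in Python, not the Shaw and Trimble original):

```python
import bisect

def insertion_sort(a):
    """In-place insertion sort: repeatedly swap each item backwards
    until it reaches its place among the already-sorted prefix."""
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

def binary_insertion_sort(a):
    """Binary search finds the insertion point in O(log n) comparisons,
    but later items still shift one slot each, so moves stay O(n^2)."""
    for i in range(1, len(a)):
        x = a.pop(i)                              # take the next item out
        a.insert(bisect.bisect_right(a, x, 0, i), x)  # insert into sorted prefix
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))         # [1, 2, 3, 4, 5, 6]
print(binary_insertion_sort([3, 1, 4, 1, 5, 9]))  # [1, 1, 3, 4, 5, 9]
```

As the definition notes, the binary-search variant only pays off when comparisons are much more expensive than moves, since the O(n²) shifting remains.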

    Author: PEB

    Implementation

    An implementation (Java) due to Sedgewick and Wayne (search for Insertion sort); Algorithms and Data Structures' explanation and code (Java and C++); other implementations may be available through the Stony Brook Algorithm Repository (Sorting); also Scheme and Fortran implementations.

    More information

    A demonstration of several sort algorithms, with particular emphasis on insertion sort; more demonstrations; an animation (Java) of insertion sort.




    Copyright © 1996-2014 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. The site uses AdSense, so you need to be aware of Google's privacy policy. Original materials' copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine. This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

    You can use PayPal to make a contribution, supporting hosting of this site with different providers to distribute and speed up access. Currently there are two functional mirrors: softpanorama.info (the fastest) and softpanorama.net.

    Disclaimer:

    The statements, views and opinions presented on this web page are those of the author and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

    Last modified: January, 20, 2013