
Is DevOps yet another "for profit" technocult
proposing "salvation" from the current IT datacenter difficulties?

A slightly critical overview of some questionable aspects of DevOps as a new smoke screen for outsourcing IT and collecting higher management bonuses

Version 3.0 (Apr 7, 2021)

News

Hybrid Cloud as Alternative to "Pure" Cloud Computing

Recommended Links

Continuous delivery -- running on the blade edge

Agile -- Fake Solution to an Important Problem

Infrastructure as Code
Unix Configuration Management Tools
Unix System Monitoring
Enterprise Job schedulers
Questionable costs efficiency of pure cloud
Lock in related problems
Cloud Mythology
Preventing Vendors From Playing The Blame Game
Bandwidth communism
Problem of loyalty
Issues of security and trust in "pure" cloud environment
Typical problems with IT infrastructure
Heterogeneous Unix server farms
Dictionary of corporate bullshit
Dispelling IT Management Myths
Fundamental Absurdity of IT Management
Conway Law
Brooks law
Managing Managers Expectations
Troubleshooting Remote Autonomous Servers
Configuring Low End Autonomous Servers
Review of Remote Management Systems
Virtual Software Appliances
Real Insights into Architecture Come Only From Actual Programming
RHCSA
High Demand Cults
Leaders Practices
Groupthink
Pollyanna creep
Machiavellians Manipulators Tricks
Disciplined Minds
Belief-coercion in high demand cults

Webliography of problems with "pure" cloud environment

Bozos or Empty Suits (Aggressive Incompetent Managers)
The Peter Principle
Sysadmin Horror Stories
Humor
Etc
The term cult usually refers to a social group defined by its religious, spiritual, or philosophical beliefs, or its common interest in a particular personality, object or goal. The term itself is controversial and it has divergent definitions in both popular culture and academia and it also has been an ongoing source of contention among scholars across several fields of study.[1][2] In the sociological classifications of religious movements, a cult is a social group with socially deviant or novel beliefs and practices...

... ... ...

...In 1990 Lucy Patrick commented: "Although we live in a democracy, cult behavior manifests itself in our unwillingness to question the judgment of our leaders, our tendency to devalue outsiders and to avoid dissent. We can overcome cult behavior, he says, by recognizing that we have dependency needs that are inappropriate for mature people, by increasing anti-authoritarian education, and by encouraging personal autonomy and the free exchange of ideas."[108]

Cult - Wikipedia

Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Group members try to minimize conflict and reach a consensus decision without critical evaluation of alternative viewpoints by actively suppressing dissenting viewpoints, and by isolating themselves from outside influences.

Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and there is loss of individual creativity, uniqueness and independent thinking. The dysfunctional group dynamics of the "in-group" produces an "illusion of invulnerability" (an inflated certainty that the right decision has been made). Thus the "in-group" significantly overrates its own abilities in decision-making and significantly underrates the abilities of its opponents (the "out-group"). Furthermore, groupthink can produce dehumanizing actions against the "out-group".

Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) play into the likelihood of whether or not groupthink will impact the decision-making process.

Groupthink - Wikipedia

 

WARNING: Readers too sensitive to spelling and grammar errors are strongly advised to stop reading the paper at this point.


Introduction

 

Fad: “an intense and widely shared enthusiasm for something, especially one that is short-lived.”

Hoopla: "excitement surrounding an event or situation, especially when considered to be unnecessary fuss."

Companies often spend a lot of money on questionable IT technology. It happens for all sorts of reasons. More often than not they are chasing the latest shiny fad. If management bonuses are involved the fad is almost irresistible, and in this case very few organizations track what they are getting in terms of business outcomes for their investment in the fad. Are those people wasting money? Yes, but that's company money, and the size of executive bonuses outweighs other considerations. Maximization of executive bonuses is always a win-win strategy for higher management under neoliberalism. For a neoliberalized company, outsourcing of IT is the most reliable way to achieve that. DevOps provides a nice smokescreen to hide the sad truth that its adoption is more about outsourcing than about the technology in question.

And in case of a dramatic failure you can always blame an external consultant for the fiasco. Everybody knows the benefits of the cloud -- lower personnel costs -- and the technology is still gradually improving. But many companies end up spending way too much to make the move online, because they don't realize what the shift entails and what a hit it means for the hardware "knowledge sphere". Without people with critical knowledge of hardware (which can be acquired only by running your own datacenter), companies often fall prey to unscrupulous consultants. For such companies the cloud means vendor lock-in which they can't escape.

But many companies end up spending way too much to make the move online, because they don't realize what the shift entails.

You have all read about the advantages typically listed for the DevOps methodology: it supposedly helps you deliver products faster, improves company profitability (of course ;-), ensures continuous integration (without asking the key question: is this a good idea?), removes roadblocks from your releases (magically, without changing the level of incompetence of the management ;-), and gives you a competitive advantage (without changing the quality of developers). Those are all signs that this is yet another software development methodology fad (DevOps – Fad or Here to Stay? - DZone DevOps):

In software development, as trends gain more popularity, they gain new adopters at ever-increasing rates. Often, it is simply because that’s what everybody seems to be doing and there is a Fear Of Missing Out (FOMO).  This leads many organizations to get swept up in fads, some of which are so short-lived that the fad has passed before any real benefits are recognized. Then on to the next development craze.

Today, DevOps is the trend that is grabbing headlines and attracting all the attention. While some organizations have resisted making a DevOps transition – either due to confusion about what DevOps entails, concerns that it may just be a passing fad, or a simple aversion to change – other organizations have jumped in with both feet.

Those claims put all seasoned IT professionals on alert, because they are very similar to the claims of previous IT fads that are now completely or half-forgotten (DevOps is a Myth):

In the same way we see this happening with DevOps and Agile. Agile was all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFE and LESS. But Agile didn’t deliver on its promise of better life. Or rather – it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy. 

Typically fads like DevOps run for a decade or slightly more; after that they are rejected and forgotten. We are probably in the middle of this path right now, as all this DevOps hoopla started around 2015. So a reasonable guess is that somewhere after 2025 DevOps will be open to critical review. In this sense this paper is premature, and as such the acceptance of my arguments will be low.

Typically fads like DevOps run for a decade or slightly more; after that they are rejected and forgotten. We are probably in the middle of this path right now, as all this DevOps hoopla started around 2015.

If we are to believe DevOps advocates, we should accept that DevOps magically invalidates the Peter Principle, Parkinson's law, and several other realistic assessments of the corporate environment (and specifically the IT environment, aptly illustrated by Dilbert cartoons) and creates a software paradise on Earth, a kind of "software communism" as in "From each according to his ability, to each according to his needs - Wikipedia" :-)

In reality, it often creates the environment aptly depicted in The Good Soldier Svejk (the book that the NYT put in the category of Greatest Books Never Read -- or hardly read by Americans; it served as an inspiration for "Catch 22" and Kurt Vonnegut's "Slaughterhouse-Five").

For US readers I only want to note that there are some Svejkian qualities/traits in Heller's Yossarian, the ordinary guy trapped in the twisted logic of army regulations. Yossarian is smart enough to want to escape the war by pretending he's crazy -- which makes him sane enough to be kept in the army.

Hasek's piercing satire describes the confrontation of this ordinary man with the chaos of overcentralized organizations, even if that chaos is cloaked in military discipline. And to a certain extent this situation replicates itself, on a smaller scale, in large datacenters. A similar tragedy is replayed as a farce in large corporate environments. Including the identity craze -- quite similar to the tangled politics of the Czechs, Austrians and Hungarians in the Habsburg empire ;-) And the problem of incompetence of management still looms large. Under neoliberalism, high-level management is occupied by bean counters instead of engineers. The results can be seen in the fate of IBM and HP, to name a few companies.

The Good Soldier Svejk environment is always present to some extent in a large datacenter, where absurdities loom large and, as the Peter Principle suggests, it is often easier, and involves less discussion, to get approval for a $10M project (a move to Azure, in one example here) than for a $100K one (for example, upgrading outdated networking equipment). To make the situation worse, project leaders are often selected in such a way that they match the justification given to Svejk: "He may be an idiot but he's our superior and we must trust that our superiors know what they are doing; there's a war on." The case of an idiot manager (often an MBA or a bean counter), or worse a sociopath in a corner office, or a control freak in a management role in the large datacenter is a topic that is waiting for a new Hasek.

There were at least a half-dozen previous attempts to achieve nirvana in the corporate datacenter and, especially, in enterprise software development, starting from the "verification revolution" initiated by Edsger W. Dijkstra, who was probably one of the first "cult leader" style figures in the history of programming. It all basically comes down to this: some IT companies (for some, often short, period of time) somehow achieve better results than others in software development. They promote their success as a sign that they have adopted some new magic methodology. Later such a company often flops, but the methodology hoopla still lingers in the air for some time, like the Cheshire cat's smile.

The role of religious myths in IT

Religious myth and religious thinking play a tremendous role in IT, though this is rarely discussed. Methodologies like DevOps, Agile, etc. don't reflect reality. They are sets of carefully constructed religious myths. But they are as useful to management as any religion, as they create a way to attribute any accidental and temporary success to the management and to avoid the sacking of management. They also help to soothe customer frustration when the company is forced to admit that the software delivered is a mess and does not behave as the customer expected. It's a pretty ingenious neoliberal way of pushing the responsibility down from the most powerful to the least powerful -- every time ("they did not get DevOps and that's why we are having problems"). It is an easier path than admitting that the quality of management was abysmally low, that the technical competence of managers is gone, and that the software was developed in a chaotic and unprofessional manner.

Due to this capability, DevOps is an important secular religion (techno-cult), which changes the way software is delivered in a corporation (often by completely botching the whole process), but as a religion it greatly helps to ignore the real reasons for failures and to concentrate on myths instead. The emergence of a DevOps clergy helps greatly.

OK, containers are neat (pioneered by FreeBSD in the 1990s and later adopted and extended to their modern form by Sun in Solaris 10 (2005)). Putting your stuff in the cloud can, for certain types of workloads (mainly "episodic" workloads with rare high peaks and long low-intensity valleys, such as genomic decoding, social sites, store fronts, etc.), be neat (pioneered by Amazon two decades ago). And even Ansible, which in a way can be viewed as a reimplementation of IBM JCL on a new level, can be very useful for the same reasons JCL was and remains useful: many sysadmin tasks are simple sequential tasks (waterfall model), and a mechanism in which the failure of one step means the failure of the whole task is often the only control mechanism needed.
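To make the JCL analogy concrete, here is a minimal sketch of that control model in plain bash (the paths and service names below are illustrative assumptions): a strictly sequential job where the first failed step aborts the whole run.

    #!/bin/bash
    # JCL-like job: steps run strictly in sequence; the first failure aborts the run
    set -euo pipefail

    rsync -a /etc/httpd/ /backup/httpd/                # step 1: back up current config
    cp /staging/httpd.conf /etc/httpd/conf/httpd.conf  # step 2: deploy new config
    apachectl configtest                               # step 3: validate; a failure stops here
    systemctl reload httpd                             # step 4: reached only if steps 1-3 succeeded

An Ansible playbook with default settings behaves the same way: tasks run in order, and the play stops for a host at the first failed task.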

But the key reason for all this hoopla around DevOps is that cutting infrastructure costs and outsourcing are honey for higher management (as they drive up their bonuses). And this pretty mercantile consideration is the real driver of DevOps: it is prudent to view it as yet another smoke screen for outsourcing (in accounting terms, shifting CapEx to OpEx, not much more) hidden in the religious zeal of adopting a new technology.

As is the case with many cults, the details of this "ideology" are fuzzy, an exact definition is non-existent, and the only solid agreement within the DevOps community is that this is "a good thing". Nobody agrees about the details, or on the precise definition. This should be clear to anyone who reads articles about DevOps. First of all, the typical level of such articles is quite mediocre, or worse. They just do not contain any bright ideas. By and large they are nothing but marketing materials for certain companies and unscrupulous academics who want to ride the outsourcing bandwagon under an IT smoke screen.

The DevOps definition is fuzzy by design, as this is mainly a marketing campaign for outsourcing IT infrastructure

Still, several large and successful companies such as Netflix and Amazon supposedly practice DevOps (and, in the case of Netflix, heavily advertise it). The question is why? In the case of Netflix, probably because the company is nothing more than a special portal for viewing movies, and claiming that they practice some new shiny technology probably helps in marketing their product. They have a very average, I would say mediocre, portal, which in no way represents anything like the state of the art. But for them the ability to run on cloud servers is critical, as they have peak loads which are concentrated in a few hours of the day and on particular days. In the case of Amazon, they use the cloud for their own needs too, so the ability to sell parts of it helps the bottom line. It is a nice and profitable side business for them. And the main role of the Amazon datacenter is to run the storefront, which by definition has its peak hours, so the ability to expand computational facilities during those hours is critical, and here the cloud fits the bill.

While there are multiple definitions of DevOps, a typical definition usually includes the usage (typically in the cloud environment) of the following methodologies (DevOps - Wikipedia):

  1. Code -- code development and review, source code management tools, code merging. NOTE: source management tools originated in the early 1970s. Git (which is based on the ideas of BitKeeper) appeared in 2005.
  2. Build -- continuous integration tools, build status. NOTE: used since the early 1970s; originated on mainframes as overnight builds.
  3. Test -- continuous testing tools that provide feedback on business risks. NOTE: a fuzzy concept.
  4. Package -- artifact repository, application pre-deployment staging. NOTE: software packaging became popular in Solaris in the 1990s. The Red Hat RPM format has been available since 1997.
  5. Release -- change management, release approvals, release automation. NOTE: change management has been in place in large mainframe software projects since the late 1960s.
  6. Configure -- infrastructure configuration and management, infrastructure-as-code tools. NOTE: good tools definitely have value, but it is difficult to do them right, and so far DevOps has not produced any innovative ideas in this area. Parallel execution tools originated in Unix in the late 1970s. CFEngine was written in 1993 as a tool for managing large numbers of Unix workstations running heterogeneous operating systems, e.g. Solaris, Linux, AIX, Tru64 and HP-UX.
  7. Monitor -- application performance monitoring, end-user experience. NOTE: performance monitoring and optimization have been in place since at least the late 1960s. Knuth's groundbreaking article An Empirical Study of FORTRAN Programs was published in 1971. The importance of the end-user experience was discussed in Fred Brooks' The Mythical Man-Month, published in 1975, where he points out the importance of fighting the "second system" effect: the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence (feature creep). This is mainly due to the desire to provide the consumer with a more useful or desirable product in order to increase sales or distribution, without understanding the real needs of the consumer. Once the product reaches the point at which it does everything the customer needs, additional bells and whistles increase the cost of development and lessen customer satisfaction and efficiency. The fear of losing customers by sticking to the old version, and a perceived lack of improvement, also play some role in feature creep.

Most of those are "no-brainers" and should be used by any decent software development organization, but some are questionable: more frequent releases are a good idea only for minor releases and bug fixes, never for major releases. They tend to freeze the current (often deficient) architecture and prevent necessary architectural changes.

More frequent releases are a good idea only for minor releases and bug fixes, never for major releases. They tend to freeze the current (often deficient) architecture and prevent necessary architectural changes.

A very important tip here is that the DevOps definition is fuzzy by design, as this is mainly a marketing campaign for outsourcing IT infrastructure (and a very successful one), not a technological revolution (although technological progress, especially in hardware and communications, makes some things possible now that were unthinkable before 2000; progress in server remote control, the web and smartphones alone were the real game changers, much bigger than all the DevOps hoopla).

That means that you, as a sysadmin affected by this marketing campaign, have a certain flexibility with what to call DevOps ;-) In other words, you can include something that you like and claim that it is essential for DevOps. It's a bad cook who can't serve the same cutlet under 12 different names ;-) And you can even teach this thing to lemmings as a part of DevOps training, joining the cult. You can't even imagine how gullible DevOps enthusiasts are and how badly they want additional support.

TIP: The DevOps definition is quite fuzzy. It is mostly a "this is a good thing" type of definition. So you can include something that you like and claim that it is essential for DevOps. It's a bad cook who can't serve the same cutlet under 12 different names ;-) And you can even teach this thing to lemmings as a part of DevOps training.

Nothing is new under the Sun ;-)

Out of those seven "sure" things listed above, continuous delivery is probably the most questionable idea, as sticking to it undermines efforts to revise the architecture of large software packages. It essentially tends to "freeze" the current architecture "forever". As such it is a very bad thing, a fallacy.

Moreover, "continuous delivery" is nothing new, as most software development projects already provide access to beta versions of the software. Providing beta versions is just another name for continuous delivery :-) Also, most organizations have practiced nightly builds for ages, which helps to keep developers from deviating too much from the common core.

But while the release of beta versions is a useful tool that helps to prepare better for the eventual rollout and provides additional feedback from beta users, only "beta addicts" use them in a production environment. Continuous delivery makes all users beta testers. If the goal of this whole exercise is to engage users in testing and detecting bugs, you need to assume that the users are loyal to your company and the product and really want to become beta testers, because after a while they can defect.

But it has one useful effect: as the product constantly changes, users are forced to keep up with the modifications, binding them to this modern variant of the Red Queen's race. Few would ever question the absurdity of the whole idea.

Nightly builds and beta versions are just other names for continuous delivery :-) You can use this fact to your advantage.

The same reservations are true for automated testing, which is now sold under the umbrella of "continuous testing". Compiler developers have used it since the 1960s (the IBM/360 compiler development efforts), and Perl developers created an automated test suite for this purpose decades ago. So again, nothing new under the sun.

The key observation here is that while it is "a very good thing", it is not that easy to implement properly outside several narrow domains, such as programming languages, web interfaces and the like. And again, in compiler development this technology has been used since the '60s. Tools that use command output to drive further processing go back decades (this approach was pioneered by Expect), automated testing of web interfaces started in the mid-1990s (when several powerful commercial tools were created and marketed), etc.

Most constructive ideas associated with cloud computing were already used in computational clusters since the late 1990s or so. They were reflected in Sun's concept of the "grid", which originated with their purchase in 2000 of Gridware, Inc., a privately owned commercial vendor of advanced computing resource management software with offices in San Jose, Calif., and Regensburg, Germany. Later that year, Sun offered a free version of Gridware for Solaris and Linux, and renamed the product Sun Grid Engine (the Sun Grid cloud service was launched in March 2006).

Those ideas revolve around Sun's earlier slogan "the network is the computer", and include the idea of using a central management unit for a large group of servers or the whole datacenter (like the "headnode" in a computational cluster), central monitoring, logging, and parallel execution tools to deliver changes to multiple servers simultaneously.

Similar ideas were developed by Tivoli under the name "system management software services" starting from 1989 (IBM acquired Tivoli in 1996). According to Wikipedia (Tivoli Software):

Service management segments related to the Tivoli brand software and services included the following:

The most dangerous aspect of DevOps is the attempt to eliminate,
or at least diminish, the role of sysadmins in the datacenter

The other important (and rather dangerous) aspect of DevOps is the attempt to eliminate, or at least diminish, the role of sysadmins in the datacenter. Of course, developers have always dreamed of getting root privileges. That simplifies many things and cuts a lot of red tape ;-) But the problem here is that with the overcomplexity of modern Linux one person can't be a master of two trades. He can barely be a master of one. Major enterprise Linux distributions such as Red Hat and SUSE are just tar pits of overcomplexity which can swallow naive developers who think that they can cross them alive ;-)

The idea of a DevOps engineer who wears two hats, that of a sysadmin and that of a programmer, is a fallacy. Each field is now complex enough to require specialization. There's no way to measure whether one person is more of a DevOps engineer or not, because the balance between knowledge of a particular programming toolset and knowledge of Linux as an operating system is very tricky and depends on job responsibilities.

Most sysadmins worth their name are quite proficient in scripting (several key developers/promoters of Perl were sysadmins; now many know Python at a pretty decent level). Therefore they can instantly be renamed DevOps engineers. That allows you to join the techno-cult, but changes nothing. The problem is that most sysadmins do not have enough time to study their favorite scripting language in depth and operate with the small subset necessary for the tools they create or maintain.

The situation with programmers is even worse. In most cases they only imagine that they know Linux. In reality they know such a small subset of sysadmin knowledge that many of them are unable to pass even a basic sysadmin certification exam such as Red Hat's RHCSA. This fact can be profitably exploited, giving you an opportunity for meaningful activity (teaching developers an RHCSA course) under the "if you can't beat them, join them" flag, instead of the useless and humiliating beating of the drum and marching with the DevOps banner -- participating in senseless DevOps training sessions. I once taught elements of Bash scripting to a group of researchers under the flag of DevOps training :-)

Here is one tip on how to deal with ambitious (and usually reckless) developers who try to obtain root access to the servers under the flag of DevOps. You should say that yes, you are glad to do this, but only after they pass such an elementary (for them) test as the RHCSA certification; you are confident that for such a great specialist it is not a big deal. Usually the request ends at this point ;-) If you strongly dislike the person you can add the Bash test from LinkedIn to the mix, or something similar :-)

TIP: If ambitious (and usually reckless) developers try to obtain root access to the servers under the flag of DevOps, say that yes, you will be glad to do this, but only after they pass the RHCSA certification.

In reality, what kills the idea once and for all is the complexity of modern operating systems. With the current complexity of RHEL (as of RHEL 7), the idea that a regular software developer can master this level of complexity and function as a full-fledged sysadmin is completely fallacious, unless you can implant a second head into the guy (spending substantial money on classes and training can help, but a lot of sysadmin skills are based on raw experience, including painful blunders and such).

This is especially true for handling disasters in the datacenter -- SNAFUs in military jargon. This is where you need all the skills you have, as Red Hat support is often useless and little more than a human-assisted query into their Knowledgebase. Of course, VMs and containers create new opportunities to go back to the previous version, but this is still the kind of situation where you typically need to find the root cause, and this is not an easy task; it is often connected with a blunder committed by one of the sysadmins or developers acting on incomplete or erroneous information, or on a wrong assumption.

Also, the idea that sysadmins can raise their programming skills to the level of professional developers in, say, Python does not take into account that many sysadmins became sysadmins because they did not want to become developers, and are happy writing small scripts in bash and AWK.

Additional layers of complexity in the form of Ansible or Puppet help only in situations when everything works OK. Please note that you can use Ansible as a "parallel ssh" for the execution of "ad hoc" scripts, so you can adopt it without much damage.

Any tool like Ansible does add a level of indirection. As soon as a serious problem arises you need people capable of going down to the level of the kernel and seeing what is happening. How a person writing scripts in Python can do this is anybody's guess. As Red Hat support was by and large destroyed, and now, by default, is not much more than a query to the Red Hat tech database, you face serious problems with downtime. As an experiment, create an LVM volume consisting of two disk arrays (PV1 and PV2). Then fail two disks in the RAID5 array of PV2 (by simply removing them). Now open a ticket with Red Hat and see how Red Hat will help you to restore the data on the LVM volume (PV1 is still intact).
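For the curious, here is a sketch of that experiment using Linux software RAID as a stand-in for the hardware arrays; all device names are illustrative assumptions, and this destroys any data on them, so use scratch disks only.

    # PV1 = a plain disk; PV2 = a RAID5 md array built from three scratch disks
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
    pvcreate /dev/sdb /dev/md1
    vgcreate vg_test /dev/sdb /dev/md1
    lvcreate -l 100%FREE -n lv_test vg_test
    mkfs.xfs /dev/vg_test/lv_test

    # Now "remove" two members of the RAID5 set; the array (and PV2) is gone
    mdadm --manage /dev/md1 --fail /dev/sdc /dev/sdd

    # PV1 (/dev/sdb) is intact, but the LVM volume spans both PVs;
    # at this point, file a ticket and watch the Knowledgebase queries roll in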

Switching to the use of VMs solves certain problems with maintenance and troubleshooting (availability of a baseline to which you can always return) while creating other problems due to shared networking and memory on such servers (which makes a VM mostly a glorified workstation as far as computing power goes; but with the current capabilities of workstations this is often more than enough).

So this trend can help (although VMware will appropriate all your savings in the process ;-), but here again the problem of the increased level of complexity might bite you sooner or later. The same is true for Azure, Amazon cloud, etc. Also, for anything other than rare high-peak, low-trough workloads and experimentation, they are prohibitively expensive -- even for large corporations, which can negotiate special deals with those providers.

As a side note I would say that, as of version 7, RHEL is over the heads of even many sysadmins when it comes to troubleshooting serious problems. IMHO it was a major screw-up: adding systemd (a clearly Apple-inspired solution) means that many old skills (and books) can't be reused. New skills need to be acquired by sysadmins, and this is neither a cheap nor a quick process. They now try to mask this complexity with Ansible and the Web console, but that does not solve the problems, it only sweeps them under the carpet.

Naive developers who think that system administration is "nothing special" and eagerly venture into this extremely complex minefield on the basis of a college course in Ubuntu and running an Ubuntu box at home very soon learn the meaning of the terms SNAFU and "horror story". As in wiping, say, a couple of terabytes of valuable corporate data with one click (or with one simple script). In this case DevOps converts to Oops...

When a couple of terabytes of valuable corporate data are wiped out with one click (or one simple script) by newly minted sysadmins converted from developers, DevOps converts to Oops...

You can just cut the red tape by providing the most ambitious and capable developers (and only them; you need to be selective and institute a kind of internal exam for that) with root access to virtual instances, as crashing a virtual machine is a less serious event than crashing a physical server -- you always have (or should have) a previous version of the VM ready to be re-launched in minutes.

But beyond that point, God forbid. Root access to a real medium-size or large server (say a 2-socket or 4-socket server with 32-128GB of RAM and 16TB or more of storage) running important corporate applications should be reserved for people who have at least an entry-level Linux admin certification such as RHCSA, and (which is especially important) hands-on expertise with backups.

During his career each sysadmin goes through his own set of horror stories, but "missing backup" (along with creative uses of rm) is probably the leitmotif of most of them. Creating new horror stories in a particular organization is probably not what higher management, with their quest for bonuses and the neoliberalization of IT (the neoliberal "shareholder value" mantra means converting IT staff into contractors and outsourcing a large part of the business to low-cost countries, say, Eastern European countries), meant by announcing their new plush DevOps initiative.

Each sysadmin goes through his own set of horror stories, but "missing backup" (along with creative uses of rm) is probably the leitmotif of most of them. Creating new horror stories in a particular organization is probably not what higher management, with their quest for bonuses and the neoliberalization of IT, meant by announcing their new plush DevOps initiative.

A good understanding of the Linux environment now requires many years of hands-on, ten-hours-a-day work experience (exactly how many years, of course, depends on the person). The minimum for reaching "master" level in a given skill is estimated to be around 10,000 hours, and the earlier you start, the better. Please note that many sysadmins came from a hobbyist background and started tinkering with hardware in high school or earlier; so, a couple of years after graduating from college, they often have almost ten years of experience. And taking into account the Byzantine tendencies of mainstream programming languages (and these days you need to know several of them, say bash, Python, and JavaScript or Java), 30,000 hours is a more reasonable estimate (one year is approximately 3,000 working hours). Which gives the formula 4+6 (four years of college and 5-6 years of on-the-job self-education) to get up to speed in any single specialty (either programmer or system administrator). When you need to learn the second craft the process can, of course, go faster (especially for programmers), but the 10,000-hours rule still probably applies (networking skills alone probably need that amount of hours).

The idea of giving root access to an untrained developer who has never passed, say, the RHCSA certification is actually a much bigger fallacy than people assume. If you want to be particularly nasty, in BOFH fashion, you can give root access to a business-critical server to several developers (of course, only under pressure and with written approval from your management, if you are not suicidal). If you survive the fallout from the subsequent SNAFU -- for example, 10TB of genomic data wiped out with one rm command (and you will be blamed, so you need to preserve all the emails with approvals) -- then you not only can remove all administrative access from those "victims of overconfidence," but can get the management to officially prohibit this practice. At least until the memory of this accident fades and another datacenter administration decides to repeat the old mistakes.

Networking knowledge does not come automatically with DevOps hoopla

The key slogan of the DevOps movement is "all power to the developers" ;-) But while the idea is noble, this goal is completely unrealistic. No amount of automation can replace the role of a specialist. Ansible and all the other fashionable toys are good when the system is running smoothly. As soon as you hit a major software or hardware problem you need to operate at a lower level of abstraction and get into the nitty-gritty details of the configuration; at that point they are not only useless but harmful.

As a test, add an echo statement like echo "Hello DevOps" to the .bashrc of your account, and then ask somebody who is a DevOps zealot to help troubleshoot the resulting problem with scp (scp stops working, but ssh to such a box still works). But, at least, Ansible is useful for automating routine tasks.
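For reference, here is the trap and the conventional guard (a minimal sketch; the echoed text is of course arbitrary):

    # In ~/.bashrc -- unconditional output corrupts the scp/sftp protocol stream:
    echo "Hello DevOps"               # breaks scp, while interactive ssh still works

    # The conventional fix: produce output only in interactive shells
    case $- in
        *i*) echo "Hello DevOps" ;;   # interactive shell: output is safe
        *)   ;;                       # non-interactive (scp, rsync, cron): stay silent
    esac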

Although even in this area it is overrated and in reality does not provide much gain over the much simpler and more reliable pdsh. All those examples with the creation of accounts are pretty artificial and do not pass the smell test. But some applications using Ansible still might be useful (for example, a hardware inventory application), but only in the sense that "with enough thrust pigs can fly". See Unix Configuration Management Tools.

In any case, as a smoke screen to protect yourself from fake changes (which can destroy any functioning datacenter) desired by DevOps zealots, an Ansible deployment makes certain sense. Formally it is clearly a part of the DevOps toolkit. Initially Ansible can be used strictly as a clone of pdsh, until the need for more complex functionality arises.
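A minimal sketch of that "pdsh clone" usage (the host range and the inventory group "webservers" are illustrative assumptions):

    # classic pdsh: run a command on ten nodes in parallel
    pdsh -w node[01-10] 'uptime'

    # the Ansible ad-hoc equivalent, no playbook required (-f sets parallel forks)
    ansible webservers -m shell -a 'uptime' -f 10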

Even in the cloud, as soon as you try to do something more or less complex, you need a specialist completely devoted to learning this part of the infrastructure. One problem that I have noticed is that most developers have a very weak (often close to zero) understanding of networking. That's their Achilles' heel, and that's why they often suggest, and sometimes even implement, crazy WAN-based solutions (aka cloud solutions).

Most developers have a very weak (often close to zero) understanding of networking. That's their Achilles' heel, and that's why they often suggest, and sometimes even implement, crazy WAN-based solutions (aka cloud solutions) replacing tried and true internal applications.

According to Wikipedia, the Fallacies of Distributed Computing are a set of common but flawed assumptions made by programmers in the development of distributed applications. They originated with Peter Deutsch (who was at the time at Sun Microsystems) and his "eight classic fallacies", describing false assumptions that programmers new to distributed applications typically make.

They can be summarized as following (Wikipedia):

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn't change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

There is also a similar, but more entertaining, document: RFC 1925, known as The Twelve Networking Truths.

The Fundamental Truths

(1) It Has To Work.

(2) No matter how hard you push and no matter what the priority, you can't increase the speed of light.

(2a) (corollary). No matter how hard you try, you can't make a baby in much less than 9 months. Trying to speed this up *might* make it slower, but it won't make it happen any quicker.

(3) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.

(4) Some things in life can never be fully appreciated nor understood unless experienced firsthand. Some things in networking can never be fully understood by someone who neither builds commercial networking equipment nor runs an operational network.

(5) It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.

(6) It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it.

(6a) (corollary). It is always possible to add another level of indirection.

(7) It is always something

(7a) (corollary). Good, Fast, Cheap: Pick any two (you can't have all three).

(8) It is more complicated than you think.

(9) For all resources, whatever it is, you need more.

(9a) (corollary) Every networking problem always takes longer to solve than it seems like it should.

(10) One size never fits all.

(11) Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.

(11a) (corollary). See rule 6a.

(12) In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

Those WAN blunders cost corporations serious money with zero or negative return on investment. It is easy to suggest transferring, say, 400TB over the Atlantic for a person who does not understand the size of the pipe between datacenter No. 1 and datacenter No. 2 in the particular corporation. Or to implement a monolithic 5PB "universal storage system" which becomes a single point of failure, despite all IBM's assurances that GPFS is indestructible and extremely reliable; in this case a single serious GPFS bug can produce a failure after which you can kiss several terabytes of corporate data goodbye. If you are lucky, most of it was useless or duplicated somewhere else, but still...
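A back-of-the-envelope check makes the 400TB example concrete (all numbers below are illustrative assumptions, not measurements):

    # How long does 400TB take over a WAN link?
    bytes=$((400 * 10**12))    # 400 TB payload
    link_bps=$((10**9))        # assumed dedicated 1 Gbit/s transatlantic link
    eff=80                     # assumed 80% effective throughput
    secs=$(( bytes * 8 * 100 / (link_bps * eff) ))
    echo "$(( secs / 86400 )) days"   # prints 46 -- a month and a half, best case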

Black swans and DevOps

The term "black swan" is the name for a very rare outlier event that has a severe impact. In production systems, these are problems with software or hardware that you do not suspect exist until it is way too late. When they strike, they can't be fixed quickly and easily by a rollback or some other standard response from your vendor's tech-support playbook. They are the events you tell new sysadmins about years after the fact.

Additional and more complex automation increases the probability of black swans, it does not diminish it -- "complex systems are intrinsically hazardous and brittle systems." Once you have multiple automation systems, their potential interactions become extremely complex and impossible to predict. For example, your automation system can restart servers that were shut down for maintenance in the middle of a firmware update.

Additional and more complex automation increases the probability of black swans, it does not diminish it -- "complex systems are intrinsically hazardous and brittle systems." For example, your automation system can restart servers that were shut down for maintenance in the middle of a firmware update.

It is easy for a developer to buy a two-socket server with a Dell professional support configuration included and then, four years down the road, discover that RAID5 needs monitoring and that the failure of two disks in a RAID5 configuration is fatal. As well as the fact that Dell professional services did not include a hot spare in the RAID5 configuration, because the developer, in his naivety, demanded as much disk space as possible (high-end Dell controllers are capable of supporting RAID5 with a spare, which is a more reliable configuration than plain-vanilla RAID5, as even the failure of two disks does not lead to the destruction of the disk array and the potential loss of data). And if RAID5 has lost two disks, you are probably on your way to enriching data recovery companies such as OnTrack ;-)
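The monitoring this story calls for does not need to be fancy; here is a minimal cron-style sketch, assuming a Dell PERC controller with Dell OMSA (omreport) installed (the controller number and mail recipient are assumptions):

    #!/bin/bash
    # Daily RAID virtual-disk health check; mails root if anything is not OK
    report=$(omreport storage vdisk controller=0 2>&1)
    if echo "$report" | grep -Eq 'Degraded|Failed|Critical'; then
        echo "$report" | mail -s "RAID degraded on $(hostname)" root
    fi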

Complexity of modern Linux environment and DevOps

Those "horror stories" could be continued indefinitely, but the point is that the level of complexity of modern IT infrastructure is such that you do need different specialists responsible for different parts of IT. And while the separation of IT roles into developer and sysadmin has its flaws, and sysadmins would already benefit from learning more programming and developers from learning more about the underlying operating system, you can't jump over the mental limitations of mere mortals. Still, it is true that some, especially talented, sysadmins are quite capable of becoming programmers, as they already know shell and/or Perl at a level that makes learning a new scripting language a matter of several months. And talented developers can learn the OS at a level close to a typical sysadmin's in quite a short time. But these are exceptions, not the rule.

In any case, the good old days when a single person like Donald Knuth in the late '50s and early '60s was, on a night shift at the university datacenter, a programmer, system operator and OS engineer simultaneously are gone. That's why certifications like Certified AWS specialist, and similar ones for other clouds like Azure, have started to proliferate, and holders of those certifications are in demand.

Attempts by DevOps to reverse history and return us to the early days of computing, when the programmer was simultaneously a system administrator (and sometimes even a hardware engineer ;-), are doomed to failure. The costs of moving everything to the cloud are often exorbitant, especially when the issues with the WAN are not understood beforehand, and this fiasco is covered with PowerPoint presentations destined to hide the fact that, after top management got their bonuses, enterprise IT entered a state of semi-paralysis. Yes, the cloud has its uses, but it is far from being a panacea.

The other aspect here is the question of loyalty. By outsourcing the datacenter as DevOps recommends, you essentially rely on the kindness of strangers in every more or less significant disaster -- strangers who have zero loyalty to your particular firm/company, no matter how the SLAs are written. In other words, the drama of the outsourced helpdesk is now replayed at a higher level, and with more dangerous consequences.

Software engineering has proved to be very susceptible to various fads,
which resemble pseudo-religious movements (with high levels of religious fervor).

Another warning sign that DevOps adepts are not completely honest is their lofty goals. Nobody in his sound mind would object to achieving the stated DevOps goals.

The goals of DevOps span the entire delivery pipeline. They include:

The only question is: can DevOps really achieve them, or is it mostly hype? Listen to this song to find out ;-)

Historically, software engineering has proved to be very susceptible to various fads, which usually take the form of pseudo-religious movements (with high levels of religious fervor). Prophets emerge and disappear with alarming regularity (say, every ten years or so -- in other words, a period long enough to forget about the previous fiasco). We already mentioned the "verification revolution", but we can dig even deeper and also mention the "structured programming revolution" with its pseudo-religious crusade against goto statements (while misguided, it at least had the positive effect of introducing additional high-level control structures into programming languages; see the historically important article by Donald Knuth, Structured Programming with go to Statements, cs.sjsu.edu).

The verification hoopla actually destroyed the careers of several talented computer scientists, such as David Gries (of Compiler Construction for Digital Computers fame, who also participated in the creation of the amazing, brilliant teaching PL/C compiler designed by Richard W. Conway and Thomas R. Wilcox). It also damaged the long-term viability of the achievements of Niklaus Wirth (a very talented language designer who participated in the development of the Algol family of languages and later created Pascal and Modula). He is also known for Wirth's law: "Software is getting slower more rapidly than hardware becomes faster."

Netflix's portal is essentially a dismal product database with some added blog features (reviews). But while Amazon at least tries to implement the several most relevant types of searches, Netflix does not. For rare movies it is rather challenging to find what you are interested in, unless you know the exact title ;-) Movies are not consistently indexed by director or major stars. The way they deal with reviews is actually sophomoric. But the colors are nice, no question about it :-)

Amazon is much better, but in certain areas it still has "very average" or even below-average quality. One particularly bad oversight is that the reviews are indexed only along several basic dimensions (number of stars, popularity (upvotes) and chronological order), not by "reviewer reputation" (aka karma), number of previous reviews (first-time reviewers are often fake reviewers), date of the first review written by this reviewer, or other more advanced criteria. For example, I can't exclude reviewers who wrote fewer than two reviews or whose first review was written less than a year ago.

You also can't combine criteria in the search request. That creates difficulties with detecting fake reviewers, which is a real problem in the Amazon review system. As a result, it requires additional, often substantial, work to filter out "fake" reviews (bought, or "friends' reviews" produced by people who are not actual customers). They might even dominate the Amazon rating for some products/books.

In any case that "effect" diminishes the value of the Amazon rating and makes the Amazon review system "second rate". Recently Amazon developers tried to compensate for this with the "verified purchase" criterion, but without much success, as, for example, in most cases the book can be returned, and an unopened product can be returned too. While some fake reviews are detectable by the total number of reviews posted (often this number is one ;-) or by their style, in many cases it is not possible to easily distinguish a "promotional campaign" by the author of the book (or the vendor of the product) from actual reviews. Shrewd companies can subsidize purchases in exchange for positive reviews. In this sense the Amazon interface sucks, and sucks badly.

The Amazon cloud has its uses and was generally a useful innovation, so all the DevOps hoopla about it is not that bad, but it is rather expensive and is suitable mostly for loads with huge short-term peaks and deep, long valleys -- for example, genomic decoding. The cost of running the infrastructure of a medium firm (200-300 servers) on the Amazon cloud is comparable with the cost of running a private datacenter with hardware support outsourced and local servers providing local services (the idea of autonomous remote servers) run by "local" staff, who definitely have higher loyalty to the corporation and who can be more specialized. In this case all servers can also be managed from a central location, which creates synergies similar to the cloud, and the cost of personnel is partially offset by the lower cost of WAN connections due to the provision of several local services (email, file storage, local knowledgebase, website, etc.) by remote autonomous datacenters (using IBM terminology).

WAN access is the Achilles' heel of cloud infrastructure and costs a lot for medium and large firms with multiple locations. Remote management tools are now so powerful that a few qualified sysadmins can run a pretty large distributed datacenter with reliability comparable to (or higher than) the Amazon or Microsoft clouds.

Especially if the servers are more or less uniform and can be organized into a grid. Of course you need to create a spreadsheet comparing the costs of the two variants, but generally a server with two 24-core CPUs, 128GB of RAM and several terabytes of local storage can run a lot of services locally without any (or with very limited) access to the WAN, and it costs as much as one to two years of renting similar computational capabilities on the Amazon cloud or Azure, while carrying a 5-year manufacturer warranty.
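A toy version of that spreadsheet, just to show the shape of the comparison (every number below is an illustrative assumption, not a quote):

    # On-prem purchase vs. cloud rental over the warranty period
    server_capex=15000      # assumed price of a 2-socket server with local storage
    admin_share=3000        # assumed per-server share of sysadmin cost, per year
    cloud_rent=12000        # assumed yearly rent of a comparable cloud instance
    years=5                 # manufacturer warranty period
    onprem=$(( server_capex + admin_share * years ))
    cloud=$(( cloud_rent * years ))
    echo "on-prem: \$$onprem vs cloud: \$$cloud over $years years"   # 30000 vs 60000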

Although details are all that matter in such cases, and they are individual to each company, the cloud is not the only solution to the problems that plague the modern datacenter. And in many cases it is the wrong solution, as outsourcing deprives the company of the remnants of its IT talent, and it becomes a hostage of predatory outsourcing firms. You just need to wait a certain amount of time to present in-house IT as the "new" and revolutionary solution to the problems caused by DevOps and the cloud ;-) Again, DevOps is often a smoke screen for outsourcing, and management priorities change with time after facing the harsh reality of this new incarnation of the "glass datacenter." People forget how strongly the centralized IBM mainframe environments (aka "glass datacenters") were hated in the past. Often the executives who got the bonuses for this "revolutionary transformation" are gone in a couple of years ;-) And the new management becomes more open to alternative solutions as soon as they experience the reality of the cloud environment and the level of red tape, nepotism and cronyism rampant in Indian outsourcing companies ("everything is a contract negotiation"). Some problems just can't be swept under the carpet.

Perverted feedback loop

DevOps increases the previous tendency to rate the performance of system administrators in the same way as the performance of helpdesk personnel: via customer surveys. That creates additional and unnecessary pressure on system administrators and the desire to please the user at all costs.

Which, given that a certain percentage of users are ignorant jerks ("lusers" in sysadmin terminology), creates a problem. Now you can't say directly to a user's face that his request is absurd and dictated by his ignorance, even diplomatically, because that will drag down your ratings. Not that this will hurt you too much, but just the understanding that this perverted feedback loop is in place creates problems. You need to find other areas in which to press and possibly punish such users for their obnoxious behavior. One way is to press violations of the terms of service. If you control the terms of service for a particular set of servers (HPC cluster, Web server farm, etc.) you can always put in items that can be used against lusers. One trivial way is some kind of annual certification: a security, DevOps, or operating system knowledge test, something like that. In other words, welcome to the "Bastard Operator From Hell" at a new level.

How to play this  game

As large enterprise brass is now hell-bent on deploying DevOps (for obvious, mostly greedy reasons connected with outsourcing and bonuses, see below ;-), it would be stupid to protest against this techno-cult. In most cases your honesty will not be appreciated, to say the least. In other words, direct opposition would be a typical "career limiting move", even if you are 100% right (and it is unclear whether you are: the proof of the pudding is in the eating).

So you need to adapt somehow and try to make lemonade out of lemons. You can concentrate on what is positive in the DevOps agenda for now and see how the situation looks when the dust settles. There are some promising technologies that you can adopt under the DevOps umbrella (we already mentioned Ansible).

The first thing you can do in order to ride the DevOps hoopla is to negotiate some valuable training. And I am not talking about those junk courses in DevOps (although off-site, say in NYC, utilized as extra vacation, they also have some value; but this opportunity closed with the COVID-19 epidemic ;-). I am talking about, for example, getting at least a couple of courses in Docker and Ansible. Again, if you standardize on Ansible, you can legitimately ask for Python classes and, with some effort, get them.

That is actually in line with the DevOps philosophy, according to which sysadmins need to grow into developers, and developers to "downsize" into sysadmins, merging into one happy and highly productive family ;-) So you can navigate this landscape from the point of view of getting additional training and try to squeeze some water from the stone: corporations are now notoriously tight with training expenses.

Try to get some training out of the DevOps hoopla. If, for example, you standardize on Ansible or Puppet, you can legitimately ask for Python classes. That is actually in line with the DevOps philosophy, according to which sysadmins need to grow into developers and developers into sysadmins, merging into one happy and highly productive family ;-)

Summarizing: one way to play this game is to equate DevOps with the usage of some software that is useful to you (Docker) and a system management tool (say Ansible), or some other technology that interests you, which you can safely claim belongs to DevOps. It is a fake movement, so some exaggeration or deviation will usually not be noticed by the adepts.

DevOps hoopla creates a unique chance to improve the helpdesk and documentation software in the company

For some reason two products of the Australian firm Atlassian (Jira and Confluence) are strongly associated with DevOps. Unlike the majority of DevOps-associated products, they are definitely above-average software (Jira more so and Confluence less so; but they can be integrated, which adds value), typically head and shoulders above what the company is currently using. Replacing the current helpdesk system with Jira and the documentation system with Confluence might improve those two important areas of your environment, and they are worth trying.

Another product similar to Confluence -- Microsoft Teams -- also makes sense to deploy as a part of the DevOps hoopla. It is still pretty raw, but the level of integration of a web forum, file repository and wiki is interesting. It also integrates well with Outlook, which is the dominant email client in large corporations.

Docker can be a very valuable tool for sysadmins

Docker has value in both the sysadmin area and the applications area. So you can claim that it is a prerequisite for DevOps and cite some books on the subject. Of course, it is not a panacea for many of the existing enterprise datacenter ills, but the technology is interesting, the implementation is elegant, and the approach has merits, especially if you are in a proxy-protected environment, where the installation of non-standard applications is a huge pain.

In the case of research applications, a developer can often make them run quicker in a Docker environment, especially if the application is available on Ubuntu or Debian in packaged form, but not on CentOS or RHEL. There are some useful collections of prepackaged applications, mainly for bioscience. See, for example, the BioContainers project and its list of available containers.

Esoteric applications that are difficult to install because they require specific library versions different from those shipped with, say, your RHEL release can really benefit from Docker, as it allows you to compartmentalize "library hell" and use the flavor of Linux most suitable for the particular application (which is, of course, the flavor in which the application was developed). For example, many scientific applications are native to Debian, which is not an enterprise Linux distribution. Docker allows you to run applications installed from Debian packages on a Red Hat server (the "run anywhere" meme). And if Docker is not enough, you can always fall back to a regular VM such as Xen.
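
A minimal sketch of this use case (the image and the package are illustrative, not a recipe; pick whatever your application actually needs):

# Run a Debian userland on a RHEL host and install a package that exists
# in Debian repositories but not in RHEL (fastqc is just an example).
# The container shares the host kernel, so there is no full-VM overhead.
docker run --rm -it debian:stable bash -c \
    'apt-get update && apt-get install -y fastqc && fastqc --version'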

In any case Docker is not a fad. It is an interesting "lightweight VM" technology, first commercially introduced by Sun in Solaris 10 in 2005 (as Solaris zones) and replicated in Linux approximately 10 years later. The key idea is that this is essentially a packaging technology -- all the "VMs" share the kernel with the base OS. It is a form of OS-level virtualization, which produces minimal overhead in comparison with running the application as a task on a real server -- much more efficient than VMware. Linux containers, like Solaris zones, are essentially an extension of the concept of a jail (the idea of extending the jail concept into a more "full" VM environment originated in FreeBSD). It is definitely worth learning, and in some cases worth deploying widely. Anyway, this is something real, not the typical shaman-style rituals that "classic" DevOps tries to propagate (Lysenkoism first played out as a real tragedy; now it has degenerated into a farce ;-)

As most DevOps propagandists and zealots are technologically extremely stupid, it is not that difficult to deceive them. Sell Docker as the key part of the DevOps 2.0 toolkit, the quintessence of cloud technology (they like the word "cloud") which lies at the core of DevOps 2.0 -- a hidden esoteric truth about DevOps that only real gurus know about :-) They will eat it up...

On a higher, datacenter level, you can try to push the adoption of Red Hat OpenShift, which is a kind of in-house cloud, is cheaper and more manageable than Amazon's elastic cloud or Azure, and in some cases makes sense to deploy. That might allow you to extract from management payment for a Red Hat Learning Subscription. Try to equate the term "hybrid cloud" with the use of OpenShift within the enterprise. You can also point out that unless you have short peaks and long no-load periods, both Azure and AWS are pretty expensive, and it is not wise to put all your eggs into one basket.

Git can be a useful tool, and GitLab can help to distribute scripts and knowledge if you have a distributed team of sysadmins. You can also use GitLab as a proxy for continuous integration

Git is not that impressive for Linux configuration management, but it can sometimes be used. See, for example, the etckeeper package; if you find it useful and want to use it, just claim that it is DevOps 2.0 too, and most probably you will be blessed to deploy it. While it has drawbacks, it allows you to record the actions of multiple sysadmins on a server that result in changes to files in the /etc directory, treating /etc as a software project with multiple components. So it is far from perfect, but still a usable tool for solving the problem of multiple cooks in the same kitchen.
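
A minimal sketch of the workflow, assuming a RHEL/CentOS-style box (there etckeeper typically comes from the EPEL repository; the package source may differ on your distribution):

# Put /etc under version control; etckeeper wraps git
yum install -y etckeeper
etckeeper init                    # creates a git repository inside /etc
etckeeper commit "baseline before the DevOps 2.0 rollout ;-)"
# later, after somebody has edited something in /etc:
etckeeper vcs log --oneline       # history of recorded changes
etckeeper vcs diff HEAD~1         # what exactly was changed last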

The main value here is GitLab: in many large enterprises it is now installed internally as part of the DevOps wave. If so, you need to learn how to use it, as it is a useful tool for exchanging information between distributed teams of sysadmins.

In some rare cases you might even wish to play the "continuous integration" game. If corporate brass demands that continuous integration be implemented, you might try to adapt software like GitLab for this purpose and use GitLab "pipelines", which provide some interesting opportunities for several scripting languages such as Python and Ruby. The automated testing part is probably the most useful. While you can always write independent scripts for automated testing, integrating them into the GitLab framework is often a better deal.
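
A minimal pipeline is surprisingly small. A sketch (the test script name and the image are assumptions; substitute whatever your repository actually contains):

# Create a minimal .gitlab-ci.yml: one job that runs an existing test
# script. GitLab picks this file up automatically on every push.
cat > .gitlab-ci.yml <<'EOF'
test:
  image: python:3.11
  script:
    - ./run_tests.sh
EOF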

But we should be aware that the key problem with "continuous integration" and the closely related concept of "continuous testing" is that, for testing scripts, the most difficult part is the validation of the output of the test: the road to hell is always paved with good intentions.

And while the idea of automated testing is good, for many scripts and software packages the implementation is complex and manpower-consuming. Actually, this is one area where outsourcing might help. So far there has been no breakthrough in this area, and much depends on your own or your developers' abilities. Regular expressions can help to check output, but they are not a panacea.
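
For example, a typical poor man's output check with grep looks like this (the script name and the expected output format are hypothetical):

# Run the script, then validate its output against an expected pattern.
# The hard part is deciding what pattern really proves success.
out=$(./nightly_sync.sh 2>&1)
if echo "$out" | grep -Eq '^OK: [0-9]+ files synced$'; then
    echo "PASS"
else
    echo "FAIL: unexpected output"; echo "$out"; exit 1
fi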

Also, the more complex your testing script is, the more fragile it is and the less chance it has to survive changes in the codebase. So complex testing scripts tend to freeze the current architecture, and as such are harmful.

In its essence, continuous delivery is an overhyped variation on the idea of the nightly build, used for ages. If you want to play a practical joke on your developers, tell them to use Jenkins as an automated testing tool. Jenkins is a very complex tool (with multiple security vulnerabilities) that you generally should avoid, but it can be installed via a Docker container. AWS has a ready-made project, "How to set up a Jenkins build server", which can save you from trouble and wasted time ;-). But often you do not need to install it at all: just associate the term Jenkins with "continuous integration" and provide the developers a Docker container with Jenkins. Your advantage is that developers usually do not have a deep understanding of Linux and, especially, of virtualization issues. And usually they do not want to learn too much new stuff that takes them too far away from their favorite programming language.
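
Providing such a container is essentially a one-liner; a sketch using the official image (the ports and the volume name are the usual defaults, adjust as needed):

# Throwaway Jenkins instance in a container; its data survives restarts
# in the named volume.
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts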

So there is a strong, almost 100%, chance that after the period of initial excitement they will ignore this technology and stick to Git and their old custom testing scripts ;-)

You can use Ansible to create an illusion of DevOps deployment

Ansible is a definitely less toxic solution than Puppet, and it offers the possibility to ride the DevOps hoopla without doing any harm. I do not see any innovative ideas in it (it is just a reimplementation of JCL on a new level), but it has some useful tidbits, is fashionable, and is heavily pushed by Red Hat (which, BTW, is now an IBM company; and any seasoned sysadmin knows what that means).

Unlike IBM JCL, it does not invoke scripts and programs directly. It does this via special wrappers, which is understandable, as it needs to transport the scripts and execute them on several different systems. The wrappers are called modules, and some of them provide capabilities beyond running scripts and copying files to remote servers (which is the scope of tools like pdsh) -- for example, modifying crontab.
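
For instance, the cron module can be driven ad hoc, without any playbook (the host group and the job are hypothetical):

# Add a root crontab entry on every host in the 'webservers' group
ansible webservers --become -m cron \
    -a 'name="nightly backup" minute=0 hour=2 job=/usr/local/bin/backup.sh'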

There are also some side benefits here: centralization of reporting, and the creation of your own repository of playbooks (you need to find good prototypes, so that you do not reinvent the wheel), with some possibilities for sharing playbooks for common operations (installation of the LAMP stack and its components, especially MySQL, is one example).

Like any JCL-style system, Ansible is perfect for implementing post-installation steps on freshly installed servers. But its capabilities depend heavily on the quality of the utilities which actually perform each step. Wrappers are good, but they are no "replacement for displacement." You can usually adapt your old utilities to this new role with a minimal investment of time and effort. So it can give a boost to your own sysadmin utility development activities.

There are multiple posts on the Web about how to convert simple "step by step" installation shell scripts into Ansible playbooks. If done without excessive zeal (ask yourself, for example, how often you actually install an Apache server, to justify the tinkering with Ansible), some such conversions might even be useful. I would recommend ignoring attempts to exaggerate the value of idempotency and push it down your throat; in most cases simple sanity checks are more than enough. Browsing articles about converting bash scripts to playbooks can provide a good first step in learning Ansible. Start by using Ansible as glue to coordinate the scripts and to do the ssh part, as in the sketch below.
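
A minimal sketch of such a conversion, using Ansible mostly as glue and ssh transport (the inventory group and the script name are assumptions):

# Wrap an existing post-install script into a trivial playbook
cat > postinstall.yml <<'EOF'
- hosts: new_servers
  become: true
  tasks:
    - name: copy the legacy post-install script
      copy:
        src: postinstall.sh
        dest: /root/postinstall.sh
        mode: "0755"
    - name: run it
      command: /root/postinstall.sh
EOF
ansible-playbook -i inventory postinstall.yml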

GitHub contains several collections of playbooks, some elements of which you can probably adapt and reuse. Try to avoid complex modules for at least the first year of working with Ansible. Limit yourself to the very basics -- wrappers for executing commands and scripts, plus probably the yum module (or its equivalents for SUSE and Debian). Browsing the Web for tutorials also lets you find some reusable playbooks that serve as examples in the corresponding tutorials.

A quick search of GitHub turns up such collections. I cannot attest to their quality, and it is not much, but it is something to start with.

What is important for us is that Ansible constitutes an important part of the Infrastructure as Code (IaC) hoopla. So you might be able to get some Python training on the wave of DevOps enthusiasm in your particular organization.

It can also be used the way pdsh or parallel are used (Ansible can be viewed as pdsh on steroids), so all your previous scripts and skills are still fully applicable. But please note that in comparison with pdsh, Ansible adds a lot of complexity (unless you use it as pdsh). It introduces its own JCL-style language (aka a "system automation" language), some of its modules duplicate existing Unix tools' functionality, and, like most tools promoted by DevOps, it is overhyped. But in small doses and without excessive enthusiasm it is not that bad. It is a useful tool, and the investment in learning entry-level playbooks pays off quickly, allowing you to group some of your scripts into more manageable and flexible "ensembles". Again, it is especially convenient to use Ansible for the series of steps needed after a kickstart installation (post-installation); it is competitive with hand-written drivers for the same task. See Unix Configuration Management Tools for a more "in-depth" discussion.
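
The pdsh-style usage mentioned above looks like this (the host list and the group name are illustrative):

# Classic pdsh fan-out...
pdsh -w 'node[01-10]' uptime
# ...and the Ansible ad-hoc equivalent
ansible all_servers -m command -a uptime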

When selling an Ansible deployment to management, it might be worth mentioning that it integrates well with Git (another magic word that you need to know and use).

As I mentioned above, adopting Ansible (or pretending to adopt it ;-) might allow you to get some valuable Python training.

NOTE about Puppet:

Out of a sense of self-preservation it makes sense to stay out of the Puppet wave. While I am strongly prejudiced against Puppet (and consider writing Puppet automation scripts a new type of masturbation), Puppet also was/is the loudest voice in the DevOps hoopla (you can judge what type of company you are dealing with by checking whether they advertise Agile on their pages; Puppet does, and thus can be considered a snake oil seller ;-). They also pretend to provide "continuous delivery", so formally you can classify Puppet as such a tool, a pretense that might perfectly suit your needs:

No one wants to spend the day waiting for someone to manually spin up a VM or twiddle their thumbs while infrastructure updates happen. Freedom comes when you get nimble, get agile, and tackle infrastructure management in minutes, not hours. Puppet Enterprise uniquely combines the industry-standard model-driven approach we pioneered with the flexibility of task-based automation. Welcome back, lunch break.

... ... ...

Continuous delivery isn’t just for developers. Build, test, and promote Puppet code, high-five your teammates, and get back to doing what you love with Continuous Delivery for Puppet Enterprise. Instead of wondering if you have the latest code running, now you can streamline workflows between Dev and Ops, deliver faster changes confidently and do it all with the visibility your teams need.

By the way, many years ago one really talented system administrator created Perl for very similar (actually broader) purposes, including the attempt to make the life of sysadmins easier. And his solution is still preferable today.

When pressed to deploy some useless junk, try to deflect efforts into something useful and improve your bargaining position

While I have mentioned two useful tools that fit under the DevOps banner, there might be more. Summarizing, I think that a "creative" ad hoc interpretation of DevOps might improve your bargaining position and can serve as a draft plan for your fight against the most stupid aspects of the DevOps onslaught.

It is important to try to dictate the terms of the game, using your superior understanding of Linux and your control of the datacenter servers. Depending on your circumstances and the level of religious zeal for DevOps in your organization, that might be a viable strategy against unreasonable demands from developers. In any case, do not surrender without a fight, and please understand that a direct frontal attack on this fad is a bad strategy.

Now if some DevOps-brainwashed developer tries to enforce his newly acquired (via the DevOps hoopla) right to manage the servers ("we are one team"), you can instantly put him in his place by pointing out that such weaklings do not know DevOps 2.0 technology and as such do not belong to this "Brave New World." Send all such enthusiasts into Docker exile, pointing out that control of physical servers is so yesterday. It is a pretty effective political line that gives you much better chances of survival than a direct attack on this fad.

Still, you might also need to attend a couple of brainwashing sessions (aka the "DevOps Learning Bundle") to demonstrate your loyalty.

Critique of DevOps as a methodology: three concepts relevant to the discussion of DevOps

While DevOps is a notoriously fuzzy concept, I think there are three concepts very relevant to the discussion of DevOps that can be legitimately criticized:

DevOps and cults: A group does not have to be religious to be cultic in behavior

You have probably heard a little about so-called high demand cults. They exist in the techno-sphere too, and while their features are somewhat watered down, they are still recognizable (High Demand Cults).

Remember ... A group does not have to be religious to be cultic in behavior. High demand groups can be commercial, political and psychological. Be aware, especially if you are a bright, intelligent and idealistic person. The most likely person to be caught up in this type of behavioral system is the one who says “I won’t get caught. It will never happen to me. I am too intelligent for that sort of thing.”

The following statements, compiled by Dr. Michael Langone, editor of Cultic Studies Journal, often characterize manipulative groups. Comparing these statements to the group with which you or a family member is involved may help you determine if this involvement is cause for concern. 

My initial impression is that former cultists come face to face with a multiplicity of losses, accompanied by a deep, and sometimes debilitating, sense of anguish. See, for example, interviews with defectors from Mormonism on YouTube.

... ... ....

My hope upon initiating this research was to provide a link between cult leaders and corporate psychopaths and demonstrate that cult leaders' practices (which are more or less well understood and for which extensive literature exists) have strong predictive power for the behavior of a corporate psychopath. We should not focus just on the acute and long-term distress that accompanies reporting to a corporate psychopath.

Here are some psychological mechanisms used:

  1. Control of the Environment and Communication. The control of human communication is the most basic feature of the high demand cult environment. This is the control of what the individual sees, hears, reads, writes, experiences and expresses. It goes even further than that, and controls the individual's communication with himself - his own thoughts.
     
  2. The Mystique of the Organization. This seeks to provoke specific patterns of behaviour and emotion in such a way that these will appear to have arisen spontaneously from within the environment. For the manipulated person this assumes a near-mystical quality. This is not just a power trip by the manipulators.  They have a sense of “higher purpose” and see themselves as being the “keepers of the truth.” By becoming the instruments of their own mystique, they create a mystical aura around the manipulating institution - the Party, the Government, the Organization, etc. They are the chosen agents to carry out this mystical imperative.
     
  3. Everything is black & white. Pure and impure is defined by the ideology of the organization. Only those ideas, feelings and actions consistent with the ideology and policy are good. The individual conscience is not reliable. The philosophical assumption is that absolute purity is attainable and that anything done in the name of this purity is moral. By defining and manipulating the criteria of purity and conducting an all-out war on impurity (dissension especially) the organization creates a narrow world of guilt and shame. This is perpetuated by an ethos of continuous reform, the demand that one strive permanently and painfully for something which not only does not exist but is alien to the human condition.
     
  4. Absolute “Truth” . Their “truth” is the absolute truth. It is sacred - beyond questioning. There is a reverence demanded for the leadership. They have ALL the answers. Only to them is given the revelation of “truth”.
     
  5. Thought terminating clichés. Everything is compressed into brief, highly reductive, definitive-sounding phrases, easily memorized and easily expressed. There are "good" terms which represent the group's ideology and "evil" terms to represent everything outside, which is to be rejected. Totalist language is intensely divisive, all-encompassing jargon, unmercifully judging. To those outside the group this language is tedious - the language of non-thought. This effectively isolates members from the outside world. The only people who understand you are other members. Other members can tell if you are really one of them by how you talk.

These are the hallmarks not only of unhealthy cult movements but also of aberrant churches such as the Church of Scientology, or of Prosperity theology.

Hubbard called Dianetics "a milestone for man comparable to his discovery of fire and superior to his invention of the wheel and the arch". It was an immediate commercial success and sparked what Martin Gardner calls "a nationwide cult of incredible proportions".[136] By August 1950, Hubbard's book had sold 55,000 copies, was selling at the rate of 4,000 a week and was being translated into French, German and Japanese. Five hundred Dianetic auditing groups had been set up across the United States.[137]

... ... ...

The manuscript later became part of Scientology mythology.[75] An early 1950s Scientology publication offered signed "gold-bound and locked" copies for the sum of $1,500 apiece (equivalent to $15,282 in 2017). It warned that "four of the first fifteen people who read it went insane" and that it would be "[r]eleased only on sworn statement not to permit other readers to read it. Contains data not to be released during Mr. Hubbard's stay on earth."[81]

... ... ...

In October 1984 Judge Paul G. Breckenridge ruled in Armstrong's favor, saying:

The evidence portrays a man who has been virtually a pathological liar when it comes to his history, background and achievements. The writings and documents in evidence additionally reflect his egoism, greed, avarice, lust for power, and vindictiveness and aggressiveness against persons perceived by him to be disloyal or hostile. At the same time it appears that he is charismatic and highly capable of motivating, organizing, controlling, manipulating and inspiring his adherents. He has been referred to during the trial as a "genius," a "revered person," a man who was "viewed by his followers in awe." Obviously, he is and has been a very complex person and that complexity is further reflected in his alter ego, the Church of Scientology.[327]

The key indicator is greedy, control-oriented leadership: ministers who enrich themselves at the expense of followers (one Prosperity Theology minister asked followers to donate money for his new private jet), and attempts to extract money from the followers by requiring payment for some kind of training, "deep truth", or the reading of sacred manuscripts.

The person who raises uncomfortable questions or does not "get with the program" is ostracized. Questioning of the dogma is discouraged.

Rehash of old ideas (or old wine in new bottles) along with poignant critique of the "status quo" are typical signs of a techno-cult

In reality, nothing is new under the sun in software development. DevOps rehashes ideas, many of which are at least a decade old, and some at least 30 years old (Unix configuration management, version control). And the level of discussion is often lower than the level at which those ideas are discussed in The Mythical Man-Month, which was published in 1975 (another sign of "junk science").

What is more important is that under the surface of all those lofty goals lies the burning desire of company brass to use DevOps as another smoke screen for outsourcing -- yet another justification for "firing a lot of people."

As in any cult, there are some grains of rationality in DevOps, along with a poignant critique of the status quo, which works well to attract followers. Overhyping some ideas about how to cope with the current situation in enterprise IT, unsatisfactory for potential followers, is another sign of a technocult. Some of those ideas are, at least on a superficial level, pretty attractive; otherwise such a technocult could not attract followers. As a rule, techno-cults emerge in times of huge dissatisfaction and thrive by proposing "salvation" from the current difficulties:

...think this is one of the main reason why we see this DevOps movement, we are many that see this happen in many organizations, the malfunctioning organization and failing culture that can’t get operation and development to work towards common goals. In those organizations some day development give up and take care of operation by them self and let the operation guys take care of the old stuff.

Nobody can deny that there are a lot of problems in corporate datacenters these days. Bureaucratization is rampant, and it stifles the few talented people who have not yet escaped this environment or switched to writing open source software during working hours, because achieving anything within the constraints of the existing organization is simply impossible ;-)

But along with this small set of rational (and old) ideas comes a set of completely false, even bizarre ideas and claims, which is what makes it a cult. There is also a fair share of Pollyanna creep. Again, it is important to understand that part of the promotion campaign's success (with money for it coming mostly from companies who benefit from outsourcing) is connected with the fact that corporate IT brass realized that DevOps can serve well as a smoke screen for another round of outsourcing of "ops". (Ulf Månsson about infrastructure)

This creates new way of working, one good example is the cultural change at Nokia Entertainment UK, presented at the DevOps conference in Göteborg, by inclusion going from 6 releases/year and 50 persons working with releases to 246 releases/year with only 4 persons, see http://www.slideshare.net/pswartout/devopsorg-how-we-are-including-almost-everyone. That story was impressive.

The pretension that this is a new technology is artfully created by inventing a new language rife with obscure terms -- another variant of Newspeak. And this is very important to understand: it is this language that allows bizarre and unproven ideas to be packaged in a cloak of respectability.

The primitive thinking and unfounded claims of this new IT fashion (the typical half-life of an IT fad is less than a decade; for example, who now remembers all the verification hoopla and the books published on that topic) are clearly visible in advocacy papers such as "Comparing DevOps to traditional IT: Eight key differences" on DevOps.com. Some of the claims are clearly suspect and smell of "management consultant speak" (an interesting variety of corporate bullshit). For example:

Traditional IT is fundamentally a risk averse organization.  A CIO’s first priority is to do no harm to the business.  It is the reason why IT invests in so much red tape, processes, approvals etc.  All focused on preventing failure.  And yet despite all these investments, IT has a terrible track record – 30% of new projects are delivered late, 50% of all new enhancements are rolled back due to quality issues, and 40% of the delay is caused by infrastructure issues.

A DevOps organization is risk averse too but they also understand that failure is inevitable.  So instead of trying to eliminate failure they prefer to choose when and how they fail.  They prefer to fail small, fail early, and recover fast.  And they have built their structure and process around it.  Again the building blocks we have referred to in this article – from test driven development, daily integration, done mean deployable, small batch sizes, cell structure, automation etc. all reinforce this mindset.

Note the very disingenuous claim "IT has a terrible track record – 30% of new projects are delivered late, 50% of all new enhancements are rolled back due to quality issues, and 40% of the delay is caused by infrastructure issues." I think all previous fashionable methodologies used the same claim, because the sheer complexity of large software projects inevitably leads to overruns; doubling the estimated time and money has been a sound software engineering practice since the publication of The Mythical Man-Month :-)

As for the other disingenuous claim, "So instead of trying to eliminate failure they prefer to choose when and how they fail. They prefer to fail small, fail early, and recover fast" -- this is not a realistic statement. It is a fake. In case of a SNAFU you can't predict the size of the failure. Just look at the history of Netflix, Amazon or Google Gmail failures. Tell the mantra "they prefer to fail small, fail early, and recover fast" to the customers of Google or Amazon during the next outage.

Note also that such criticism is carefully swept under the carpet, and the definition of DevOps evolves with time to preserve its attractiveness to new members (DevOps is Dead! Long Live DevOps! - DevOps.com). In other words, the correct definition of DevOps is "it is a very good thing" ;-). For example:

Some are seekers on the quest for the one, true DevOps. They were misled. I’m here to say: Give it up. Whatever you find at the end of that journey isn’t it. That one true DevOps is dead. Dead and buried. The search is pointless or, kind of worse: The search misses the point altogether.

 DevOps began as a sort of a living philosophy: about inclusion rather than exclusion, raises up rather than chastises, increases resilience rather than assigns blame, makes smarter and more awesome rather than defines process and builds bunkers. At any rate, it was also deliberately never strictly defined.

In 2010, it seemed largely about Application Release Automation (ARA), monitoring, configuration management and a lot of beginning discussion about culture and teams. By 2015, a lot of CI/CD, containers and APIs had been added. The dates are rough but my point is: That’s all still there but now DevOps discussions today also include service design and microservices. Oh, and the new shiny going by the term “serverless.” It is about all of IT as it is naturally adapting.

Positive elements of DevOps are far from new, while some ideas advertised as "breakthroughs", such as continuous delivery, are clearly false

I will be repeating myself here. Like any repackaging effort, DevOps presents old, existing technologies as something revolutionary. They are not. Among the measures that the DevOps "movement" advocates, several old ideas make sense if implemented without fanaticism, and they could have been implemented before the term DevOps (initially NoOps) was invented. These include:

  1. The move of applications to VMs, including but not limited to lightweight virtual machines (Docker); and, generally, the idea of wider adoption of lightweight virtual machines in the enterprise environment, instead of or in addition to traditional (and expensive) VMware. (I do not understand the advantages of running Linux under VMware: it is a VM tuned to Windows that implements heavy virtualization (virtualization of the CPU), while Linux, being open source, supports para-virtualization (virtualization of system calls), actually available in Xen and derivatives such as Oracle VM.)
  2. Wider adoption of scriptable configuration tools and configuration management systems (Infrastructure as Code -- IaC), as well as continuous delivery schemes. Truth be told, this need not be done via Puppet and similarly complex Unix configuration management packages. Although Puppet is a bad choice (it is too complex, has unanticipated quirks and rather low reliability), it is a step forward. But it might also be two steps back, as it contributes to creating an "Alice in Wonderland" environment in which nobody can troubleshoot a problem because of the complex subsystems involved. See Unix configuration management for details.
  3. Wider usage of version management for configuration files. Systems such as Git and Subversion are definitely underutilized in the enterprise environment and can be deployed both more widely and better. None of them fits Unix configuration management requirements perfectly, but intermediate tools can be created to compensate for their deficiencies.
  4. Wider use of scripting and scripting languages such as Python, Ruby, shell and (to a much lesser extent) good old Perl. Of these, only Python spans both the Unix system administration and software development areas and as such is preferable (despite Perl being a stronghold of Unix sysadmins). That actually includes de-emphasizing Java development in favor of Python.
  5. Attempts to cut the development cycle into smaller, more manageable chunks (although the idea of "continuous delivery" is mostly bunk). A lot of discretion is needed here, as overdoing this destroys the development process and, what is even more dangerous, de-emphasizes the value of architectural integrity and of architecture as a concept (a heritage of Agile, which is mostly snake oil).

Some of those innovations (like Docker) definitely make more sense than others, because the preoccupation with VMware as the only virtualization solution in the enterprise environment is unhealthy from multiple points of view.

But all of them add complexity rather than removing it; only the source of the complexity differs. (In the case of Docker, you can somewhat compensate for the increased complexity of the switch to a virtual environment by compartmentalizing the most critical applications in separate "zones", to use Solaris terminology -- Docker is essentially a reimplementation of Solaris zones in Linux.)

Maybe the truth lies somewhere in between, and a selective, "in moderation" implementation of some of those technologies in the datacenter can be really beneficial. But excessive zeal hurts rather than helps. In this sense, the presence of newly converted fanatics is a recipe for additional problems, if not a disaster.

I have actually seen situations when the implementation of DevOps brought corporate IT to a screeching halt -- an almost complete paralysis in which nothing but maintaining the status quo (with great difficulty) was achieved for the first two years. Outsourcing in those cases played a major negative role, as new staff need a large amount of time and effort to understand the infrastructure and processes and to attempt to "standardize" servers and services -- often failing dismally due to the complex interrelations between them and the applications in place.

While the current situation in the typical large datacenter is definitely unsatisfactory from many points of view, the first principle should be "do no harm". In many cases it might make sense to announce the switch to DevOps, force people to prepare for it (especially to discard old servers and unused applications in favor of virtual instances on a private or public cloud), and then cancel the implementation. You can probably achieve 80% of the positive effect this way while avoiding 80% of the negative effects. Moving the datacenter to a new location can also help tremendously and can be used instead of a DevOps implementation :-)

Again, the main problem with the DevOps hoopla is that, if implemented at full scale, it substantially increases the complexity of the environment. Both virtual machines and configuration management tools add extra "levels of indirection", which make troubleshooting more complex and the causes of failures more varied. That presents problems even for seasoned Unix sysadmins, to say nothing of poor developers who are thrown into this water and asked to swim.

Sometimes the DevOps methodology, implemented in modest scope, does provide some of the claimed benefits (and some level of configuration management in a large datacenter is a must). The question is what the optimal solution is and how not to overdo it. But in a typical DevOps implementation this question somehow does not arise, and the effort just degenerates into another round of centralization and outsourcing, which makes the situation worse -- often much worse, to the point where IT becomes completely dysfunctional. I have seen this effect of DevOps implementation in large corporations.

So it should be evaluated on a case-by-case basis, not treated as a panacea. As always, much depends on the talent of the people who try to implement it. Also, change in a large datacenter is exceedingly difficult and often degenerates into what can be called "one step forward, two steps back". For example, learning tools such as Puppet or Chef requires quite a lot of effort for a rather questionable return on investment, as their complexity precludes full utilization, and usage gets downsized to the basics. So automation using them is a mixed blessing.

Similarly, lightweight VMs (and virtual servers in general), which are part of the DevOps hype, are easier to deploy, but load management of multiple servers running on the same box is an additional and pretty complex task. Also, VMware, which dominates the VM scene, is expensive (which means that the lion's share of the savings goes to VMware, not to the enterprise which deploys it ;-) and is a bad VM for Linux. Linux needs para-virtualization, not the full CPU virtualization that VMware, which was designed for Windows, offers (with some interesting optimization tweaks). Docker, which is a rehash of the idea of Solaris zones, is a better deal, but it is a pretty new technology for Linux and it has its own limitations -- often severe ones, as it is a lightweight VM.

DevOps and junk science: the typical signs of "junk science"

Junk science is, and always was, based on cherry-picked evidence which has been carefully selected or edited to support a pre-selected "truth". Facts that do not fit the agenda are suppressed (Groupthink). Apocalyptic yelling is also very typical, as is Pollyanna creep. Deployment is typically top-down, with corporate management used as the enforcement branch (corporate Lysenkoism). Here are some signs of "junk science" (Eight Warning Signs of Junk Science):

Here is a non-exclusive list of eight symptoms to watch out for:
  1. Science by press release. It’s never, ever a good sign when ‘scientists’ announce dramatic results before publishing in a peer-reviewed journal. When this happens, we generally find out later that they were either self-deluded or functioning as political animals rather than scientists. This generalizes a bit; one should also be suspicious of, for example, science first broadcast by congressional testimony or talk-show circuit.
  2. Rhetoric that mixes science with the tropes of eschatological panic. When the argument for theory X slides from “theory X is supported by evidence” to “a terrible catastrophe looms over us if theory X is true, therefore we cannot risk disbelieving it”, you can be pretty sure that X is junk science. Consciously or unconsciously, advocates who say these sorts of things are trying to panic the herd into stampeding rather than focusing on the quality of the evidence for theory X.
  3. Rhetoric that mixes science with the tropes of moral panic. When the argument for theory X slides from “theory X is supported by evidence” to “only bad/sinful/uncaring people disbelieve theory X”, you can be even more sure that theory X is junk science. Consciously or unconsciously, advocates who say these sorts of things are trying to induce a state of preference falsification in which people are peer-pressured to publicly affirm a belief in theory X in spite of private doubts.
  4. Consignment of failed predictions to the memory hole. It’s a sign of sound science when advocates for theory X publicly acknowledge failed predictions and explain why they think they can now make better ones. Conversely, it’s a sign of junk science when they try to bury failed predictions and deny they ever made them.
  5. Over-reliance on computer models replete with bugger factors that aren’t causally justified.. No, this is not unique to climatology; you see it a lot in epidemiology and economics, just to name two fields that start with ‘e’. The key point here is that simply fitting historical data is not causal justification; there are lots of ways to dishonestly make that happen, or honestly fool yourself about it. If you don’t have a generative account of why your formulas and coupling constants look the way they do (a generative account which itself makes falsifiable predictions), you’re not doing science – you’re doing numerology.
  6. If a ‘scientific’ theory seems tailor-made for the needs of politicians or advocacy organizations, it probably has been. Real scientific results have a cross-grained tendency not to fit transient political categories. Accordingly, if you think theory X stinks of political construction, you’re probably right. This is one of the simplest but most difficult lessons in junk-science spotting! The most difficult case is recognizing that this is happening even when you agree with the cause.
  7. Past purveyers of junk science do not change their spots. One of the earliest indicators in many outbreaks of junk science is enthusiastic endorsements by people and advocacy organizations associated with past outbreaks. This one is particularly useful in spotting environmental junk science, because unreliable environmental-advocacy organizations tend to have long public pedigrees including frequent episodes of apocalyptic yelling. It is pardonable to be taken in by this the first time, but foolish by the fourth and fifth.
  8. Refusal to make primary data sets available for inspection. When people doing sound science are challenged to produce the observational and experimental data their theories are supposed to be based on, they do it. (There are a couple of principled exceptions here; particle physicists can’t save the unreduced data from particle collisions, there are too many terabytes per second of it.) It is a strong sign of junk science when a ‘scientist’ claims to have retained raw data sets but refuses to release them to critics.

If we are talking about DevOps as a software development methodology, it is similar to Agile. The latter was a rather successful attempt to reshuffle a set of old ideas (some worthwhile, some not so much) for fun and profit into an attractive, marketable technocult and milk the resulting "movement" with books, conferences, consulting, etc.

In 2017, only a few seasoned software developers believe that Agile is more than a self-promotion campaign by a group of unscrupulous and ambitious people who appointed themselves the high priests of this cult. The half-life of such "cargo cult" programming methodologies is usually around a decade, rarely two (Agile became fashionable around 1996-2000). Now it looks like Agile is well past the "hype stage" of the software methodology life cycle, and attempts to resurrect it with DevOps will fail.

Political dimension of the DevOps movement: DevOps as a smoke screen for outsourcing

DevOps carries certain political dimensions (a connection to the neoliberal transformation of society, with its fake "cult of creativity", the rise of the role (and income) of the top 1%, and the decline of the IT "middle class"). Outsourcing and additional layoffs are probably the most prominent result of the introduction of DevOps into real datacenters. So DevOps often serves as a Trojan horse for the switch to outsourcers and contract labor.

The whole idea that by adding some tools, VM-run virtual instances, and the additional management capabilities introduced by tools like Puppet, IT can be successfully commoditized and transferred to outsourcers and contractors is questionable.

No amount of ideological brainwashing can return the datacenter to the good old days of Unix minicomputers, when a single person was a master of all trades -- a developer, a system administrator and a tester. This is impossible due to the current complexity of the environment: there is a large gap between the level of knowledge of the (excessively complex) OS held by a typical sysadmin (say, with an RHCE certification) and by a typical developer. Attempts to narrow this gap via tools (and outsourcers), which is the essence of the DevOps movement, can go only so far.

But the most alarming tendency is that DevOps serves as a smoke screen for further outsourcing and for moving from a traditional datacenter to the cloud deployment model, with contractors as the major element of the workforce. In other words, DevOps is used by corporate brass as another way to cut costs (and the costs of IT in most large manufacturing corporations are already about 1% or less, so there is not much return achievable from this cost cutting anyway).

From 1992 to 2012 datacenters already experienced a huge technological reorganization, which might be called the Intel revolution. It dramatically increased the role of Intel-based computers in the datacenter, introduced new server form factors such as blades and new storage technologies such as SAN and NAS, and made Linux the most popular OS in the datacenter, displacing Solaris, AIX and HP-UX.

In addition, virtualization became common in the Windows world due to the proliferation of VMware instances.

Faster internet and wireless technologies allowed a more distributed workforce and the ability for people to work part of the week from home. Smartphones now exceed the power of a 1996 desktop. Moreover, there was already a distinct trend toward the consolidation of datacenters within large companies.

As a result, in multinationals (and all large companies) many services, such as email and, to a lesser extent, file storage, are already provided via an internal company cloud from central servers. At the same time it became clear that, along with the technical challenges, "cloud services" create a bottleneck at the WAN level and present a huge threat to security and privacy. The driving force behind the cloud is the desire to synchronize and access data from the several devices that people now own (desktop, laptop, smartphone, tablet) -- in other words, to provide access to user data from multiple devices (for example, email can be read on a smartphone and a laptop/desktop). The first such application, the ability to view corporate e-mail from a cell phone, essentially launched BlackBerry smartphones into prominence.

In view of those changes, managing a datacenter remotely became a distinct possibility. That's why DevOps serves higher management as a kind of "IT outsourcing manifesto". But with outsourcing, the problem of loyalty comes to the forefront.

DevOps and cargo cult science

Another concept relevant to the discussion of DevOps is cargo cult science. Cargo cult science comprises practices that have the semblance of being scientific but do not in fact follow the scientific method. The term was first used by physicist Richard Feynman during his 1974 commencement address at the California Institute of Technology. Software development provides fertile ground for cargo cult science. For example, talented software developers are often superstars who can follow methods, like continuous delivery, that are not suitable for "mere mortals". The same can happen with organizations that have some unique circumstances that make continuous delivery successful -- for example, if "development" consists of just small patches and bug fixes while most of the codebase remains static.

I think Bill Joy (of BSD Unix, csh, NFS, Java and vi editor fame) was such a software superstar when he produced the BSD tapes. He actually created the vi editor using a terminal over a 1200 baud modem (Bill Joy's greatest gift to man – the vi editor • The Register), which is excruciatingly slow -- a feat difficult if not impossible for a "mere mortal", if only for the lack of patience (the transmission speed of a 1200 baud modem is close to the speed at which a mechanical typewriter can print). The same is true of Donald Knuth, who singlehandedly created a Fortran compiler during one summer while still a student; of Ken Thompson, the father of Unix; and of Larry Wall, who created Perl while almost blind in one eye. But that does not mean that their practices are scalable. Brooks' book The Mythical Man-Month is as relevant today as it was at the moment of publication.

The biggest mistake you can make as a manager of a large and important software project is to delegate the key design functions to mediocre people. You can't replace software talent with organization, although organization helps; any attempt to claim otherwise is cargo cult science. In other words, in software engineering there is no replacement for displacement.

Connection to Agile 

Like Agile before it, DevOps emphasizes a culture of common goals (this time between "operations" and "developers", with the idea of merging them as in the good old times -- the initial name was NoOps -- which is a pretty questionable idea given the complexity of current IT infrastructure) and of getting things done together, presenting itself as a new IT culture.

Also, the details are fuzzy and contradictory; they vary from one "prophet" to another. That strongly suggests that, like many similar IT "fashions" before it, DevOps just means "a good idea" (remember the acerbic remark attributed to Mahatma Gandhi in reply to a self-confident Western journalist: to the question "What do you think of Western civilization?", he reportedly answered, "I think it would be a good idea" :-)

While it seems IT management is rushing to embrace the concept of DevOps (because it justifies further outsourcing under the smokescreen of new terms), nobody agrees on what it actually means. And that creates some skepticism.

DevOps paints a picture of two cultures ("operations" vs. "developers"), once at odds (the "glass datacenter"), now miraculously working together in harmony. But first of all, there are weak and strong developers, and there are weak and strong Unix sysadmins (and strong sysadmins are often not bad software developers in their own set of languages, mostly scripting languages).

The problem of the huge, excessive complexity of modern IT infrastructure can't be solved with some fashionable chair reshuffling. What actually happens is that the more talented members of the team get an additional workload. That's why some critics claim that DevOps kills developers -- meaning "talented developers". It can certainly be done, but it is easier said than done.

The fact that DevOps is somehow connected with Agile is pretty alarming and suggests that it might well be yet another "snake oil" initiative, with a bunch of talented and unprincipled salesmen who benefit from training courses, consulting gigs, published books, conferences, and other legal ways to extract money from lemmings.

When you read sentences like "DevOps as an environment where an agile relationship will take place between operations and development teams" (https://www.quora.com/What-is-devops-and-why-is-it-important), you quickly understand what type of people can benefit from DevOps.

The danger of provider lock-in

The key objection to DevOps is that reliance on super-platforms such as the Amazon cloud or Microsoft Azure could in the future intellectually capture the organization and the remaining sysadmins (and IT staff in general), who become increasingly distant from the "nuts and bolts" of the operating system and hardware and operate in what is essentially the proprietary environment of a particular vendor. That converts a Unix sysadmin into a flavor of Windows sysadmin with a less nice GUI. In other words, they need to put all their trust in the platform and detach themselves from the "nuts and bolts" level. That means that, in a way, they become as dependent on those platforms as opiate addicts on their drugs.

This also creates a set of DevOps promoters, such as cloud providers, who want to become "gatekeepers", binding users to their technology. Once they become non-displaceable, those gatekeepers make sure that the organization has lost the critical mass of technical IQ in "low level" (operating system level) infrastructure and can't abandon them without much pain.

At this point they start to use this dependency to their advantage. Typically they try to estimate customers' "willingness to pay" and, as a result, gradually increase the price of their services. IBM was a great practitioner of this fine art in the past. That's why everybody hated it. VMware has also proved to be quite adept in this art.

Not that the Amazon cloud is cheap. It is not, even if we count not only hardware and electricity savings, but also (rather generously, as in one sysadmin per 50 servers) the manpower savings it provides. It is often cheaper to run your own hardware within an internal cloud than on Amazon, unless you have huge peaks.

The same is even more true of VMware. If in a particular organization VMware is used as the virtualization platform for Linux (which, being open source, allows para-virtualization instead of the full virtualization that VMware implements), then one can talk about savings only when sufficiently drunk. Not that such savings do not exist. They do, but the lion's share of them goes directly to VMware, not to the organization which deploys the platform.

The danger is that you basically willingly allow yourself to be captured and become part of an ecosystem that is controlled by a single "gatekeeper." Such a decision creates an environment in which switching costs can be immense. That's why there is such competition among the three top players in the cloud provider space for new enterprise customers. The first to grab a particular customer is the one in control, and can milk such a customer for a long, long time. The classic DevOps advocates' response that "you should have negotiated better" is false, because people do not have enough information when they enter negotiations, and by the time they finally "get it", it is too late.

The need for middlemen (sysadmins) automatically arises when your system is way too complex

DevOps is presented by its adherents as an all-singing, all-dancing universal solution to all the problems of mankind or, at least, to current problems such as overcomplexity, alienation of developers, paralysis via excessive security, and the red tape that exists in the modern data center.

But the problem is that the level of complexity of modern IT is such that the division of labor between sysadmins and developers is not only necessary, it is vital for success.

Such hope ignores the fact that there is no "techno cure" for large datacenter problems, because those problems are not only technological in nature but also reflect a complex mix of sociological factors (the curse of overcomplexity is one; see The Collapse of Complex Societies; the neoliberal transformation of the enterprise, with its switch to outsourcing and contractor labor, is another) and, especially, the balance of power between various groups within the data center, such as corporate management, developers and operations staff.

This creates a pretty interesting mix from a sociological point of view and simultaneously creates a set of internal conflicts and a constant struggle for power between the various strata of the datacenter ecosystem. From this point of view, DevOps clearly represents a political victory for developers and management at the expense of other players, first of all system administrators.

In a way, this idea can be viewed as a replay (under a new name) of the old Sun idea "the network is the computer". That does not mean that there is no rational element in DevOps, such as the trend to merge individual servers into some kind of computational superstructure, as exemplified by the Amazon cloud and Azure with their complete switch to virtual instances and their attempt to diminish the role of "real hardware".

This trend has existed for quite a long time. For example, Sun Grid was just one early and successful attempt in this direction, which led to the creation of a whole class of computing environments now known as computational clusters. DevOps can be viewed as an attempt to create "application clusters".

To the extent that it is connected with advances in virtualization such as Solaris zones and Linux containers, it is not that bad: solid ideas getting some marketing push.

But the key question here is: can we really eliminate the sysadmin role in such a complex environment as, for example, modern Linux (as exemplified by RHEL 7)? Does a cloud environment such as Azure give the developer the possibility to get rid of the sysadmin and do it all by himself/herself?

No, but it is true that with virtualization the level of dependency can be lower than in a classic datacenter with real servers. One hugely positive thing from the point of view of developers is that in virtual instances in a cloud environment they have root access. That really frees them from the shackles imposed by the fact that on "real servers" only the sysadmin has root access, and it can be granted to a developer only temporarily and with some red tape involved.

But for important production servers that is a double-edged sword. If the developer does not fully know the ropes (and with the complexity of RHEL 7 he just can't), he/she can bring the server down in no time with one reckless action, replaying various Sysadmin Horror Stories on a new level. So the line of defense in the form of the sysadmin is now absent, and the environment is no less complex than in a classical datacenter, especially with RHEL 7.

There is another aspect of cloud technology. An organization cannot productively use a technology platform that is far above the technical capabilities of its staff, and if you try, at the end of the day you will suffer an unpredictable set of negative consequences. The elimination of sysadmins lowers the level of technical competence in the enterprise dramatically. The sooner management realizes that this is fool's gold, the better. After all, people have only one head, and companies like Red Hat ignore this limitation with the monstrous complexity of RHEL 7 (which added the systemd daemon).

These effects are not limited to cost overruns (which can usually be swept under the carpet). There are additional effects, somewhat similar to those predicted by the first two Softpanorama Laws of Computer Security:

  1. In the long run the level of security of any large enterprise Unix environment cannot be significantly different from the average level of qualification of the system administrators responsible for this environment...
     
  2. If a large discrepancy between the level of qualification of system administrators and the level of Computer Security of the system or network exists, the main trend is toward restoring equilibrium at some, not so distant, point...

In other words, elimination of the sysadmin level means that the technology can't be fully utilized because of lack of understanding, including lack of understanding of security on the part of developers.

The problem of reckless, power-hungry developers

But there are also other reasons why this mode of work is not reproducible in the modern datacenter. The maximum you can achieve (and that is a pretty complex undertaking) is to remove some artificial bureaucratic barriers and make the developer more productive, because he can control more of the whole process and does not bump his head into bureaucratic obstacles when he needs to make some changes in the environment in addition to his application.

But it is very important to understand that there are good developers and bad developers. A reckless, power-hungry developer, who will exploit those new opportunities to the full, is a great danger. By removing bureaucratic obstacles you also remove some safeguards against such developers. There is no free lunch.


In other words, the idea of even a partial elimination of the specialization between developers and operations staff is a dangerous illusion, given the complexity of modern operating systems. Giving a talented developer his own sysadmin and tester makes sense; making him his own sysadmin and tester is borderline idiocy. Talented developers are a very rare commodity, and their use should be highly optimized. Re-reading The Mythical Man-Month helps to get the proper line of thinking about this problem.

The idea of removing extra red tape also has some validity, but here you are facing an uphill battle against entrenched IT management interests, which try to isolate themselves from the people in the trenches with additional management layers (and those "redundant" management layers are a kind of cancer of IT departments, because such middlemen tend to replicate like rabbits, in uncontrollable fashion).

High-level IT management wants technoserfs to work like slaves on this techno-plantation so that it can collect its bonuses. And it wants to achieve that without direct contact with those "badly smelling people", creating the middle-management positions that isolate it from them. But in order to pay those middlemen the herd of technoserfs needs to be thinned, as private profits and bonuses for the brass come before efficiency here :-) Generally, the question of parasitic rents within the IT organization is a very interesting topic, but it is beyond the scope of this paper.

Large datacenters and large software development projects are inherently complex, so going over budget and taking twice as long as planned is a more or less typical course of events in a large software project. The larger danger here is a complete fiasco, such as happened in the past to over-enthusiastic idiots who implemented SAP in their companies. This type of failure can actually crash the whole company and put it out of business.

Any claim that those fundamental issues can somehow be eliminated with cosmetic measures ("magic dust") is typical snake-oil salesmanship. Software development talent, especially high-level software development talent that also encompasses a strong architectural vision, is a very scarce commodity. If you have no such developers, the chances of successful completion of a complex project are slim. No superficial remedies can solve this tremendously difficult problem. Huge blunders by incompetent high-level IT managers are also a reality.

In short, the key problem of the modern datacenter and of modern software applications is the lack of top software talent, both at the level of software developers and at the level of high IT management, as well as the mind-boggling complexity of both the operating systems and the applications. They are probably the most complex artifacts ever created by mankind. And that means it is unclear whether the better solution lies in increasing the complexity or decreasing it. I tend to think that the latter might be the better path forward (the KISS principle), which puts me on a collision course with the DevOps hoopla. Your mileage may vary.

Cost savings are often an illusion, and are often achieved by failing to calculate recurrent costs

Cost-wise, DevOps (typically implemented with the help of outsourcers) is a rather expensive proposition which delivers both higher complexity and lower reliability. Outsourcing costs are usually low only at the beginning, as most vendors perform a "bait and switch": underbidding their services and then trying to compensate by inflating operating costs whenever possible, especially in areas where the local know-how has disappeared.

So the benefits, if they exist, are not automatic, but depend on the qualifications of the people implementing the initiative.

VM problems, while relatively rare, are still an unwelcome addition to the set of existing problems (also, if the hypervisor or the server goes down, all VMs go down with it). Using VMs is very similar to replacing enterprise servers with desktops, but at a higher cost (BTW, "disposable desktops" are an interesting alternative to VMware VMs cost-wise). The cost of a "decent" desktop ($500-$800) is now one tenth of the cost of a "decent" enterprise server ($6K-$8K), but ten desktops can definitely do more computation-wise.

Adding configuration management tools such as Puppet to the mix works for multiple desktops too.

Why DevOps emerged and what problem it addresses

DevOps was a reaction to the problems of companies that now need to maintain thousands of servers, such as Netflix, Amazon, Facebook and Google.

So DevOps is intrinsically connected with attempts to merge servers into some computational superstructure and to automate system administration using Unix configuration management tools. BTW, that does not mean that Puppet or Chef are the proper way to do it (the king might still be naked ;-). They just bask in DevOps glory, and try to sustain and maintain the DevOps hype for fun and profit. The core idea these tools implement is sketched below.
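
To make the "declare desired state, converge to it" idea behind tools like Puppet and Chef concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a replacement for those tools: the package list is hypothetical, and a Debian-style system with dpkg/apt-get is assumed; real tools add dependency ordering, templating, reporting and much more.

```python
#!/usr/bin/env python3
"""Toy convergence loop: declare the desired state, then make the
machine match it. The core idea behind Puppet/Chef, stripped bare."""
import subprocess

# Desired state: these packages must be present (hypothetical list).
DESIRED_PACKAGES = ["openssh-server", "rsync"]

def is_installed(pkg: str) -> bool:
    """Query dpkg for the package state; assumes a Debian-style system."""
    result = subprocess.run(["dpkg", "-s", pkg], capture_output=True)
    return result.returncode == 0

def converge() -> None:
    """Idempotent: installs only what is missing and touches nothing else,
    so running it twice is as safe as running it once."""
    for pkg in DESIRED_PACKAGES:
        if is_installed(pkg):
            print(f"{pkg}: already in desired state")
        else:
            print(f"{pkg}: installing")
            subprocess.run(["apt-get", "install", "-y", pkg], check=True)

if __name__ == "__main__":
    converge()
```

The same manifest-style loop runs unchanged on ten desktops or a thousand servers; that scale-independence, not any particular tool, is what "infrastructure as code" buys.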

The second part is an attempt to break the isolated and fossilized silos in which developers and operations staff live in a large corporation, sometimes with almost no interaction. So in a way this is an over-reaction to a dysfunctional ops organization (as Adrian Cockcroft admitted in his comment), which actually demonstrates that he does not understand how complex organizations operate (his idea of abolishing operations is just an attempt at a power grab by the "developer class") and never read Charles Perrow's book (Complex Organizations: A Critical Essay). In a complex organization, technological issues always intersect with power struggles and with the effects of neoliberalism on IT and the corporate environment in general. Blowback from the excessive outsourcing of IT is hitting large corporations and is part of the problem that IT organizations experience now. DevOps in this sense is part of the problem, not part of the solution. You will never read about this in the "DevOps hoopla" style of books that proliferate (Amazon lists around a hundred books on this topic; most are junk). In reality globalization, while solving one set of problems, creates another, and here IT is both the victim and a part of this "march of neoliberalism" over the globe:

Information and communications technologies are part of the infrastructure of globalization in finance, capital mobility and transnational business. Major changes in the international economic landscape are intertwined and contemporary accelerated globalization is in effect a package deal that includes informatization (applications of information technology), flexibilization (changes in production and labour associated with post-Fordism), financialization (the growing importance of financial instruments and services) and deregulation or liberalization (unleashing market forces). This package effect contributes to the dramatic character of the changes associated with globalization, which serves as their shorthand description. 

A good early overview of why DevOps emerged (when it was still called NoOps) was given in 2012 by Mike Loukides, who wrote a historically important article on the subject:

Adrian Cockcroft’s article about NoOps at Netflix ignited a controversy that has been smoldering for some months. John Allspaw’s detailed response to Adrian’s article makes a key point: What Adrian described as “NoOps” isn’t really. Operations doesn’t go away. Responsibilities can, and do, shift over time, and as they shift, so do job descriptions. But no matter how you slice it, the same jobs need to be done, and one of those jobs is operations. What Adrian is calling NoOps at Netflix isn’t all that different from Operations at Etsy. But that just begs the question: What do we mean by “operations” in the 21st century? If NoOps is a movement for replacing operations with something that looks suspiciously like operations, there’s clearly confusion. Now that some of the passion has died down, it’s time to get to a better understanding of what we mean by operations and how it’s changed over the years.

At a recent lunch, John noted that back in the dawn of the computer age, there was no distinction between dev and ops. If you developed, you operated. You mounted the tapes, you flipped the switches on the front panel, you rebooted when things crashed, and possibly even replaced the burned out vacuum tubes. And you got to wear a geeky white lab coat. Dev and ops started to separate in the ’60s, when programmer/analysts dumped boxes of punch cards into readers, and “computer operators” behind a glass wall scurried around mounting tapes in response to IBM JCL. The operators also pulled printouts from line printers and shoved them in labeled cubbyholes, where you got your output filed under your last name.

The arrival of minicomputers in the 1970s and PCs in the ’80s broke down the wall between mainframe operators and users, leading to the system and network administrators of the 1980s and ’90s. That was the birth of modern “IT operations” culture. Minicomputer users tended to be computing professionals with just enough knowledge to be dangerous. (I remember when a new director was given the root password and told to “create an account for yourself” … and promptly crashed the VAX, which was shared by about 30 users). PC users required networks; they required support; they required shared resources, such as file servers and mail servers. And yes, BOFH (“Bastard Operator from Hell”) serves as a reminder of those days. I remember being told that “no one” else is having the problem you’re having — and not getting beyond it until at a company meeting we found that everyone was having the exact same problem, in slightly different ways. No wonder we want ops to disappear. No wonder we wanted a wall between the developers and the sysadmins, particularly since, in theory, the advent of the personal computer and desktop workstation meant that we could all be responsible for our own machines.

But somebody has to keep the infrastructure running, including the increasingly important websites. As companies and computing facilities grew larger, the fire-fighting mentality of many system administrators didn’t scale. When the whole company runs on one 386 box (like O’Reilly in 1990), mumbling obscure command-line incantations is an appropriate way to fix problems. But that doesn’t work when you’re talking hundreds or thousands of nodes at Rackspace or Amazon. From an operations standpoint, the big story of the web isn’t the evolution toward full-fledged applications that run in the browser; it’s the growth from single servers to tens of servers to hundreds, to thousands, to (in the case of Google or Facebook) millions. When you’re running at that scale, fixing problems on the command line just isn’t an option. You can’t afford letting machines get out of sync through ad-hoc fixes and patches. Being told “We need 125 servers online ASAP, and there’s no time to automate it” (as Sascha Bates encountered) is a recipe for disaster.

The response of the operations community to the problem of scale isn’t surprising. One of the themes of O’Reilly’s Velocity Conference is “Infrastructure as Code.” If you’re going to do operations reliably, you need to make it reproducible and programmatic. Hence virtual machines to shield software from configuration issues. Hence Puppet and Chef to automate configuration, so you know every machine has an identical software configuration and is running the right services. Hence Vagrant to ensure that all your virtual machines are constructed identically from the start. Hence automated monitoring tools to ensure that your clusters are running properly. It doesn’t matter whether the nodes are in your own data center, in a hosting facility, or in a public cloud. If you’re not writing software to manage them, you’re not surviving.

Furthermore, as we move further and further away from traditional hardware servers and networks, and into a world that’s virtualized on every level, old-style system administration ceases to work. Physical machines in a physical machine room won’t disappear, but they’re no longer the only thing a system administrator has to worry about. Where’s the root disk drive on a virtual instance running at some colocation facility? Where’s a network port on a virtual switch? Sure, system administrators of the ’90s managed these resources with software; no sysadmin worth his salt came without a portfolio of Perl scripts. The difference is that now the resources themselves may be physical, or they may just be software; a network port, a disk drive, or a CPU has nothing to do with a physical entity you can point at or unplug. The only effective way to manage this layered reality is through software.

So infrastructure had to become code. All those Perl scripts show that it was already becoming code as early as the late ’80s; indeed, Perl was designed as a programming language for automating system administration. It didn’t take long for leading-edge sysadmins to realize that handcrafted configurations and non-reproducible incantations were a bad way to run their shops. It’s possible that this trend means the end of traditional system administrators, whose jobs are reduced to racking up systems for Amazon or Rackspace. But that’s only likely to be the fate of those sysadmins who refuse to grow and adapt as the computing industry evolves. (And I suspect that sysadmins who refuse to adapt swell the ranks of the BOFH fraternity, and most of us would be happy to see them leave.) Good sysadmins have always realized that automation was a significant component of their job and will adapt as automation becomes even more important. The new sysadmin won’t power down a machine, replace a failing disk drive, reboot, and restore from backup; he’ll write software to detect a misbehaving EC2 instance automatically, destroy the bad instance, spin up a new one, and configure it, all without interrupting service. With automation at this level, the new “ops guy” won’t care if he’s responsible for a dozen systems or 10,000. And the modern BOFH is, more often than not, an old-school sysadmin who has chosen not to adapt.
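
To make the paragraph above concrete, here is a minimal sketch (emphatically not Netflix's actual code) of the detect-destroy-respawn loop Loukides describes. It assumes the boto3 AWS library, credentials configured in the environment, and a hypothetical AMI ID; a real implementation would add application-level health checks, retries, and re-registration with the load balancer.

```python
#!/usr/bin/env python3
"""Sketch of 'detect a misbehaving EC2 instance, destroy it, spin up
a replacement'. Illustrative only; all identifiers are hypothetical."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
AMI_ID = "ami-0123456789abcdef0"  # hypothetical image with the app baked in

def impaired_instances() -> list:
    """Return IDs of instances that fail EC2's own status checks."""
    resp = ec2.describe_instance_status()
    return [s["InstanceId"] for s in resp["InstanceStatuses"]
            if s["InstanceStatus"]["Status"] == "impaired"]

def replace(instance_id: str) -> None:
    """Terminate the bad instance and launch an identically built one."""
    ec2.terminate_instances(InstanceIds=[instance_id])
    ec2.run_instances(ImageId=AMI_ID, InstanceType="m5.large",
                      MinCount=1, MaxCount=1)

if __name__ == "__main__":
    for iid in impaired_instances():
        print(f"replacing impaired instance {iid}")
        replace(iid)
```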

James Urquhart nails it when he describes how modern applications, running in the cloud, still need to be resilient and fault tolerant, still need monitoring, still need to adapt to huge swings in load, etc. But he notes that those features, formerly provided by the IT/operations infrastructures, now need to be part of the application, particularly in “platform as a service” environments. Operations doesn’t go away, it becomes part of the development. And rather than envision some sort of uber developer, who understands big data, web performance optimization, application middleware, and fault tolerance in a massively distributed environment, we need operations specialists on the development teams. The infrastructure doesn’t go away — it moves into the code; and the people responsible for the infrastructure, the system administrators and corporate IT groups, evolve so that they can write the code that maintains the infrastructure. Rather than being isolated, they need to cooperate and collaborate with the developers who create the applications. This is the movement informally known as “DevOps.”

Amazon’s EBS outage last year demonstrates how the nature of “operations” has changed. There was a marked distinction between companies that suffered and lost money, and companies that rode through the outage just fine. What was the difference? The companies that didn’t suffer, including Netflix, knew how to design for reliability; they understood resilience, spreading data across zones, and a whole lot of reliability engineering. Furthermore, they understood that resilience was a property of the application, and they worked with the development teams to ensure that the applications could survive when parts of the network went down. More important than the flames about Amazon’s services are the testimonials of how intelligent and careful design kept applications running while EBS was down. Netflix’s ChaosMonkey is an excellent, if extreme, example of a tool to ensure that a complex distributed application can survive outages; ChaosMonkey randomly kills instances and services within the application. The development and operations teams collaborate to ensure that the application is sufficiently robust to withstand constant random (and self-inflicted!) outages without degrading.

On the other hand, during the EBS outage, nobody who wasn’t an Amazon employee touched a single piece of hardware. At the time, JD Long tweeted that the best thing about the EBS outage was that his guys weren’t running around like crazy trying to fix things. That’s how it should be. It’s important, though, to notice how this differs from operations practices 20, even 10 years ago. It was all over before the outage even occurred: The sites that dealt with it successfully had written software that was robust, and carefully managed their data so that it wasn’t reliant on a single zone. And similarly, the sites that scrambled to recover from the outage were those that hadn’t built resilience into their applications and hadn’t replicated their data across different zones.

In addition to this redistribution of responsibility, from the lower layers of the stack to the application itself, we’re also seeing a redistribution of costs. It’s a mistake to think that the cost of operations goes away. Capital expense for new servers may be replaced by monthly bills from Amazon, but it’s still cost. There may be fewer traditional IT staff, and there will certainly be a higher ratio of servers to staff, but that’s because some IT functions have disappeared into the development groups. The bonding is fluid, but that’s precisely the point. The task — providing a solid, stable application for customers — is the same. The locations of the servers on which that application runs, and how they’re managed, are all that changes.

One important task of operations is understanding the cost trade-offs between public clouds like Amazon’s, private clouds, traditional colocation, and building their own infrastructure. It’s hard to beat Amazon if you’re a startup trying to conserve cash and need to allocate or deallocate hardware to respond to fluctuations in load. You don’t want to own a huge cluster to handle your peak capacity but leave it idle most of the time. But Amazon isn’t inexpensive, and a larger company can probably get a better deal taking its infrastructure to a colocation facility. A few of the largest companies will build their own datacenters. Cost versus flexibility is an important trade-off; scaling is inherently slow when you own physical hardware, and when you build your data centers to handle peak loads, your facility is underutilized most of the time. Smaller companies will develop hybrid strategies, with parts of the infrastructure hosted on public clouds like AWS or Rackspace, part running on private hosting services, and part running in-house. Optimizing how tasks are distributed between these facilities isn’t simple; that is the province of operations groups. Developing applications that can run effectively in a hybrid environment: that’s the responsibility of developers, with healthy cooperation with an operations team.

The use of metrics to monitor system performance is another respect in which system administration has evolved. In the early ’80s or early ’90s, you knew when a machine crashed because you started getting phone calls. Early system monitoring tools like HP’s OpenView provided limited visibility into system and network behavior but didn’t give much more information than simple heartbeats or reachability tests. Modern tools like DTrace provide insight into almost every aspect of system behavior; one of the biggest challenges facing modern operations groups is developing analytic tools and metrics that can take advantage of the data that’s available to predict problems before they become outages. We now have access to the data we need, we just don’t know how to use it. And the more we rely on distributed systems, the more important monitoring becomes. As with so much else, monitoring needs to become part of the application itself. Operations is crucial to success, but operations can only succeed to the extent that it collaborates with developers and participates in the development of applications that can monitor and heal themselves.
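
As an illustration of monitoring moving into the application itself, here is a minimal sketch, using only the Python standard library, of an application that reports its own internal metrics over a /health endpoint instead of relying on an external heartbeat; the counters and port are hypothetical.

```python
#!/usr/bin/env python3
"""Toy example of application-embedded monitoring: the app exposes
a /health endpoint that reports its own internal metrics as JSON."""
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START = time.time()
COUNTERS = {"requests": 0, "errors": 0}  # hypothetical internal metrics

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        COUNTERS["requests"] += 1
        if self.path == "/health":
            # The application reports on itself instead of waiting for
            # an external heartbeat check to notice it is in trouble.
            body = json.dumps({
                "uptime_seconds": round(time.time() - START, 1),
                "requests": COUNTERS["requests"],
                "error_rate": COUNTERS["errors"] / COUNTERS["requests"],
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            COUNTERS["errors"] += 1
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # A monitoring system can now scrape http://127.0.0.1:8080/health
    HTTPServer(("127.0.0.1", 8080), Health).serve_forever()
```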

Success isn’t based entirely on integrating operations into development. It’s naive to think that even the best development groups, aware of the challenges of high-performance, distributed applications, can write software that won’t fail. On this two-way street, do developers wear the beepers, or IT staff? As Allspaw points out, it’s important not to divorce developers from the consequences of their work since the fires are frequently set by their code. So, both developers and operations carry the beepers. Sharing responsibilities has another benefit. Rather than finger-pointing post-mortems that try to figure out whether an outage was caused by bad code or operational errors, when operations and development teams work together to solve outages, a post-mortem can focus less on assigning blame than on making systems more resilient in the future. Although we used to practice “root cause analysis” after failures, we’re recognizing that finding out the single cause is unhelpful. Almost every outage is the result of a “perfect storm” of normal, everyday mishaps. Instead of figuring out what went wrong and building procedures to ensure that something bad can never happen again (a process that almost always introduces inefficiencies and unanticipated vulnerabilities), modern operations designs systems that are resilient in the face of everyday errors, even when they occur in unpredictable combinations.

In the past decade, we’ve seen major changes in software development practice. We’ve moved from various versions of the “waterfall” method, with interminable up-front planning, to “minimum viable product,” continuous integration, and continuous deployment. It’s important to understand that the waterfall and methodology of the ’80s aren’t “bad ideas” or mistakes. They were perfectly adapted to an age of shrink-wrapped software. When you produce a “gold disk” and manufacture thousands (or millions) of copies, the penalties for getting something wrong are huge. If there’s a bug, you can’t fix it until the next release. In this environment, a software release is a huge event. But in this age of web and mobile applications, deployment isn’t such a big thing. We can release early, and release often; we’ve moved from continuous integration to continuous deployment. We’ve developed techniques for quick resolution in case a new release has serious problems; we’ve mastered A/B testing to test releases on a small subset of the user base.

All of these changes require cooperation and collaboration between developers and operations staff. Operations groups are adopting, and in many cases, leading in the effort to implement these changes. They’re the specialists in resilience, in monitoring, in deploying changes and rolling them back. And the many attendees, hallway discussions, talks, and keynotes at O’Reilly’s Velocity conference show us that they are adapting. They’re learning about adopting approaches to resilience that are completely new to software engineering; they’re learning about monitoring and diagnosing distributed systems, doing large-scale automation, and debugging under pressure. At a recent meeting, Jesse Robbins described scheduling EMT training sessions for operations staff so that they understood how to handle themselves and communicate with each other in an emergency. It’s an interesting and provocative idea, and one of many things that modern operations staff bring to the mix when they work with developers.

What does the future hold for operations? System and network monitoring used to be exotic and bleeding-edge; now, it’s expected. But we haven’t taken it far enough. We’re still learning how to monitor systems, how to analyze the data generated by modern monitoring tools, and how to build dashboards that let us see and use the results effectively. I’ve joked about “using a Hadoop cluster to monitor the Hadoop cluster,” but that may not be far from reality. The amount of information we can capture is tremendous, and far beyond what humans can analyze without techniques like machine learning.

Likewise, operations groups are playing a huge role in the deployment of new, more efficient protocols for the web, like SPDY. Operations is involved, more than ever, in tuning the performance of operating systems and servers (even ones that aren’t under our physical control); a lot of our “best practices” for TCP tuning were developed in the days of ISDN and 56 Kbps analog modems, and haven’t been adapted to the reality of Gigabit Ethernet, OC48* fiber, and their descendants. Operations groups are responsible for figuring out how to use these technologies (and their successors) effectively. We’re only beginning to digest IPv6 and the changes it implies for network infrastructure. And, while I’ve written a lot about building resilience into applications, so far we’ve only taken baby steps. There’s a lot there that we still don’t know. Operations groups have been leaders in taking best practices from older disciplines (control systems theory, manufacturing, medicine) and integrating them into software development.

And what about NoOps? Ultimately, it’s a bad name, but the name doesn’t really matter. A group practicing “NoOps” successfully hasn’t banished operations. It’s just moved operations elsewhere and called it something else. Whether a poorly chosen name helps or hinders progress remains to be seen, but operations won’t go away; it will evolve to meet the challenges of delivering effective, reliable software to customers. Old-style system administrators may indeed be disappearing. But if so, they are being replaced by more sophisticated operations experts who work closely with development teams to get continuous deployment right; to build highly distributed systems that are resilient; and yes, to answer the pagers in the middle of the night when EBS goes down. DevOps.


Supplement

PSYCHOLOGY OF SPIRITUAL SECTS. The psychological dynamics underlying the creation and growth of spiritual movements.

The main features

What are the features of psychological influence most common to spiritual movements?


1. Type of members.

There are many types of members, each with their own motivation.

The weaker the individual's independence, the more he will be tied to the group. Members who understand group mechanisms, and are prepared to cope with them in order to direct their attention to the spirit, will benefit most, as they are selective in picking up the cream of what is given and taking the rest with a grain of salt.

2. Leader/founder/guru

New religious movements usually arise around a father/mother figure who has gained authority after receiving a special revelation, communication, truth or insight. His charisma will vouchsafe him loyal followers, even if his lifestyle may give rise to severe doubts in some. He may boost his prestige by claiming to follow in the footsteps of an esteemed spiritual teacher, represent an esoteric tradition, be of noble descent, or channel the wisdom of a great mind. (Eckankar's Paul Twitchell is the last in a lineage of 970 "Eckmasters".)
He/she represents an archetype in members' subconscious minds: that of a wise father or mother. As such he/she will have a compelling influence on followers who project their father/mother complex onto him/her.

Alternatively, women may fall in love with the leader, worship him, and exert themselves to cater to his wishes and whims. They will try to stay in his vicinity, make themselves indispensable, and slowly take control of the movement. Jealousy amongst them will make things even worse and split the ranks.

The psychological make-up of a guru may be generalized as follows:

Jeffrey Masson (see below) has this to say about gurus:

Every guru claims to know something you cannot know by yourself or through ordinary channels. All gurus promise access to a hidden reality if only you will follow their teaching, accept their authority, hand your life over to them. Certain questions are off limits. There are things you cannot know about the guru and the guru's personal life. Every doubt about the guru is a reflection of your own unworthiness, or the influence of an external evil force. The more obscure the action of the guru, the more likely it is to be right, to be cherished. Ultimately you cannot admire the guru, you must worship him. You must obey him, you must humble yourself, for the greater he is, the less you are - until you reach the inner circle and can start abusing other people the way your guru abused you. All this is in the very nature of being a guru.

Sub-conscious drive

Nature seems to instill in a person, faced with a mission, great task, or challenge, a feeling of superiority, insurmountable optimism, and enormous self-esteem, bordering on an inflated ego, to accomplish what is needed. This drive is reminiscent of the reckless impetus of the adolescent. Having reached maturity a person may feel "chosen" - impelled to forge ahead with vigor and inspire others. Undaunted in the face of obstacles and criticism, it is as if a cloak of invulnerability is laid on his/her shoulders.
Similarly an artist may be driven by a compulsion to express an inner content. He will be prepared to sacrifice everything to give way to his creative impulse. Fortunately his sacrifice does not involve more than the people immediately around him.

Not so with the leader. The number of his followers may grow to considerable proportions. Nature is not concerned whether his sense of superiority has any real foundation. The inflated ego is more or less instinctively driven towards a goal.
Although he may attain heights no one would have thought conceivable of that person, when the hour of truth comes events may prove that he has overreached himself, disregarded good advice, or completely lost his sense of reality. The result may be either catastrophe, or uncritical followers saddled with a heritage built on quicksand - on a flight of fancy without actual foundation.
This applies to many fields of human endeavour (Hitler), but specially in the treacherous domain of the spirit.

Discipline - nausea

The teacher may come to the conclusion that unless his followers change fundamentally - undergo a catharsis, or transformation - they will never be able to move forward. He/she regards them as being "asleep" (Jesus, Gurdjieff). Unless drastic measures are employed they will not wake up. To jolt them out of their complacency great sacrifices are demanded. Jesus asked a rich young man to give up all his worldly possessions (S.Matthew 19:21) before following him. Masters in Zen Buddhism, or Gurdjieff, made novices undergo a harsh regime in order to crack open and attain a different state of mind.

This I can have no quarrel with, if it is done against a background of compassion. If the unselfish motive disappears, or commercial considerations become dominant, the harsh discipline may become morbid and degrading. Having lost his dedication, the teacher may become nauseated by the mentality and sheepishness of his followers, and in some cases derive a sadistic delight in tormenting them.

In recent years reports have come out about the sexual violation of members by gurus, leaders and... bishops! Another example of authority being abused.

The path of a guru is like walking a razor's edge. He may all too easily succumb to the temptation of exploiting the power he has attained over his followers. Financial irresponsibility, abuse of followers, reprehensible sexual behaviour... even mass suicide: it is all within his reach once he has overstepped the boundaries.

Legacy

During his lifetime the leader will act as a moderator and steer the movement. He will reinterpret his teachings as he sees fit from the responses he receives. The death of the founder marks a turning point. His teachings will become inflexible, as no one dares to tamper with them as he did himself. The élan disappears and rigidity takes over, unless another figure arises who leads the movement in a different direction, for better or for worse (St. Paul).

3. Doctrine/teaching

The more secret(ive) the leader's sayings, the better. Pronouncements are characterized by great certainty and authority, as if they were the word of God. In some cases they are presented as such. Because of his special way of delivery and presentation, it may escape the audience that similar wisdom may be found in any book on spirituality in the bookshop around the corner.

Whether the guru bases his wise words on actual experience or on hearsay is difficult to ascertain. In general it may be said that the more mystifying his teachings, the stronger their appeal. After all, they are beyond reason and should appeal only to the heart.
An exception should be made for true mystical literature based on inner experience, which can hardly be expected to appeal to the intellect, but is appreciated intuitively, especially by those who have had similar experiences.

Group-speak

Members may adopt fresh meanings to words, talk to each other in a jargon that the outsider can hardly follow (group-speak). The result being an inability to relate in speech, or explain new concepts to the outsider (Fourth Way).
(This may be best understood in other fields: help-programs of software, pop-up windows, warning-messages, not to speak of manuals for installing hardware, drawn up by boffins, are a nightmare to most users!)

Another characteristic is to lift one aspect of religious truth out of context and make it absolute. Such a key truth will overshadow all other aspects of faith.
It may be:

etc. When this occurs other significant facets of faith are pushed to the background.

Such partial truths are often heralded as the result of a search for knowledge. The motto "Knowledge is power" is used to suggest that the statements are objective, scientific, or historical facts. Actually they cannot withstand even the merest critical scrutiny.
Authorities may be paraded to back up such claims. They have either never been heard of, cannot be considered impartial, or their pronouncements have been lifted out of context. The discussion about the veracity of evolution is full of such red herrings.

4. Uniqueness of the movement

Movements will usually extol their superiority over others. After all, there should be a strong reason to select that particular group. Some present themselves as the sole way to salvation, being God's chosen people. Others promise a benefit that is reserved only for members of that sect. To avert attention some pride themselves on not having a teaching at all, or on their openness and democratic rules.

In short new movements will advance a variety of reasons for their uniqueness. Herewith a few:

Noteworthy is the vehemence with which groups stress the differences between each other. The more closely movements share an outlook, the more virulent the attacks on their rivals become, seemingly more so than on groups which follow a completely different belief.

Eric Hoffer writes in his 'The True Believer': "true believers of various hues ....view each other with mortal hatred and are ready to fly at each other's throat..."

This manifests itself especially when groups split. In Christianity one could not stoop low enough to attack other followers of Christ who held a slightly different opinion. It resulted in the persecution of heretics, the burning of early Christian literature, and disastrous wars.

Despite their peaceful appearance relatively new spiritual movements like Theosophy, Rosicrucianism, etc., following splits, exert themselves in accusations against former comrades.

Attacks against belief in paranormal phenomena, for instance by CSICOP, are reminiscent of the zeal of a Christian crusade, except that they have their roots in humanism and its desperate clinging to the rationalistic/materialistic outlook on life current at the beginning of this century. Consequently the groups of these 'evangelists of rational enlightenment' show behavioural patterns and a vehemence similar to sects.

5. Probation and conversion

Certain sects are only too eager to accept individuals. They may have high entrance fees. Or their members are swayed by zeal to convert.

Many movements will put up a barrier by means of an initiation to test the applicant's fitness to become part of the group. Henceforth they will play an important pioneer-part in the foretold future. Having reached such coveted stage members will not fail to follow what they are being told for fear of expulsion.

The new member may undergo a conversion, gaining a completely new insight into the meaning of life, seeing it the way the sect does. His previous life, with all its relationships, has become meaningless. He may have turned himself inside out by a confession of his previous "sins". His conversion is marked by a feeling of peace, happiness and transcendence.

6. Failure of predictions

Common belief in a prophecy is a strong binding force. One of the principal attractions of the first Christian sects was that they offered salvation from a threatening disaster: the end of the world. Only the baptized could await a glorious future. Sects like the Jehovah's Witnesses have taken over this successful formula.

Christians have had to come up with all sorts of arguments to explain away the unfulfilled prediction of their founder regarding the end of the world: "This generation shall not pass away, till all these things be accomplished." (S.Matthew 24:34). One of the lame excuses is that this prediction concerns only the fall of Jerusalem. However, all prophecies in the New Testament in this respect suggest that the impending doom was expected in their lifetime.

Jehovah's Witnesses have taken the risk of being more specific in their predictions. Older members, who built their faith on them, have had the humiliating experience of having to explain away, several times in their lives, the failure of those forewarnings.

But predictions are not limited to the religious faiths. The New Age movements use this shared belief in portents as well. For more than sixty years an imminent landing of UFOs has been predicted. Various cults claimed in vain to be their first contactees.
In other movements the second coming of Christ was a main feature (Benjamin Creme). In Theosophy a Messenger was expected from 1975 onward.

The uncritical believers in Edgar Cayce's trance sayings put weight on his predictions of cataclysms.

Nostradamus' obscure astrological prophecies have captured the minds of people for centuries. Each time his verses are reinterpreted to suit the circumstances. In hindsight some of his quatrains seem to have relevance to the catastrophe of the destroyed World Trade Center: quatrains I,87, IX,92 and X,59 may refer to skyscrapers in New York involved in a terrible explosion.

Sociologists have observed that failure of a prediction can have quite the opposite effect on believers. Contrary to what one would expect, it may cause a rally amongst members. Failure is blamed on a misunderstanding, or a faux pas by members. To counteract ridicule they tend to stick together more than ever.

Of course there is a limit. According to one social survey, when predictions fail to materialize three times in a row, members are bound to stop, reflect and draw conclusions.
The shattering of such false hopes comes as a severe blow and may mark the beginning of the end of a movement.
One wonders in this respect how many members of the People Forever International sect, which promoted physical immortality for its followers, would have to die before their groups broke up in disappointment. (Since I wrote this ten years ago I have been informed that members have indeed died and the movement broke up in 1998!)
Yet we see from the Jehovah's Witnesses that skilful manoeuvring may offset unfulfilled prophecies.

The extremes to which such beliefs can lead are shown by the mass suicide of the Heaven's Gate sect and, later, the one in Uganda. Such tragic endings are the result of various contributing factors, which are beyond the scope of this article.

7. Belief versus intellect/Secrets

Often the disciplines followed in spiritual movements have the effect of lowering the threshold to the unconscious mind. Suggestion begins to play an important part. Precepts are experienced as the truth, sacrosanct and sure. There is no element of doubt anymore about assumptions and speculations, although they actually lack any factual foundation.

Absolute belief that the Bible is God's word is the cornerstone of most orthodox Christian sects. In Islam the Koran is supposed to contain the word of Allah.
Intellectual analysis of faith is tantamount to heresy.

The ideal breeding ground for convictions is the mass gathering. During mass gatherings, such as congresses, members are stirred up to a euphoria, the effect of which may linger on for weeks. This is the precise time for leaders, or committees, to announce fresh sectarian measures, postulate incredible notions/prophecies, call for further sacrifices, etc. It will all be accepted unquestioningly. Only at a later date, when the euphoria has worn off, will one start to wonder about what was decided.

Secrets

Spiritual movements often hide a corpse in their closet. It may be a part of the history of the movement, or details about the hidden life of the leader or of a once revered figure. Things may have been written by them that one does not like to be reminded of. A fight or quarrel, full of vehemence and hatred, may have led to a split.

There are so many examples that a long list could be drawn up of the concealed secrets of spiritual groups.
Whereas in most movements the works of the leader are known almost by heart, Jehovah's Witnesses hardly know of the existence of the seven volumes of writings, Studies in the Scriptures, by their founder Charles Taze Russell (1852-1916). Some of his opinions are such a cause of embarrassment that they are not deemed worth reading nowadays.

Eventually a renegade member will reveal such secrets in writing. Frantic denials and counter-accusations by those presently in charge will follow almost automatically. These are usually accepted with gratitude by devotees, who cannot get over the shock of such revelations.

8. Common practice, work and ritual

Communal singing, ritual and (incomprehensible) practices (Freemasonry) are strong binding factors. The more irrational they are, the better. Others are a special food regime, a change of name or clothing, or a common aversion.
Joint work for the benefit of the group gives a feeling of common endeavour and unites the participants. So does proselytization in the streets, or the menial work of constructing and renovating premises. There is a thin line between true participation and exploitation, however.

Dubious was the practice, common in the seventies, of inciting members to criticize one of their number to such an extent that he/she would break down under the weight of often absurd allegations and insults, producing a brainwashing effect.

9. Sacrifices, financial secrecy, favours to the rich.

Finances are always a ticklish matter. Human groups always wish to grow, so finances are important, yet accountability is often not considered appropriate. The danger arises that members of the inner circle become lax in the expenditure of members' contributions. Ambitious schemes create a constant need for funding. This is the ideal breeding ground for favours to wealthy members. Those who contribute generously stand a better chance of being taken into confidence and admitted to the inner circles. Often, as a proof of loyalty, extraordinary sums of money are demanded.

Degrees of initiation may depend on one's years of loyalty to the group. In Eckankar up to 8 degrees are given. However, if one fails to pay membership fees for some time, degrees of initiation may be stripped away again.

Besides financial contributions, members will often be expected to offer services to the group. However, if they also have to work for practically nothing in commercial enterprises, it becomes dubious. Movements that gather wealth at the expense of their members are questionable. Seldom if ever are requests for the return of contributions/investments honoured.

10. Unquestioning leadership, reprehensible behaviour amongst members

Man in a herd may not show the best side of his nature. Unconscious drives may rule his/her behaviour. This applies especially in circumstances where man strives for the spiritual. He/she may tend to show split-personality behaviour. On one hand there is the spiritual personality, which is supposed to have come to terms with its animal nature: wise, friendly and compassionate on the outside. In the shadows lurks the personality that has been forced into the background, still ridden with all the expelled human frailties. In moments of weakness it will see its chance to play hideous tricks. It will do so without being noticed by the person involved. The result: uncharitable behaviour, envy, malicious gossip, hypocrisy, harsh words, insensitivity, unfounded criticism and even worse, not expected from such a charismatic figure. It is one of the main reasons why people leave a particular group in great disappointment.

It is not often realized that, like other human groups, spiritual movements behave like organisms. Group-psychological processes manifest themselves which are sometimes not unlike those in primitive societies. There is the pecking order, the alpha members, and also the group instinct directed against similar groups. Aggression goes unnoticed and is tolerated when an acceptable common goal is provided, for instance hostility against an individual outside the group, or a critical member inside. This has the effect of strengthening ties within the group, as in the animal world.

If leadership loses contact with its members it will have to exert greater discipline. Deviating opinions cannot be tolerated anymore. Persons who hold them are seen as traitors. Acting against them, preferably in secret, is the only way out for the leadership to avert this danger. Members may disappear suddenly without the reasons becoming known, much to the surprise of those left behind. For such machinations in Theosophy read Emily Lutyens: "Candles in the Sun".

Spiritual newsgroups on the Internet provide an illustration of (un)conscious nastiness being ventilated under the veil of anonymity. Messages are often rife with diatribe, personal attacks and misunderstanding. Many of the contributors have no interest at all in the matters discussed. Yet even in closed newsgroups, open only to subscribers, complaints about the tone of communications are aired.

11. Fear of exclusion

The stronger members are tied to a group, the more the fear of exclusion lurks. They may have invested their life's savings in the work (Scientology), paid a percentage of their income, failed to finish their studies or make a career, or sacrificed a successful one.
In many cases a member will have alienated himself from family and friends. He has been told to cut ties with the past. (In the Attleboro cult followers are advised to burn photographs that remind them of bygone days.) No wonder his or her sudden conversion, accompanied by fanaticism and the urge to proselytize, has driven away former friends and relatives.
There is no way left but to seek comfort and understanding with members of the spiritual group.

Isolation is sometimes intentionally sought. Formerly, in the Bhagavan Shri Rajneesh movement, members went about in red/orange dress and wore malas with a photo of their master, thus setting themselves apart from the mundane world.

The Hare Krishna movement goes even further. Groups of members go out into the streets in their oriental dresses for song and dance routines. However, in most movements the alienation is far more subtle and the natural outcome of an adverse attitude towards the materialism of society.

The true nature of the so-called friendships within the group will only be revealed after a devotee has left the fold. Members have seen this happen, but did not give it a thought at the time, because it happened to someone else. But when they undergo the same fate themselves they will feel the humiliation of being ignored, not being greeted anymore, marriage gone - even not being recognized by one's own children anymore.

The outcast feels thrown into an abyss. He is cut off from social contacts, his life in pieces. The magnitude of this desperate experience should not be underestimated. The renegade will feel deep shame. He may have confessed intimate secrets in the group, which are now ridiculed by his former so-called friends. The expellee, deeply hurt, may become embittered and even enter a suicidal mental state.

Those readers who have been a member of a movement may recognize some of the above psychological mechanisms. The first reaction of non-members may be to vow never to enter a group. Let us bear in mind, however, that it should be considered a challenge to face these obstacles for the benefit that may result from association with kindred spirits.

A prerequisite is that these conditions are noticed, looked in the eye, and not denied. The closer people live together, the more group tensions will build up. Even in circles as reputable as Freudian psychoanalytical associations they occur. Few communes are granted a long life, as a result of one or more of the pitfalls summarized above. Headquarters, contrary to expectations, are known to be hotbeds of gossip, mutual repulsion and cynicism.

So do not be disheartened: join a group of your liking. After all, people who marry also see wrecked marriages all around them, yet go ahead intent on a happy union in mutual trust, without regard to the outcome.
Involvement with other people will lead to personal growth if the consequences are anticipated. The more one stands on one's own feet, the more benefit will arise from cooperating with others. It should be borne in mind that the saying "It is better to give than to receive" is not merely a moral precept. (Read my precepts for living.)

Please remember that there are hundreds of movements and that it has not been my intention to summarize them all, or to level any form of criticism at one of them. Indicating the psychological mechanisms operative in some, or all of them, has been my main theme.

On a separate page I have gone into the mysterious presence-phenomenon arising between people who meet in harmony.

In conclusion one may take heed of Krishnamurti's words in 1929 when he refused to become a 'World Teacher' of an organisation set up for him:
I maintain that Truth is a pathless land, and you cannot approach it by any path whatsoever, by any religion, by any sect. Truth being limitless, unconditioned, cannot be organised, nor should any organisation be formed to lead or coerce people along any particular path. If you first understand that, then you will see how impossible it is to organize a belief. A belief is purely an individual matter, and you cannot and must not organize it. If you do, it becomes dead, crystallised; it becomes a creed, a sect, a religion, to be imposed on others.

© Michael Rogge, 2011



NEWS CONTENTS

Old News ;-)

[Jun 01, 2021] How To Waste Hundreds of Millions on Your IT Transformation

May 30, 2021 | zwischenzugs.com

Declare a major technology transformation!

Why? Wall Street will love it. They love macho 'transformations'. By sheer executive fiat Things Will Change, for sure.

Throw in 'technology' and it makes Wall Street puff up that little bit more.

The fact that virtually no analyst or serious buyer of stocks has the first idea of what's involved in such a transformation is irrelevant. They will lap it up.

This is how capitalism works, and it indisputably results in the most efficient allocation of resources possible.

A Dash of Layoffs, a Sprinkling of Talent

These analysts and buyers will assume there will be reductions to employee headcount sooner rather than later, which of course will make the transformation go faster and beat a quick path to profit.

Hires of top 'industry experts' who know the magic needed to get all this done, and who will be able to pass on their wisdom without friction to the eager staff that remain, will make this a sure thing.

In the end, of course, you don't want to come out of this looking too bad, do you?

So how best to minimise any fallout from this endeavour?

Leadership

The first thing you should do is sort out the leadership of this transformation.

Hire in a senior executive specifically for the purpose of making this transformation happen.

Well, taking responsibility for it, at least. This will be useful later when you need a scapegoat for failure.

Ideally it will be someone with a long resume of similar transformational senior roles at different global enterprises.

Don't be concerned with whether those previous roles actually resulted in any lasting change or business success; that's not the point. The point is that they have a lot of experience with this kind of role, and will know how to be the patsy. Or you can get someone that has Dunning-Kruger syndrome so they can truly inhabit the role.

The kind of leader you want.

Make sure this executive is adept at managing his (also hired-in) subordinates in a divide-and-conquer way, so their aims are never aligned, or multiply-aligned in diverse directions in a 4-dimensional ball of wool.

Incentivise senior leadership to grow their teams rather than fulfil the overall goal of the program (ideally, the overall goal will never be clearly stated by anyone -- see Strategy, below).

Change your CIO halfway through the transformation. The resulting confusion and political changes of direction will ensure millions are lost as both teams and leadership chop and change positions.

With a bit of luck, there'll be so little direction that the core business can be unaffected.

Strategy

This second one is easy enough. Don't have a strategy. Then you can chop and change plans as you go without any kind of overall direction, ensuring (along with the leadership anarchy above) that nothing will ever get done.

Unfortunately, the world is not sympathetic to this reality, so you will have to pretend to have a strategy, at the very least. Make the core PowerPoint really dense and opaque. Include as many buzzwords as possible -- if enough are included people will assume you know what you are doing. It helps if the buzzwords directly contradict the content of the strategy documents.

It's also essential that the strategy makes no mention of the 'customer', or whatever provides Vandelay's revenue, or why the changes proposed make any difference to the business at all. That will help nicely reduce any sense of urgency to the whole process.

Try to make any stated strategy:

Whatever strategy you pretend to pursue, be sure to make it 'Go big, go early', so you can waste as much money as fast as possible. Don't waste precious time learning about how change can get done in your context. Remember, this needs to fail once you're gone.

Technology Architecture

First, set up a completely greenfield 'Transformation Team' separate from your existing staff. Then, task them with solving every possible problem in your business at once. Throw in some that don't exist yet too, if you like! Force them to coordinate tightly with every other team and fulfil all their wishes.

Ensure your security and control functions are separated from (and, ideally, in some kind of war with) a Transformation Team that is siloed as far as possible from the mainstream of the business. This will create the perfect environment for expensive white elephants to be built that no-one will use.

All this taken together will ensure that the Transformation Team's plans have as little chance of getting to production as possible. Don't give security and control functions any responsibility or reward for delivery, just reward them for blocking change.

Ignore the 'decagon of despair'. These things are nothing to do with Transformation; they are just blockers people like to talk about. The official line is that hiring Talent (see below) will take care of those. It's easy to exploit an organisation's insecurity about its capabilities to downplay the importance of these.

The decagon of despair.

[Mar 28, 2021] The Fake News about Fake Agile

Adherents of an obscure cult behave exactly the way described here. Funnily enough, the author is himself one of the cultists, a true believer in Agile methodology.
Aug 23, 2019 | www.iconagility.com

All politics about fake news aside (PLEASE!), I've heard a growing number of reports, sighs and cries about Fake Agile. It's frustrating when people just don't get it, especially when they think they do. We can point fingers and vilify those who think differently -- or we can try to understand why this "us vs them" mindset is splintering the Agile community....

[Feb 03, 2021] A new useful buzzword -- Hyper-converged infrastructure

Feb 03, 2021 | en.wikipedia.org

From Wikipedia, the free encyclopedia

Hyper-converged infrastructure (HCI) is a software-defined IT infrastructure that virtualizes all of the elements of conventional "hardware-defined" systems. HCI includes, at a minimum, virtualized computing (a hypervisor), software-defined storage and virtualized networking (software-defined networking). HCI typically runs on commercial off-the-shelf (COTS) servers.

The primary difference between converged infrastructure (CI) and hyper-converged infrastructure is that in HCI, both the storage area network and the underlying storage abstractions are implemented virtually in software (at or via the hypervisor) rather than physically, in hardware. Because all of the software-defined elements are implemented within the context of the hypervisor, management of all resources can be federated (shared) across all instances of a hyper-converged infrastructure.

Expected benefits

Hyperconvergence evolves away from discrete, hardware-defined systems that are connected and packaged together toward a purely software-defined environment where all functional elements run on commercial off-the-shelf (COTS) servers, with the convergence of elements enabled by a hypervisor.[1][2] HCI infrastructures are usually made up of server systems equipped with direct-attached storage (DAS).[3] HCI includes the ability to plug and play into a data-center pool of like systems.[4][5] All physical data-center resources reside on a single administrative platform for both hardware and software layers.[6] Consolidation of all functional elements at the hypervisor level, together with federated management, eliminates traditional data-center inefficiencies and reduces the total cost of ownership (TCO) for data centers.[7][8][9]

Potential impact

The potential impact of the hyper-converged infrastructure is that companies will no longer need to rely on different compute and storage systems, though it is still too early to prove that it can replace storage arrays in all market segments.[10] It is likely to further simplify management and increase resource-utilization rates where it does apply.[11][12][13]
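The pooling idea above is easier to see in code than in prose. Here is a minimal, purely illustrative sketch -- node names and capacities are invented, and real HCI stacks (vSAN, Nutanix, Ceph) do this inside the hypervisor and storage layer -- of how per-node direct-attached storage gets federated into one cluster-wide datastore:

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    das_tb: float        # direct-attached storage in this server, TB
    used_tb: float = 0.0

class HciPool:
    """Federates every node's DAS into one logical datastore."""
    def __init__(self, nodes):
        self.nodes = nodes

    @property
    def capacity_tb(self):
        return sum(n.das_tb for n in self.nodes)

    @property
    def free_tb(self):
        return sum(n.das_tb - n.used_tb for n in self.nodes)

    def provision(self, size_tb):
        # The caller asks the pool, not a particular array; the software
        # layer decides where the bits physically live.
        node = min(self.nodes, key=lambda n: n.used_tb / n.das_tb)
        if node.das_tb - node.used_tb < size_tb:
            raise RuntimeError("pool exhausted")
        node.used_tb += size_tb
        return node.name

pool = HciPool([Node("esx01", 20), Node("esx02", 20), Node("esx03", 20)])
print(pool.provision(5.0), f"-- {pool.free_tb} TB free of {pool.capacity_tb}")

That placement decision being made in software, across whatever COTS boxes happen to be in the cluster, is exactly the claim HCI vendors make against traditional SANs.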

[Jul 02, 2020] DevOps is a Myth Effective Software Delivery Enablement

Jul 02, 2020 | otomato.link

DevOps is a Myth

Tags: Agile, Books, DevOps, IT management, software delivery

Category: Tools (Practitioner's Reflections on The DevOps Handbook)

The Holy Wars of DevOps

Yet another argument explodes online around the 'true nature of DevOps', around 'what DevOps really means' or around 'what DevOps is not'. At each conference I attend we talk about DevOps culture, DevOps mindset and DevOps ways. All confirming one single truth – DevOps is a myth.

Now don't get me wrong – in no way is this a negation of its validity or importance. As Y.N. Harari shows so eloquently in his book 'Sapiens' – myths were the forming power in the development of humankind. It is in fact our ability to collectively believe in these non-objective, imagined realities that allows us to collaborate at large scale, to coordinate our actions, to build pyramids, temples, cities and roads.

There's a Handbook!

I am writing this while finishing the exceptionally well written "DevOps Handbook". If you really want to know what stands behind the all-too-often misinterpreted buzzword – you better read this cover-to-cover. It presents an almost-no-bullshit deep dive into why, how and what in DevOps. And it comes from the folks who invented the term and have been busy developing its main concepts over the last 7 years.


Now notice – I'm only saying you should read the "DevOps Handbook" if you want to understand what DevOps is about. After finishing it I'm pretty sure you won't have any interest in participating in petty arguments along the lines of 'is DevOps about automation or not?'. But I'm not saying you should read the handbook if you want to know how to improve and speed up your software manufacturing and delivery processes. And neither if you want to optimize your IT organization for innovation and continuous improvement.

Because the main realization that you, as a smart reader, will arrive at – is just that there is no such thing as DevOps. DevOps is a myth.

So What's The Story?

It all basically comes down to this: some IT companies achieve better results than others. Better revenues, higher customer and employee satisfaction, faster value delivery, higher quality. There's no one-size-fits-all formula, there is no magic bullet – but we can learn from these high performers and try to apply certain tools and practices in order to improve the way we work and achieve similar or better results. These tools and processes come from a myriad of management theories and practices. Moreover – they are constantly evolving, so we need to always be learning. But at least we have the promise of a better life. That is, if we get it all right: the people, the architecture, the processes, the mindset, the org structure, etc.

So it's not about certain tools, cause the tools will change. And it's not about certain practices – because we're creative and frameworks come and go. I don't see too many folks using Kanban boards 10 years from now. (In the same way only the laggards use Gantt charts today) And then the speakers at the next fancy conference will tell you it's mainly about culture. And you know what culture is? It's just a story, or rather a collection of stories that a group of people share. Stories that tell us something about the world and about ourselves. Stories that have only a very relative connection to the material world. Stories that can easily be proven as myths by another group of folks who believe them to be wrong.

But Isn't It True?

Anybody who's studied management theories knows how the approaches have changed since the beginning of the last century. From Taylor's scientific management and down to McGregor's X&Y theory they've all had their followers. Managers who've applied them and swore getting great results thanks to them. And yet most of these theories have been proven wrong by their successors.

In the same way we see this happening with DevOps and Agile. Agile has been all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFe and LeSS. But Agile didn't deliver on its promise of a better life. Or rather – it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy.

You may say that the world is changing fast – that's why we now need new approaches! And I agree – the technology, the globalization, the flow of information – they all change the stories we live in. But this also means that whatever is working for someone else today won't probably work for you tomorrow – because the world will change yet again.

Which means that the DevOps Handbook – while a great overview and historical document and a source of inspiration – should not be taken as a guide to action. It's just another step towards establishing the DevOps myth.

And that takes us back to where we started – myths and stories aren't bad in themselves. They help us collaborate by providing a common semantic system and shared goals. But they only work while we believe in them and until a new myth comes around – one powerful enough to grab our attention.

Your Own DevOps Story

So if we agree that DevOps is just another myth, what are we left with? What do we at Otomato and other DevOps consultants and vendors have to sell? Well, it's the same thing we've been building even before the DevOps buzz: effective software delivery and IT management. Based on tools and processes, automation and effective communication. Relying on common sense and on being experts in whatever myth is currently believed to be true.

As I keep saying – culture is a story you tell. And we make sure to be experts in both the storytelling and the actual tooling and architecture. If you're currently looking at creating a DevOps transformation or simply want to optimize your software delivery – give us a call. We'll help to build your authentic DevOps story, to train your staff and to architect your pipeline based on practice, skills and your organization's actual needs. Not based on myths that other people tell.

[Oct 08, 2019] I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor...

Notable quotes:
"... Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve! level 7 ..."
"... First you scream, then you ahh. Now you can screm ..."
"... Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession? ..."
Oct 08, 2019 | www.reddit.com

MadManMorbo 58 points · 6 days ago

We recently implemented DevOps practices, Scrum, and sprints have become the norm... I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor...

Angdrambor 26 points · 6 days ago

Let me guess - they left out the retrospectives because somebody brought up how bad they were fucking it all up?

lurker_lurks 15 points · 6 days ago

Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve!

JustCallMeFrij 1 point · 6 days ago

First you scream, then you ahh. Now you can screm

StormlitRadiance 1 point · 5 days ago

It consists of three managers for every engineer and they all screm all day at a different quartet of three managers and an engineer.

water_mizu 7 points · 6 days ago

Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession?

malikto44 9 points · 6 days ago

I worked at a place where the standup meetings went at least 4-6 hours each day. It was amazing how little got done there. Glad I bailed.

They say, No more IT or system or server admins needed very soon... (posted by u/rdns98 on r/sysadmin)

Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing out words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc., and they don't understand half of the stuff. I do some of the devops tasks in our company; I understand what it takes to implement and manage these technologies. Every meeting is infested with these A-holes.

ntengineer 619 points · 6 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers, which he had already done his homework, and it was a significant increase in cost every month and taking into account the cost for Azure and the increase in bandwidth wiped away the manager's savings.

I know this won't work for everyone. Sometimes there is real savings in moving things to the cloud. But often times there really isn't. Calling the uneducated people out on what they see as facts can be rewarding.
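The homework that won that argument is worth spelling out. A back-of-the-envelope sketch in Python; every figure below is invented for illustration and is not from the post:

# Does moving AD/Exchange to Azure actually save money once the
# required bandwidth upgrade is priced in? (hypothetical numbers)
onprem_monthly = 4_000          # current on-prem server/licensing cost, $
azure_monthly = 2_500           # projected Azure subscription, $
current_link_monthly = 800      # existing internet circuit, $
upgraded_link_monthly = 3_400   # circuit with ~4x the bandwidth, $

claimed_saving = onprem_monthly - azure_monthly
extra_bandwidth = upgraded_link_monthly - current_link_monthly
net = claimed_saving - extra_bandwidth

print(f"claimed saving:     ${claimed_saving}/mo")
print(f"bandwidth increase: ${extra_bandwidth}/mo")
print(f"net:                ${net}/mo")  # negative => the 'savings' are gone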

PuzzledSwitch 99 points · 6 days ago

My previous boss was that kind of a guy. He waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

No amount of corporate pressuring or bitching could ever stand up to that.

themastermatt 43 points · 6 days ago

I've been trying to do this. The problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts.

PuzzledSwitch 33 points · 6 days ago

Make a follow-up in email, then.

Or, you might have to interject for a moment.

williamfny Jack of All Trades 25 points · 6 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.

MaxHedrome 5 points · 6 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.


notechno 34 points · 6 days ago

Not to mention downtime. We have two ISPs in our area. Most of our clients have both in order to keep a fail-over. However, depending on where the client is located, one ISP is fast but goes down every time it rains and the other is solid but slow. Now our only AzureAD customers are those who have so many remote workers that they throw their hands up and deal with the outages as they come. Maybe this works in Europe or South Korea, but this here 'Murica got too many internet holes.

katarh 12 points · 6 days ago

Yup. If you're in a city with fiber it can probably work. If you have even one remote site, and all they have is DSL (or worse, satellite, as a few offices I once supported were literally in the woods when I worked for a timber company) then even Citrix becomes out of the question.


elasticinterests 202 points · 6 days ago

Definitely this, if you know your stuff and can wrap some numbers around it you can regain control of the conversation.

I use my dad as a prime example, he was an electrical engineer for ~40 years, ended up just below board level by the time he retired. He sat in on a product demo once, the kit they were showing off would speed up jointing cable in the road by 30 minutes per joint. My dad asked 3 questions and shut them down:

"how much will it cost to supply all our jointing teams?" £14million

"how many joints do our teams complete each day?" (this they couldn't answer so my dad helped them out) 3

"So are we going to tell the jointers that they get an extra hour and a half hour lunch break or a pay cut?"

Room full of executives that had been getting quite excited at this awesome new investment were suddenly much more interested in showing these guys the door and getting to their next meeting.

Cutriss '); DROP TABLE memes;-- 61 points · 6 days ago

I'm confused a bit by your story. Let's assume they work 8-hour days and so the jointing takes 2.66 hours per operation.

This enhancement will cut that down to 2.16 hours. That's awfully close to enabling a team to increase jointing-per-day from 3 to 4.

That's nearly a 33% increase in productivity. Factoring in overhead it probably is slightly higher.

Is there some reason the workers can't do more than 3 in a day?

slickeddie Sysadmin 87 points · 6 days ago

I think they did 3 as that was the workload so being able to do 4 isn't relevant if there isn't a fourth to do.

That's what I get out of his story anyway.

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it.

Cutriss '); DROP TABLE memes;-- 40 points · 6 days ago

That was basically what I figured was the missing piece - the logistical inability to process 4 units.

As far as the RoI, I had to assume that the number of teams involved and their operational costs had already factored into whether or not 14m was even a price anyone could consider paying. In other words, by virtue of the meeting even happening I figured that the initial costing had not already been laughed out of the room, but perhaps that's a bit too much of an assumption to make.

beer_kimono 14 points · 6 days ago

In my limited experience they slow roll actually pricing anything. Of course physical equipment pricing might be more straightforward, which would explain why his dad got a number instead of a discussion of licensing options.

Lagkiller 7 points · 6 days ago

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it.

That entirely depends. If you have 10 people producing 3 joints a day, with this new kit you could reduce your headcount by 2 and still produce the same, or take on additional workload and increase your production. Not to mention that you don't need to equip everyone with these kits either, you could save them for the projects which needed more daily production thus saving money on the kits and increasing production on an as needed basis.

They story is missing a lot of specifics and while it sounds great, I'm quite certain there was likely a business case to be made.


Standardly 14 points · 6 days ago

He's saying they get 3 done in a day, and the product would save them 30 minutes per joint. That's an hour and a half saved per day, not even enough time to finish a fourth joint, hence the "so do i just give my workers an extra 30 min on lunch"? He just worded it all really weird.
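For the record, the arithmetic both sides of this sub-thread are doing looks like this. It assumes an 8-hour day and the 3 joints per day from the story; the 30-minute saving per joint is the vendor's claim:

workday_h = 8.0
joints_per_day = 3
saved_per_joint_h = 0.5

time_per_joint_h = workday_h / joints_per_day                 # ~2.67 h, travel included
new_time_per_joint_h = time_per_joint_h - saved_per_joint_h   # ~2.17 h
possible_joints = int(workday_h // new_time_per_joint_h)      # whole joints only

print(f"{time_per_joint_h:.2f} h/joint -> {new_time_per_joint_h:.2f} h/joint")
print(f"whole joints per day after the upgrade: {possible_joints}")

8 / 2.17 is about 3.69, so the teams still complete 3 joints: the kit buys roughly 1.5 idle hours per team per day unless a fourth job exists to fill them.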

elasticinterests 11 points · 6 days ago

You're ignoring travel time in there; there are also factors to do with outside contractors carrying out the digging and reinstatement works, and getting sign-off to actually dig the hole in the road in the first place.

There is also the possibility I'm remembering the time wrong... it's been a while!

Cutriss '); DROP TABLE memes;-- 10 points · 6 days ago

Travel time actually works in my favour. If it takes more time to go from job to job, then the impact of the enhancement is magnified because the total time per job shrinks.

wildcarde815 Jack of All Trades 8 points · 6 days ago

I'd bet it's because nobody wants to pay overtime.

say592 4 points · 6 days ago

That seems like a poor example. Why would you ignore efficiency improvements just to give your workers something to do? Why not find another task for them or figure out a way to consolidate the teams some? We fight this same mentality on our manufacturing floors, the idea that if we automate a process someone will lose their job or we won't have enough work for everyone. It's never the case. However, because of the automation improvements we have done in the last 10 years, we are doing 2.5x the output with only a 10% increase in the total number of laborers.

So maybe in your example for a time they would have nothing for these people to do. That's a management problem. Have them come back and sweep the floor or wash their trucks. Eventually you will be at a point where there is more work, and that added efficiency will save you from needing to put more crews on the road.

SithLordAJ 75 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding.

I wouldn't call them all uneducated. I think what they are is basically brainwashed. They constantly hear from the sales teams of vendors like Microsoft pitching them the idea of moving everything to Azure.

They do not hear the cons. They do not hear from the folks who know their environment and would know if something is a good fit. At least, they don't hear it enough, and they never see it first hand.

Now, I do think this is their fault... they need to seek out that info more, weigh things critically, and listen to what's going on with their teams more. Isolation from the team is their own doing.

After long enough standing on the edge and only hearing "jump!!", something stupid happens.

AquaeyesTardis 18 points · 6 days ago

Apart from performance, what would be some of the downsides of containers?

ztherion Programmer/Infrastructure/Linux 51 points · 6 days ago

There's little downside to containers by themselves. They're just a method of sandboxing processes and packaging a filesystem as a distributable image. From a performance perspective the impact is near negligible (unless you're doing some truly intensive disk I/O).

What can be problematic is taking a process that was designed to run on exactly n dedicated servers and converting it to a modern 2-to-n autoscaling deployment that shares hosting with other apps on a platform like Kubernetes. It's a significant challenge that requires a lot of expertise and maintenance, so there needs to be a clear business advantage to justify hiring at least one additional full-time engineer to deal with it.

AirFell85 11 points · 6 days ago

ELI5:

More logistical layers require more engineers to support.


justabofh 33 points · 6 days ago

Containers are great for stateless stuff. So your webservers/application servers can be shoved into containers. Think of containers as being the modern version of statically linked binaries or fat applications. Static binaries have the problem that any security vulnerability requires a full rebuild of the application, and that problem is escalated in containers (where you might not even know that a broken library exists)

If you are using the typical business application, you need one or more storage components for data which needs to be available, possibly changed and access controlled.

Containers are a bad fit for stateful databases, or any stateful component, really.

Containers also enable microservices, which are great ideas at a certain organisation size (if you aren't sure you need microservices, just use a simple monolithic architecture). The problem with microservices is that you replace complexity in your code with complexity in the communications between the various components, and that is harder to see and debug.

Untgradd 6 points · 6 days ago

Containers are fine for stateful services -- you can manage persistence at the storage layer the same way you would have to manage it if you were running the process directly on the host.


malikto44 5 points · 6 days ago

Backing up containers can be a pain, so you don't want to use them for valuable data unless the data is stored elsewhere, like a database or even a file server.

For spinning up stateless applications to take workload behind a load balancer, containers are excellent.
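A compact way to see the stateless/stateful split from the comments above: keep every bit of state in an external store, so any replica of the container can die or multiply freely. This is a sketch only; the store host is a stand-in for whatever external database you actually run:

# Stateless container pattern: the process holds no state that matters.
from http.server import BaseHTTPRequestHandler, HTTPServer
import redis  # client for the external store (pip install redis)

store = redis.Redis(host="db.internal", port=6379)  # placeholder host

class Hits(BaseHTTPRequestHandler):
    def do_GET(self):
        n = store.incr("hits")  # state lives outside the container
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"hit {n}\n".encode())

HTTPServer(("0.0.0.0", 8000), Hits).serve_forever()

Kill this container and start three more behind a load balancer and the count stays correct, because no replica owns the state; putting the database itself in a container is what buys you the backup and persistence headaches described above.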


malikto44 3 points · 6 days ago

The problem is that there is an overwhelming din from vendors. Everybody and their brother, sister, mother, uncle, cousin, dog, cat, and gerbil is trying to sell you some pay-by-the-month cloud "solution".

The reason is that the cloud forces people into monthly payments, which is a guaranteed income for companies, but costs a lot more in the long run, and if something happens and one can't make the payments, business is halted, ensuring that bankruptcies hit hard and fast. Even with the mainframe, a company could limp along without support for a few quarters until they could get enough cash flow.

If we have a serious economic downturn, the fact that businesses will be completely shuttered if they can't afford their AWS bill just means fewer companies can limp along when the economy is bad, which will intensify a downturn.
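The rent-versus-own argument is easy to model. A toy comparison, with every figure invented for illustration:

server_capex = 30_000        # buy the hardware once, $
server_opex_monthly = 400    # power, rack space, support contract, $
cloud_monthly = 1_500        # equivalent capacity rented by the month, $

for years in (1, 3, 5):
    months = years * 12
    own = server_capex + server_opex_monthly * months
    rent = cloud_monthly * months
    print(f"{years}y: own ${own:,} vs cloud ${rent:,}")

The typical pattern is that cloud wins early and ownership wins once the hardware outlives its payback point -- and, as the comment notes, a missed cloud bill stops the business immediately, while a lapsed support contract usually does not.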


wildcarde815 Jack of All Trades 12 points · 6 days ago

Also if you can't work without cloud access you better have a second link.

pottertown 10 points · 6 days ago

Our company viewed the move to Azure less as a cost-savings measure and more as a move towards agility and "right now" sizing of our infrastructure.

Your point is very accurate; as an example, our location is wholly incapable of moving much to the cloud due to half of us being connected via satellite network and the other half being bent over the barrel by the only ISP in town.

_The_Judge 27 points · 6 days ago

I'm sorry, but I find management these days around tech wholly inadequate. The idea that you can get an MBA and manage shit you have no idea about is absurd and just wastes everyone else's time for them to constantly ELI5 so the manager can do their job effectively.

laserdicks 57 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding

Aaand political suicide in a corporate environment. Instead I use the following:

"I love this idea! We've actually been looking into a similar solution however we weren't able to overcome some insurmountable cost sinkholes (remember: nothing is impossible; just expensive). Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?" level 3

lokko12 71 points · 6 days ago

Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?

No.

...then people rant on /r/sysadmin about stupid investments and say "but I told them".

HORACE-ENGDAHL Jack of All Trades 61 points · 6 days ago

This exactly. You can't compromise your own argument to the extent that it's easy to shoot down in the name of giving your managers a way of saving face, and if you deliver the facts after that sugar coating it will look even worse, as it will be interpreted as you setting them up to look like idiots. Being frank, objective and non-blaming is always the best route.

linuxdragons 13 points · 6 days ago

Yeah, this is a terrible example. If I were his manager I would be starting the paperwork trail after that meeting.


messburg 61 points · 6 days ago

I think it's quite an American thing to do it so enthusiastically, to hurt no one; but the result is so condescending. It must be annoying to walk on eggshells to survive a day in the office.

And this is not a rant against soft skills in IT, at all.

vagrantprodigy07 13 points · 6 days ago

It is definitely annoying.

widowhanzo 27 points · 6 days ago · edited

We work with Americans and they're always so positive, it's kinda annoying. They enthusiastically say "This is very interesting" when in reality it sucks and they know it.

Another less professional example, one of my (non-american) co-workers always wants to go out for coffee (while we have free and better coffee in the office), and the American coworker is always nice like "I'll go with you but I'm not having any" and I just straight up reply "No. I'm not paying for shitty coffee, I'll make a brew in the office". And that's that. Sometimes telling it as it is makes the whole conversation much shorter :D

superkp 42 points · 6 days ago

Maybe the american would appreciate the break and a chance to spend some time with a coworker away from screens, but also doesn't want the shit coffee?

Sounds quite pleasant, honestly.

egamma Sysadmin 39 points · 6 days ago

Yes. "Go out for coffee" is code for "leave the office so we can complain about management/company/customer/Karen". Some conversations shouldn't happen inside the building. level 7

auru21 5 points · 6 days ago

And complain about that jerk who never joins them


Adobe_Flesh 6 points · 6 days ago

Inferior American - please compute this - the sole intention was consumption of coffee. Therefore due to existence of coffee in office, trip to coffee shop is not sane. Resistance to my reasoning is futile.

superkp 1 point · 6 days ago

Coffee is not the sole intention. If that was the case, then there wouldn't have been an invitation.

Another intention would be the social aspect - which americans are known to love, especially favoring it over doing any actual work in the context of a typical 9-5 corporate job.

I've effectively resisted your reasoning, therefore [insert ad hominem insult about inferior logic].


ITaggie Tier II Support/Linux Admin 10 points · 6 days ago

I mean, I've taken breaks just to get away from the screens for a little while. They might just like being around you.


tastyratz 7 points · 6 days ago

There is still value to tact in your delivery, but don't slit your own throat. Remind them they hired you for a reason.

"I appreciate the ideas being raised here by management for cost savings and it certainly merits a discussion point. As a business, we have many challenges and one of them includes controlling costs. Bridging these conversations with subject matter experts and management this early can really help fill out the picture. I'd like to try to provide some of that value to the conversation here" level 3

renegadecanuck 2 points · 6 days ago

If it's political suicide to point out potential downsides, then you need to work somewhere else.

Especially since your response, in my experience, won't get anything done (people will either just say "no, that's not an issue", or they'll find your tone really condescending) and will just piss people off.

I worked with someone like that who would always be super passive-aggressive in how she brought things up, and it pissed me off to no end, because it felt less like bringing up potential issues and more like being belittling.

laserdicks 1 point · 4 days ago

Agreed, but I'm too early on in my career to make that jump.

A_A_A_U_U_U 1 point · 6 days ago

Feels good to get to a point in my career where I can call people out whenever the hell I feel like it. I've got recruiters banging down my door; I couldn't swing a stick without hitting a job offer or three.

Of course I'm not suggesting you be combative for no reason, and be tactful about it, but if you don't call out fools like that then you're being negligent in your duties.

adisor19 3 points · 6 days ago

This. The current IT market is to our advantage. Say it like it is, and if they don't like it, GTFO.

DragonDrew Jack of All Trades 777 points · 6 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2

OpenScore Sysadmin 529 points · 6 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value" level 3

omfgitzfear 472 points · 6 days ago

We're gonna be AGILE

whetu 113 points · 6 days ago

Synergy.

Erok2112 92 points · 6 days ago

Weird Al even wrote a song about it!

https://www.youtube.com/watch?v=GyV_UG60dD4

uptimefordays Netadmin 32 points · 6 days ago

It's so good, I hate it.

Michelanvalo 31 points · 6 days ago

I love Al, I've seen him in concert a number of times, Alapalooza was the first CD I ever opened, I own hundreds of dollars of merchandise.

I cannot stand this song because it drives me insane to hear all this corporate shit in one 4:30 space.


geoff1210 9 points · 6 days ago

I can't attend keynotes without this playing in the back of my head


MadManMorbo 58 points · 6 days ago

We recently implemented DevOps practices, Scrum, and sprints have become the norm... I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor...

Angdrambor 26 points · 6 days ago

Let me guess - they left out the retrospectives because somebody brought up how bad they were fucking it all up?

ValensEtVolens 1 point · 6 days ago

Those should be fairly short too. But how do you improve if you don't apply lessons learned?

Glad I work for a top IT company.
StormlitRadiance 23 points · 6 days ago

If you spend three whole days out of every five in planning meetings, this is a problem with your meeting planners, not with screm. If these people stay in charge, you'll be stuck in planning hell no matter what framework or buzzwords they try to fling around.

lurker_lurks 15 points · 6 days ago

Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve!

Solaris17 Sysadmin 3 points · 6 days ago

My last company used screm. No 3 day meeting events, 20min of loose direction and we were off to the races. We all came back with parts to different projects. SYNERGY.

JustCallMeFrij 1 point · 6 days ago

First you scream, then you ahh. Now you can screm

lurker_lurks 4 points · 6 days ago

You screm, I screm, we all screm for ice crem.

StormlitRadiance 1 point · 5 days ago

It consists of three managers for every engineer and they all screm all day at a different quartet of three managers and an engineer.

water_mizu 7 points · 6 days ago

Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession?


malikto44 9 points · 6 days ago

I worked at a place where the standup meetings went at least 4-6 hours each day. It was amazing how little got done there. Glad I bailed.


opmrcrab 23 points · 6 days ago

fr agile, FTFY :P

ChristopherBurr 20 points · 6 days ago

Haha, we just fired our Agile scrum masters. Turns out, they couldn't make development any faster or more streamlined.

I was so tired of seeing all the colored post it notes and white boards set up everywhere.

JasonHenley 8 points · 6 days ago

We prefer to call them Scrum Lords here.

Mr-Shank 85 points · 6 days ago

Agile is cancer...

I understand what it takes to implement and manage these technologies. Every meeting is infested with these A holes. 1.0k comments 1.0k Posted by u/bpitts2 3 days ago Rant If I go outside of process to help you for your "urgent" issue, be cool and don't abuse the relationship.

What is it with these people? Someone brought me an "urgent" request (of course there wasn't a ticket), so I said no worries, I'll help you out. Just open a ticket for me so we can track the work and document the conversation. We got that all knocked out and everyone was happy.

So a day or two later, I suddenly get an instant message for yet another "urgent" issue. ... Ok ... Open a ticket, and I'll get it assigned to one of my team members to take a look.

And a couple days later ... he's back and I'm being asked for help troubleshooting an application that we don't own. At least there's a ticket and an email thread... but wtf man.

What the heck man?

This is like when you get a free drink or dessert from your waiter. Don't keep coming back and asking for more free pie. You know damn well you're supposed to pay for pie. Be cool. I'll help you out when you're really in a tight spot, but the more you cry "urgent", the less I care about your issues.

IT folks are constantly looked at as being dicks because we force people to follow the support process, but this is exactly why we have to make them follow the process. 290 comments 833 Posted by u/SpicyTunaNinja 4 days ago Silver Let's talk about mental health and stress

Hey r/Sysadmin , please don't suffer in silence. I know the job can be very difficult at times, especially with competing objectives, tight (or impossible) deadlines, bad bosses and needy end users, but please - always remember that there are ways to manage that stress. Speaking to friends and family regularly to vent, getting a therapist, or taking time off.

Yes, you do have the ability to take personal leave/medical leave if its that bad. No, it doesn't matter what your colleagues or boss will think..and no, you are not a quitter, weak, or a loser if you take time for yourself - to heal mentally, physically or emotionally.

Don't let yourself get to the point that this one IT employee did at the Paris Police headquarters. Ended up taking the lives of multiple others, and ultimately losing his life. https://www.nbcnews.com/news/world/paris-policeman-kills-2-officers-injures-3-others-knife-attack-n1061861

EDIT: Holy Cow! Thanks for the silver and platinum kind strangers. All i wanted to do was to get some more awareness on this subject, and create a reminder that we all deserve happiness and peace of mind. A reminder that hopefully sticks with you for the days and weeks to come.

Work is just one component of life, and not to get so wrapped up and dedicate yourself to the detriment of your health. 302 comments 783 Posted by u/fresh1003 2 days ago By 2025 80% of enterprises will shutdown their data center and move to cloud...do you guys believe this?

By 2025 80% of enterprises will shutdown their data center and move to cloud...do you guys believe this? 995 comments 646 Posted by u/eternalterra 3 days ago Silver Career / Job Related The more tasks I have, the slower I become

Good morning,

We, sysadmins, have times when we don't really have nothing to do but maintenance. BUT, there are times when it seems like chaos comes out of nowhere. When I have a lot of tasks to do, I tend to get slower. The more tasks I have pending, the slower I become. I cannot avoid to start thinking about 3 or 4 different problems at the same time, and I can't focus! I only have 2 years of experiences as sysadmin.

Do you guys experience the same?

Cheers, 321 comments 482 Posted by u/proudcanadianeh 6 days ago General Discussion Cloudflare, Google and Firefox to add support for HTTP/3, shifting away from TCP

Per this article: https://www.techspot.com/news/82111-cloudflare-google-firefox-add-support-http3-shifting-away.html

Not going to lie, this is the first I have heard of http3. Anyone have any insight in what this shift is going to mean on a systems end? Is this a new protocol entirely? 265 comments 557 Posted by u/_sadme_ 8 hours ago Career / Job Related Leaving the IT world...

Hello everyone,

Have you ever wondered if your whole career will be related to IT stuff? I have, since my early childhood. It was more than 30 years ago - in the marvelous world of an 8-bit era. After writing my first code (10 PRINT " my_name " : 20 GOTO 10) I exactly knew what I wanted to do in the future. Now, after spending 18 years in this industry, which is half of my age, I'm not so sure about it.

I had plenty of time to do almost everything. I was writing software for over 100K users and I was covered in dust while drilling holes for ethernet cables in houses of our customers. I was a main network administrator for a small ISP and systems administrator for a large telecom operator. I made few websites and I was managing a team of technical support specialists. I was teaching people - on individual courses on how to use Linux and made some trainings for admins on how to troubleshoot multicast transmissions in their own networks. I was active in some Open Source communities, including running forums about one of Linux distributions (the forum was quite popular in my country) and I was punching endless Ctrl+C/Ctrl+V combos from Stack Overflow. I even fixed my aunt's computer!

And suddenly I realised that I don't want to do this any more. I've completely burnt out. It was like a snap of a finger.

During many years I've collected a wide range of skills that are (or will be) obsolete. I don't want to spend rest of my life maintaining a legacy code written in C or PHP or learning a new language which is currently on top and forcing myself to write in a coding style I don't really like. That's not all... If you think you'll enjoy setting up vlans on countless switches, you're probably wrong. If you think that managing clusters of virtual machines is an endless fun, you'll probably be disappointed. If you love the smell of a brand new blade server and the "click" sound it makes when you mount it into the rack, you'll probably get fed up with it. Sooner or later.

But there's a good side of having those skills. With skills come experience, knowledge and good premonition. And these features don't get old. Remember that!

My employer offered me a project manager position and I eagerly accepted. It means I'm leaving the world of "hardcore IT"; I'll be doing some other, less crazy stuff. I'm logging out of my console and firing up Excel. But I'll keep all the good memories from all those years. I'd like to thank all of you for doing what you're doing, because it's really amazing. Good luck! The world lies in your hands!

Posted by u/remrinds (General Discussion): UPDATE: So our cloud Exchange server was down for 17 hours on Friday

My original post got deleted because I behaved badly and posted some slurs. I apologise for that.


Anyway, my company is using Office 365 ProPlus, and we migrated our on-premises Exchange server to the cloud a while ago. On Friday last week, all of our users (1,000 or so) could not access their Exchange mailboxes. We are a TV broadcasting station, so you can only imagine the damage when we could not use our mailing system.


Initially, we opened a ticket with Microsoft and they just kept us on hold for 12 hours (we are in Japan, so they had to communicate with the US, which took time). Then they told us it was our network infrastructure that was wrong, when we kept telling them it was not. We asked them to check their environment at least once, which they did not do until 12 hours later.


In the end, it was their Exchange server that was the problem. I will copy and paste the whole incident report below:


Title: Can't access Exchange

User Impact: Users are unable to access the Exchange Online service.

Current status: We've determined that a recent sync between Exchange Online and Azure Active Directory (AAD) inadvertently resulted in access issues with the Exchange Online service. We've restored the affected environment and updated the Global Location Service (GLS) records, which we believe has resolved the issue. We're awaiting confirmation from your representatives that this issue is resolved.

Scope of impact: Your organization is affected by this event, and this issue impacts all users.

Start time: Friday, October 4, 2019, 4:51 AM
Root cause: A recent Service Intelligence (SI) move inadvertently resulted in access issues with the Exchange Online service.


They won't explain further than what they posted on the incident page, but if anyone here is good with Microsoft's cloud environment, can you tell me what the root cause of this was? From what I can gather, AAD and the Exchange server couldn't sync, but they won't tell us what the actual problem was. What the hell is Service Intelligence, and how does updating the Global Location Service fix our Exchange server?


Any insight on this report would be much appreciated.


Thanks!

Posted by u/Rocco_Saint: KB4524148 kills Print Spooler? Thought it was supposed to fix that issue?

I rolled out this patch this weekend to my test group and it appears that some of the workstations this was applied to are having print spooler issues.

Here are the details for the patch.

I'm in the middle of troubleshooting it now, but wanted to reach out and see if anyone else was having issues.

Posted by u/GrizzlyWhosSteve: Finally Learned Docker

I hadn't found a use case for containers in my environment, so I had put off learning Docker for a while. But I'm writing a Rails app to simplify/automate some of our administrative tasks. Setting up my different dev and test environments was definitely non-trivial, and I plan on onboarding another person or two so they can help me out and add it to their resume.

I installed Docker desktop on my Mac, wrote 2 files essentially copied from Docker's website, built it, then ran it. It took a total of 10 minutes to go from zero Docker to fully configured and running. It's really that easy to start using it. So, now I've decided to set up Kubernetes at work this week and see what I can find to do with it.
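
For anyone curious what that zero-to-running flow looks like, here is a minimal sketch using the Python Docker SDK (docker-py), assuming a local Docker daemon and a Dockerfile already sitting in the working directory; the image tag and port mapping are invented for illustration:

```python
# Minimal sketch, assuming `pip install docker`, a running Docker daemon
# (e.g. Docker Desktop), and a Dockerfile in the current directory.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image from ./Dockerfile and tag it (the tag is hypothetical).
image, build_logs = client.images.build(path=".", tag="myapp:dev")

# Run the image detached, publishing container port 3000 on host port 3000.
container = client.containers.run(
    "myapp:dev", detach=True, ports={"3000/tcp": 3000}
)
print(container.short_id, "is running")
```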

Edit: leaning towards OKD. Has anyone used it/OpenShift who wants to talk me out of it?

Posted by u/Reverent (Off Topic): How to trigger a sysadmin in two words

Vendor Requirements.

Posted by u/stewardson (General Discussion): Monday From Hell

Let me tell you about the Monday from hell I encountered yesterday.

I work for xyz corp, which is an MSP for IT services. One of the companies we support, we'll call them abc corp.

I come in to work Monday morning and look at my alerts report from the previous day and find that all of the servers (about 12+) at abc corp are showing offline. Manager asks me to go on site to investigate; it's around the corner, so nbd.

I get to the client, head over to the server room and open up the KVM console. I switch inputs and see no issues with the ESX hosts. I then switch over to the (for some reason, physical) vCenter server and find this lovely little message:

HELLO, this full-disk encryption.

E-mail: [email protected]

rsrv: [email protected]

Now, I've never seen this before and it looks sus, but just in case, I reboot the machine - same message. A quick Google search found that the server was hit with MBR-level ransomware encryption. I then quickly switched over to the server that manages the backups and found that it's also encrypted - f*ck.

At this point, I call Mr. CEO and the account manager to come on site. While waiting for them, I found that the SANs had also been logged into, all data and snapshots had been deleted off the datastores, and the EQL volume was also encrypted - PERFECT!

At this point, I'm basically freaking out. ABC Corp is owned by a parent company which apparently also got hit; however, we don't manage them *phew*.

Our only saving grace at this point is the offsite backups. I log in to the server and wouldn't ya know it, I see this lovely message:

Last replication time: 6/20/2019 13:00:01

BackupsTech had a script that ran to report on replication status daily, and the reports were showing that they were up to date. Obviously, they weren't, so at this point we're basically f*cked.

We did eventually find out this originated from parentcompany and that the accounts used were from the old IT Manager who left a few weeks ago. Unfortunately, they never disabled the accounts in either domain, and the account used was a domain admin account.

We're currently going through and attempting to undelete the VMFS data to regain access to the VM files. If anyone has any suggestions on this, feel free to let me know.

TL;DR - ransomware, accounts not disabled, backups deleted, f*cked.

Posted by u/rdns98: They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so-called architects and full-stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, DevOps, NoOps, Azure, infrastructure as code, serverless, etc., and they don't understand half of the stuff. I do some of the DevOps tasks in our company; I understand what it takes to implement and manage these technologies. Every meeting is infested with these A-holes. level 1

ntengineer 619 points · 6 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company where some manager really wanted to move more of their stuff into Azure, including the AD and Exchange environment. But they had recurring problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers. He had already done his homework: the bandwidth upgrade was a significant increase in cost every month, and taken together with the cost of Azure it wiped away the manager's savings.
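
A back-of-the-envelope sketch of the kind of homework the friend presumably did. Every figure below is an invented placeholder; the point is only that the bandwidth upgrade belongs in the comparison:

```python
# All numbers are assumptions for illustration, not real quotes.
onprem_monthly   = 4000.0              # current on-prem AD/Exchange run cost
azure_monthly    = 3200.0              # vendor-quoted Azure cost
bandwidth_now    = 500.0               # current internet circuit cost
bandwidth_needed = 4 * bandwidth_now   # the 4x circuit the migration requires

claimed_savings = onprem_monthly - azure_monthly
real_delta = (azure_monthly + bandwidth_needed) - (onprem_monthly + bandwidth_now)

print(f"Claimed monthly savings: {claimed_savings:+.0f}")                 # +800
print(f"Actual monthly change with bandwidth counted: {real_delta:+.0f}") # +700, a cost increase
```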

I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud, but oftentimes there really aren't. Calling uneducated people out on what they see as facts can be rewarding. level 2

PuzzledSwitch 99 points · 6 days ago

My previous boss was that kind of guy. He waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

No amount of corporate pressuring or bitching could ever stand up to that. level 3

themastermatt 43 points · 6 days ago

I've been trying to do this. The problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts. level 4

PuzzledSwitch 33 points · 6 days ago

make a follow-up in email, then.

or, you might have to interject for a moment.

5 more replies level 3

williamfny Jack of All Trades 25 points · 6 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across. level 4

MaxHedrome 5 points · 6 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that they can't answer (box them in), and let them be their own argument against themselves.

2 more replies level 2

notechno 34 points · 6 days ago

Not to mention downtime. We have two ISPs in our area. Most of our clients have both in order to have failover. However, depending on where the client is located, one ISP is fast but goes down every time it rains, and the other is solid but slow. Now our only AzureAD customers are those who have so many remote workers that they throw their hands up and deal with the outages as they come. Maybe this works in Europe or South Korea, but this here 'Murica got too many internet holes. level 3

katarh 12 points · 6 days ago

Yup. If you're in a city with fiber it can probably work. If you have even one remote site, and all they have is DSL (or worse, satellite, as a few offices I once supported were literally in the woods when I worked for a timber company), then even Citrix is out of the question.

4 more replies

1 more reply level 2

elasticinterests 202 points · 6 days ago

Definitely this; if you know your stuff and can wrap some numbers around it, you can regain control of the conversation.

I use my dad as a prime example. He was an electrical engineer for ~40 years and ended up just below board level by the time he retired. He sat in on a product demo once; the kit they were showing off would speed up jointing cable in the road by 30 minutes per joint. My dad asked 3 questions and shut them down:

"how much will it cost to supply all our jointing teams?" £14million

"how many joints do our teams complete each day?" (this they couldn't answer so my dad helped them out) 3

"So are we going to tell the jointers that they get an extra hour and a half hour lunch break or a pay cut?"

Room full of executives that had been getting quite excited at this awesome new investment were suddenly much more interested in showing these guys the door and getting to their next meeting. level 3

Cutriss '); DROP TABLE memes;-- 61 points · 6 days ago

I'm confused a bit by your story. Let's assume they work 8-hour days, so the jointing takes about 2.67 hours per operation.

This enhancement will cut that down to about 2.17 hours. That's awfully close to enabling a team to increase jointing-per-day from 3 to 4.

That's nearly a 33% increase in productivity. Factoring in overhead it probably is slightly higher.

Is there some reason the workers can't do more than 3 in a day? level 4

slickeddie Sysadmin 87 points · 6 days ago

I think they did 3 because that was the workload, so being able to do 4 isn't relevant if there isn't a fourth to do.

That's what I get out of his story anyway.

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it. level 5

Cutriss '); DROP TABLE memes;-- 40 points · 6 days ago

That was basically what I figured was the missing piece - the logistical inability to process 4 units.

As far as the RoI, I had to assume that the number of teams involved and their operational costs had already been factored into whether or not 14m was even a price anyone could consider paying. In other words, by virtue of the meeting even happening, I figured that the initial costing had not already been laughed out of the room, but perhaps that's a bit too much of an assumption to make. level 6

beer_kimono 14 points · 6 days ago

In my limited experience they slow roll actually pricing anything. Of course physical equipment pricing might be more straightforward, which would explain why his dad got a number instead of a discussion of licensing options. level 5

Lagkiller 7 points · 6 days ago

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it.

That entirely depends. If you have 10 people producing 3 joints a day, with this new kit you could reduce your headcount by 2 and still produce the same, or take on additional workload and increase your production. Not to mention that you don't need to equip everyone with these kits either, you could save them for the projects which needed more daily production thus saving money on the kits and increasing production on an as needed basis.

The story is missing a lot of specifics, and while it sounds great, I'm quite certain there was likely a business case to be made.

5 more replies level 4

Standardly 14 points · 6 days ago

He's saying they get 3 done in a day, and the product would save them 30 minutes per joint. That's an hour and a half saved per day, not even enough time to finish a fourth joint, hence the "so do I just give my workers an extra hour and a half on lunch?". He just worded it all really weird. level 4
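
A quick arithmetic check of the two readings above, assuming the same 8-hour day Cutriss assumed:

```python
# Figures taken from the thread: 3 joints/day, 30 minutes saved per joint.
hours_per_day   = 8.0
joints_per_day  = 3
saved_per_joint = 0.5                                  # hours

time_per_joint     = hours_per_day / joints_per_day    # ~2.67 h
new_time_per_joint = time_per_joint - saved_per_joint  # ~2.17 h

# Cutriss's reading: does a 4th joint now fit in the day? (4 * 2.17 = 8.67 h)
print(4 * new_time_per_joint <= hours_per_day)         # False -- it doesn't

# Standardly's reading: total slack created per day.
print(joints_per_day * saved_per_joint)                # 1.5 h idle, not a 4th joint
```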

elasticinterests 11 points · 6 days ago

You're ignoring travel time in there. There are also factors to do with outside contractors carrying out the digging and reinstatement works, and getting sign-off to actually dig the hole in the road in the first place.

There is also the possibility I'm remembering the time wrong... it's been a while! level 5

Cutriss '); DROP TABLE memes;-- 10 points · 6 days ago

Travel time actually works in my favour. If it takes more time to go from job to job, then the impact of the enhancement is magnified because the total time per job shrinks. level 4

wildcarde815 Jack of All Trades 8 points · 6 days ago

I'd bet it's because nobody wants to pay overtime.

1 more reply level 3

say592 4 points · 6 days ago

That seems like a poor example. Why would you ignore efficiency improvements just to give your workers something to do? Why not find another task for them, or figure out a way to consolidate the teams somewhat? We fight this same mentality on our manufacturing floors: the idea that if we automate a process someone will lose their job, or we won't have enough work for everyone. It's never the case. However, because of the automation improvements we have done in the last 10 years, we are doing 2.5x the output with only a 10% increase in the total number of laborers.

So maybe in your example, for a time, they would have nothing for these people to do. That's a management problem. Have them come back and sweep the floor or wash their trucks. Eventually you will be at a point where there is more work, and that added efficiency will save you from needing to put more crews on the road.

2 more replies level 2

SithLordAJ 75 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding.

I wouldn't call them all uneducated. I think what they are is basically brainwashed. They constantly hear from the sales teams of vendors like Microsoft pitching them the idea of moving everything to Azure.

They do not hear the cons. They do not hear from the folks who know their environment and would know if something is a good fit. At least, they don't hear it enough, and they never see it first hand.

Now, I do think this is their fault... they need to seek out that info more, weigh things critically, and listen to what's going on with their teams more. Isolation from the team is their own doing.

After long enough standing on the edge and only hearing "jump!!", something stupid happens. level 3

AquaeyesTardis 18 points · 6 days ago

Apart from performance, what would be some of the downsides of containers? level 4

ztherion Programmer/Infrastructure/Linux 51 points · 6 days ago

There's little downside to containers by themselves. They're just a method of sandboxing processes and packaging a filesystem as a distributable image. From a performance perspective the impact is near negligible (unless you're doing some truly intensive disk I/O).

What can be problematic is taking a process that was designed to run on exactly n dedicated servers and converting it to a modern 2-to-n autoscaling deployment that shares hosting with other apps on a platform like Kubernetes. It's a significant challenge that requires a lot of expertise and maintenance, so there needs to be a clear business advantage to justify hiring at least one additional full-time engineer to deal with it. level 5

AirFell85 11 points · 6 days ago

ELI5:

More logistical layers require more engineers to support.

1 more reply

3 more replies level 4

justabofh 33 points · 6 days ago

Containers are great for stateless stuff. So your webservers/application servers can be shoved into containers. Think of containers as being the modern version of statically linked binaries or fat applications. Static binaries have the problem that any security vulnerability requires a full rebuild of the application, and that problem is escalated in containers (where you might not even know that a broken library exists).

If you are using the typical business application, you need one or more storage components for data which needs to be available, possibly changed and access controlled.

Containers are a bad fit for stateful databases, or any stateful component, really.

Containers also enable microservices, which are great ideas at a certain organisation size (if you aren't sure you need microservices, just use a simple monolithic architecture). The problem with microservices is that you replace complexity in your code with complexity in the communications between the various components, and that is harder to see and debug. level 5
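
To make the stateless/stateful distinction concrete, a minimal sketch using the Python Docker SDK (docker-py); the images, port, and volume name are illustrative, not a recommendation:

```python
import docker

client = docker.from_env()

# Stateless: a throwaway web tier. Kill it and reschedule it; nothing is lost.
web = client.containers.run(
    "nginx:alpine", detach=True, ports={"80/tcp": 8080}
)

# Stateful: a database survives container replacement only if its data
# directory lives outside the container, here on a named volume.
db = client.containers.run(
    "postgres:12", detach=True,
    environment={"POSTGRES_PASSWORD": "example"},   # demo-only password
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```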

Untgradd 6 points · 6 days ago

Containers are fine for stateful services -- you can manage persistence at the storage layer the same way you would have to manage it if you were running the process directly on the host.

6 more replies level 5

malikto44 5 points · 6 days ago

Backing up containers can be a pain, so you don't want to use them for valuable data unless the data is stored elsewhere, like a database or even a file server.

For spinning up stateless applications to take workload behind a load balancer, containers are excellent.

9 more replies

33 more replies level 3

malikto44 3 points · 6 days ago

The problem is that there is an overwhelming din from vendors. Everybody and their brother, sister, mother, uncle, cousin, dog, cat, and gerbil is trying to sell you some pay-by-the-month cloud "solution".

The reason is that the cloud forces people into monthly payments, which is guaranteed income for companies but costs a lot more in the long run; and if something happens and one can't make the payments, business is halted, ensuring that bankruptcies hit hard and fast. Even with the mainframe, a company could limp along without support for a few quarters until they could get enough cash flow.

If we have a serious economic downturn, the fact that businesses will be completely shuttered if they can't afford their AWS bill just means fewer companies can limp along when the economy is bad, which will intensify a downturn.

1 more reply

3 more replies level 2

wildcarde815 Jack of All Trades 12 points · 6 days ago

Also if you can't work without cloud access you better have a second link. level 2

pottertown 10 points · 6 days ago

Our company viewed the move to Azure less as a cost savings measure and more of a move towards agility and "right now" sizing of our infrastructure.

Your point is very accurate. As an example, our location is wholly incapable of moving much to the cloud due to half of us being connected via satellite network and the other half being bent over the barrel by the only ISP in town. level 2

_The_Judge 27 points · 6 days ago

I'm sorry, but I find management around tech these days wholly inadequate. The idea that you can get an MBA and manage shit you have no idea about is absurd, and it just wastes everyone else's time to constantly ELI5 so the manager can do their job effectively. level 2

laserdicks 57 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding

Aaand political suicide in a corporate environment. Instead I use the following:

"I love this idea! We've actually been looking into a similar solution however we weren't able to overcome some insurmountable cost sinkholes (remember: nothing is impossible; just expensive). Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?" level 3

lokko12 71 points · 6 days ago

Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?

No.

...then people rant on /r/sysadmin about stupid investments and say "but I told them". level 4

HORACE-ENGDAHL Jack of All Trades 61 points · 6 days ago

This exactly, you can't compromise your own argument to the extent that it's easy to shoot down in the name of giving your managers a way of saving face, and if you deliver the facts after that sugar coating it will look even worse, as it will be interpreted as you setting them up to look like idiots. Being frank, objective and non-blaming is always the best route. level 4

linuxdragons 13 points · 6 days ago

Yeah, this is a terrible example. If I were his manager I would be starting the paperwork trail after that meeting.

6 more replies level 3

messburg 61 points · 6 days ago

I think it's quite an American thing to do it so enthusiastically; to hurt no one, but the result is so condescending. It must be annoying to walk on eggshells to survive a day in the office.


And this is not a rant against soft skills in IT, at all. level 4

vagrantprodigy07 13 points · 6 days ago

It is definitely annoying. level 4

widowhanzo 27 points · 6 days ago
· edited 6 days ago

We work with Americans and they're always so positive, it's kinda annoying. They enthusiastically say "This is very interesting" when in reality it sucks and they know it.

Another less professional example, one of my (non-american) co-workers always wants to go out for coffee (while we have free and better coffee in the office), and the American coworker is always nice like "I'll go with you but I'm not having any" and I just straight up reply "No. I'm not paying for shitty coffee, I'll make a brew in the office". And that's that. Sometimes telling it as it is makes the whole conversation much shorter :D level 5

superkp 42 points · 6 days ago

Maybe the american would appreciate the break and a chance to spend some time with a coworker away from screens, but also doesn't want the shit coffee?

Sounds quite pleasant, honestly. level 6

egamma Sysadmin 39 points · 6 days ago

Yes. "Go out for coffee" is code for "leave the office so we can complain about management/company/customer/Karen". Some conversations shouldn't happen inside the building. level 7

auru21 5 points · 6 days ago

And complain about that jerk who never joins them

1 more reply level 6

Adobe_Flesh 6 points · 6 days ago

Inferior American - please compute this - the sole intention was consumption of coffee. Therefore due to existence of coffee in office, trip to coffee shop is not sane. Resistance to my reasoning is futile. level 7

superkp 1 point · 6 days ago

Coffee is not the sole intention. If that was the case, then there wouldn't have been an invitation.

Another intention would be the social aspect - which americans are known to love, especially favoring it over doing any actual work in the context of a typical 9-5 corporate job.

I've effectively resisted your reasoning, therefore [insert ad hominem insult about inferior logic].

5 more replies level 5

ITaggie Tier II Support/Linux Admin 10 points · 6 days ago

I mean, I've taken breaks just to get away from the screens for a little while. They might just like being around you.

1 more reply

11 more replies level 3

tastyratz 7 points · 6 days ago

There is still value to tact in your delivery, but don't slit your own throat. Remind them they hired you for a reason.

"I appreciate the ideas being raised here by management for cost savings and it certainly merits a discussion point. As a business, we have many challenges and one of them includes controlling costs. Bridging these conversations with subject matter experts and management this early can really help fill out the picture. I'd like to try to provide some of that value to the conversation here" level 3

renegadecanuck 2 points · 6 days ago

If it's political suicide to point out potential downsides, then you need to work somewhere else.

Especially since your response, in my experience, won't get anything done (people will either just say "no, that's not an issue", or they'll find your tone really condescending) and will just piss people off.

I worked with someone like that who would always be super passive-aggressive in how she brought things up, and it pissed me off to no end, because it felt less like bringing up potential issues and more like belittling. level 4

laserdicks 1 point · 4 days ago

Agreed, but I'm too early on in my career to make that jump. level 3

A_A_A_U_U_U 1 point · 6 days ago

Feels good to get to a point in my career where I can call people out whenever the hell I feel like it. I've got recruiters banging down my door; I couldn't swing a stick without hitting a job offer or three.

Of course I'm not suggesting you be combative for no reason, and you should be tactful about it, but if you don't call out fools like that then you're being negligent in your duties. level 4

adisor19 3 points · 6 days ago

This. Current IT market is in our advantage. Say it like it is and if they don't like it, GTFO.

14 more replies level 1

DragonDrew Jack of All Trades 777 points · 6 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2

OpenScore Sysadmin 529 points · 6 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value" level 3

omfgitzfear 472 points · 6 days ago

We're gonna be AGILE level 4

whetu 113 points · 6 days ago

Synergy. level 5

Erok2112 92 points · 6 days ago
Gold

Weird Al even wrote a song about it!

https://www.youtube.com/watch?v=GyV_UG60dD4 level 6

uptimefordays Netadmin 32 points · 6 days ago

It's so good, I hate it. level 7

Michelanvalo 31 points · 6 days ago

I love Al, I've seen him in concert a number of times, Alapalooza was the first CD I ever opened, I own hundreds of dollars of merchandise.

I cannot stand this song because it drives me insane to hear all this corporate shit in one 4:30 space.

4 more replies

8 more replies level 5

geoff1210 9 points · 6 days ago

I can't attend keynotes without this playing in the back of my head

17 more replies level 4

MadManMorbo 58 points · 6 days ago

We recently implemented DevOps practices, Scrum, and sprints have become the norm... I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor... level 5

Angdrambor 26 points · 6 days ago

Let me guess - they left out the retrospectives because somebody brought up how bad they were fucking it all up? level 6

ValensEtVolens 1 point · 6 days ago

Those should be fairly short too. But how do you improve if you don't apply lessons learned?

Glad I work for a top IT company.
level 5
StormlitRadiance 23 points · 6 days ago

If you spend three whole days out of every five in planning meetings, this is a problem with your meeting planners, not with screm. If these people stay in charge, you'll be stuck in planning hell no matter what framework or buzzwords they try to fling around. level 6

lurker_lurks 15 points · 6 days ago

Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve! level 7

Solaris17 Sysadmin 3 points · 6 days ago

My last company used screm. No 3 day meeting events, 20min of loose direction and we were off to the races. We all came back with parts to different projects. SYNERGY. level 7

JustCallMeFrij 1 point · 6 days ago

First you scream, then you ahh. Now you can screm level 8

lurker_lurks 4 points · 6 days ago

You screm, I screm, we all screm for ice crem. level 7

StormlitRadiance 1 point · 5 days ago

It consists of three managers for every engineer and they all screm all day at a different quartet of three managers and an engineer. level 6

water_mizu 7 points · 6 days ago

Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession?

3 more replies level 5

malikto44 9 points · 6 days ago

I worked at a place where the standup meetings went at least 4-6 hours each day. It was amazing how little got done there. Glad I bailed.

7 more replies level 4

opmrcrab 23 points · 6 days ago

fr agile, FTFY :P level 4

ChristopherBurr 20 points · 6 days ago

Haha, we just fired our Agile scrum masters. Turns out they couldn't make development faster or more streamlined.

I was so tired of seeing all the colored post it notes and white boards set up everywhere. level 5

JasonHenley 8 points · 6 days ago

We prefer to call them Scrum Lords here.

1 more reply level 4

Mr-Shank 85 points · 6 days ago

Agile is cancer... level 5

Skrp 66 points · 6 days ago

It doesn't have to be. But oftentimes it is, yes. level 6

Farren246 74 points · 6 days ago

Agile is good. "Agile" is very very bad. level 7

nineteen999 55 points · 6 days ago
· edited 6 days ago

Everyone says this, meaning "the way I do Agile is good, the way everyone else does it sucks. Buy my Agile book! Or my Agile training course! Only $199.99". level 8

fariak 54 points · 6 days ago

There are different ways to do Agile? From the past couple of places I worked at I thought Agile was just standing in a corner for 5 minutes each morning. Do some people sit? level 9

nineteen999 45 points · 6 days ago

Wait until they have you doing "retrospectives" on a Friday afternoon with a bunch of alcohol involved. By Monday morning nobody remembers what the fuck they retrospected about on Friday. level 10

fariak 51 points · 6 days ago

Now that's a scrum Continue this thread

6 more replies level 9

Ryuujinx DevOps Engineer 25 points · 6 days ago

No, that's what it's supposed to look like. A quick 'Is anyone blocked? Does anyone need anything/can anyone chip in with X? Ok get back to it'

What it usually looks like is a round table 'Tell us what you're working on' that takes at least 30, and depending on team size, closer to an hour. level 10

become_taintless 13 points · 6 days ago

our weekly 'stand-up' is often 60-90 minutes long, because they treat it like not only a roundtable discussion about what you're working on, but an opportunity to hash out every discussion to death, in front of C-levels.

also, the C-levels are at our 'stand-ups', because of course Continue this thread

3 more replies

8 more replies level 8

togetherwem0m0 12 points · 6 days ago

To me, agile is an unfortunate framework for confronting and dismantling a lot of hampering, low-value business processes. I call it a "get-er-done" framework. But yes, it's not all roses and sunshine in agile. Still, it's important to destroy processes that make delivering value impossible.

1 more reply level 7

PublicyPolicy 9 points · 6 days ago

Haha, all the places I worked with agile...

We gotta do agile.

But we set how much work gets done and when. Oh, you're behind schedule? No problem: no unit tests and no testing for you. Can't fall behind.

Then the CIO: guess what, we moved the December deadline up to September. Be agile! It's already been promised. We just have to pivot, fuckers!

11 more replies level 5

Thameus We are Pakleds make it go 8 points · 6 days ago

"That's not real Agile" level 6

pioto 36 points · 6 days ago

No true Scotsman Scrum Master

1 more reply level 5

StormlitRadiance 3 points · 6 days ago

Psychotic middle managers will always have their little spastic word salad, no matter what those words are. level 6

make_havoc 2 points · 6 days ago

Why? Why? Why is it that I can only give you one upvote? You need a thousand for this truth bomb! level 5

sobrique 2 points · 6 days ago

Like all such things - it's a useful technique, that turns into a colossal pile of wank if it's misused. This is true of practically every buzzword laden methodology I've seen introduced in the last 20 years. level 5

Angdrambor 2 points · 6 days ago

For me, the fact that my team is moderately scrummy is a decent treatment for my ADHD. The patterns are right up there with Ritalin in terms of making me less neurologically crippled. level 5

corsicanguppy DevOps Zealot 1 point · 6 days ago

The 'fr' on the front isn't usually pronounced level 4

Thangleby_Slapdiback 3 points · 6 days ago

Christ I hate that word. level 4

NHarvey3DK 2 points · 6 days ago

I think we've moved on to AI level 4

blaze13541 1 point · 6 days ago

I think I'm going to snap if I have one more meeting that discusses seamless migrations and seamless movement across a complex, multi-forest, non-standardized network.

pooley92 1 point · 6 days ago

Try the business bullshit generator https://www.atrixnet.com/bs-generator.html level 4

pooley92 1 point · 6 days ago

Or try the tech bullshit generator https://www.makebullshit.com/

unixwasright 49 points · 6 days ago

Do we not still need to get the word "paradigm" in there somewhere? level 4

wallybeavis 36 points · 6 days ago

Last time I tried shifting some paradigms, I threw out my back. level 5

jackology 19 points · 6 days ago

Pivot yourself. level 6

EViLTeW 23 points · 6 days ago

If this doesn't work, circle back around and do the needful.

[Oct 06, 2019] Weird Al Yankovic - Mission Statement

Highly recommended!
This song seriously streamlined my workflow.
Oct 06, 2019 | www.youtube.com

FanmaR , 4 years ago

Props to the artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up with everything.

Maxwelhse , 3 years ago

He missed "sea change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.

VenetianTemper , 4 years ago

From my experiences as an engineer, never trust a company that describes their product with the word "synergy".

Swag Mcfresh , 5 years ago

For those too young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.

112steinway , 4 years ago

Only in corporate speak can you use a whole lot of words while saying nothing at all.

Jonathan Ingersoll , 3 years ago

As a business major this is basically every essay I wrote.

A.J. Collins , 3 years ago

"The company has undergone organization optimization due to our strategy modification, which includes empowering the support to the operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.

meanmanturbo , 3 years ago

So this is basically a Dilbert strip turned into a song. I approve.

zyxwut321 , 4 years ago

In his big long career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most ambitious songs in pop music history.

teenygozer , 3 years ago

This should be played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius as usual, Mr. Yankovic!

Dunoid , 4 years ago

Maybe I'm too far gone to the world of computer nerds, but "Cloud Computing" seems like it should have been in the song somewhere.

Snoo Lee , 4 years ago

The "paradigm shift" at the end of the video / song is when the corporation screws everybody at the end. Brilliantly done, Al.

A Piece Of Bread , 3 years ago

Don't forget to triangulate the automatonic business monetizer to create exceptional synergy.

GeoffryHawk , 3 years ago

There's a quote that goes something like: a politician is someone who speaks for hours while saying nothing at all. And this is exactly it, and it's brilliant.

Sefie Ezephiel , 4 months ago

From the current GameStop earnings call: "address the challenges that have impacted our results, and execute both deliberately and with urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to capital allocation." Yeah, Weird Al totally nailed it.

Phil H , 6 months ago

"People who enjoy meetings should not be put in charge of anything." -Thomas Sowell

Laff , 3 years ago

I heard "monetize our asses" for some reason...

Brett Naylor , 4 years ago

Excuse me, but "proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you of anything like that. [pause] I'm fired, aren't I?~George Meyer

Mark Kahn , 4 years ago

Brilliant social commentary on how the height of 60s optimism was bastardized into corporate enthusiasm. I hope Steve Jobs got to see this.

Mark , 4 years ago

That's the strangest "Draw My Life" I've ever seen.

Δ , 17 hours ago

I watch this at least once a day to take the edge of my job search whenever I have to decipher fifteen daily want-ads claiming to seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment Associates" eventually to discover they want someone to run a cash register and sweep up.

Mike The SandbridgeKid , 5 years ago

The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN(&Y) four-part harmony. Suite: Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street, The City, and the monstrous excesses of unbridled capitalism.

Geetar Bear , 4 years ago (edited)

This reminds me of George Carlin so much.

Vaugn Ripen , 2 years ago

If you understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on. This and Frank's 2000 inch tv are my favorite songs of yours. Thanks Al!

Joolz Godfree , 4 years ago

hahaha, "Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term i haven't heard/read/seen in years, since last being an office drone. =D

Miles Lacey , 4 years ago

When I interact with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized to synergize with the client-centric requirements of the microcosmic community focussed social entities that I administrate on social media while interfacing energetically about the inherent shortcomings of the current socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of coherent verbalisations capable of comprehension?

Soufriere , 5 years ago

When I bought "Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed. All the corporate buzzwords! (except "pivot", apparently).

[Oct 06, 2019] This talk of going serverless or getting rid of traditional IT admins has gotten very old. In some ways it is true, but in many ways it is greatly exaggerated. There will always be a need for onsite technical support

Oct 06, 2019 | www.reddit.com

remi_in_2016_LUL NOC/SOC Analyst 109 points · 4 days ago

I agree with the sentiment. This talk of going serverless or getting rid of traditional IT admins has gotten very old. In some ways it is true, but in many ways it is greatly exaggerated. There will always be a need for onsite technical support. There are still users today that cannot plug in a mouse or keyboard into a USB port. Not to mention layer 1 issues; good luck getting your cloud provider to run a cable drop for you. Besides, who is going to manage your cloud instances? They don't just operate and manage themselves.

TLDR; most of us aren't going anywhere.

[Oct 26, 2017] Amazon.com Customer reviews Extreme Programming Explained Embrace Change

Rapid Development by Steve McConnell is an older and better book. The Mythical Man-Month remains a valuable book as well, albeit dated.
Notable quotes:
"... Having a customer always available on site would mean that the customer in question is probably a small, expendable fish in his organization and is unlikely to have any useful knowledge of its business practices. ..."
"... Unit testing code before it is written means that one would have to have a mental picture of what one is going to write before writing it, which is difficult without upfront design. And maintaining such tests as the code changes would be a nightmare. ..."
"... Programming in pairs all the time would assume that your topnotch developers are also sociable creatures, which is rarely the case, and even if they were, no one would be able to justify the practice in terms of productivity. I won't discuss why I think that abandoning upfront design is a bad practice; the whole idea is too ridiculous to debate ..."
"... Both book and methodology will attract fledgling developers with its promise of hacking as an acceptable software practice and a development universe revolving around the programmer. It's a cult, not a methodology, were the followers shall find salvation and 40-hour working weeks ..."
"... Two stars for the methodology itself, because it underlines several common sense practices that are very useful once practiced without the extremity. ..."
"... The second is the dictatorial social engineering that eXtremity mandates. I've actually tried the pair programming - what a disaster. ..."
"... I've also worked with people who felt that their slightest whim was adequate reason to interfere with my work. That's what Beck institutionalizes by saying that any request made of me by anyone on the team must be granted. It puts me completely at the mercy of anyone walking by. The requisite bullpen physical environment doesn't work for me either. I find that the visual and auditory distraction make intense concentration impossible. ..."
"... One of the things I despise the most about the software development culture is the mindless adoption of fads. Extreme programming has been adopted by some organizations like a religious dogma. ..."
Oct 26, 2017 | www.amazon.com

Mohammad B. Abdulfatah on February 10, 2003

Programming Malpractice Explained: Justifying Chaos

To fairly review this book, one must distinguish between the methodology it presents and the actual presentation. As to the presentation, the author attempts to win the reader over with emotional persuasion and pep talk rather than with facts and hard evidence. Stories of childhood and comradeship don't classify as convincing facts to me.

A single case study-the C3 project-is often referred to, but with no specific information (do note that the project was cancelled by the client after staying in development for far too long).

As to the method itself, it basically boils down to four core practices:

  1. Always have a customer available on site.
  2. Unit test before you code.
  3. Program in pairs.
  4. Forfeit detailed design in favor of incremental, daily releases and refactoring.

If you do the above, and you have excellent staff on your hands, then the book promises that you'll reap the benefits of faster development, less overtime, and happier customers. Of course, the book fails to point out that if your staff is all highly qualified people, then the project is likely to succeed no matter what methodology you use. I'm sure that anyone who has worked in the software industry for some time has noticed the sad state that most computer professionals are in nowadays.

However, assuming that you have all the topnotch developers that you desire, the outlined methodology is almost impossible to apply in real world scenarios. Having a customer always available on site would mean that the customer in question is probably a small, expendable fish in his organization and is unlikely to have any useful knowledge of its business practices.

Unit testing code before it is written means that one would have to have a mental picture of what one is going to write before writing it, which is difficult without upfront design. And maintaining such tests as the code changes would be a nightmare.
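
For readers who haven't seen the practice being criticized here, "unit test before you code" looks roughly like this minimal sketch (standard-library unittest; the function and test are invented for illustration):

```python
import unittest

def slugify(title):
    # In test-first style, this function is written *after* the test below
    # existed and failed, with just enough code to make the test pass.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # The test is written first, run (it fails because slugify doesn't
    # exist yet), and only then is the implementation added.
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Extreme Programming"), "extreme-programming")

if __name__ == "__main__":
    unittest.main()
```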

Programming in pairs all the time would assume that your topnotch developers are also sociable creatures, which is rarely the case, and even if they were, no one would be able to justify the practice in terms of productivity. I won't discuss why I think that abandoning upfront design is a bad practice; the whole idea is too ridiculous to debate.

Both book and methodology will attract fledgling developers with its promise of hacking as an acceptable software practice and a development universe revolving around the programmer. It's a cult, not a methodology, were the followers shall find salvation and 40-hour working weeks.

Experience is a great teacher, but only a fool would learn from it alone. Listen to what the opponents have to say before embracing change, and don't forget to take the proverbial grain of salt.

Two stars out of five for the presentation for being courageous and attempting to defy the standard practices of the industry. Two stars for the methodology itself, because it underlines several common sense practices that are very useful once practiced without the extremity.

wiredweird HALL OF FAME TOP 1000 REVIEWER on May 24, 2004
eXtreme buzzwording

Maybe it's an interesting idea, but it's just not ready for prime time.

Parts of Kent's recommended practice - including aggressive testing and short integration cycle - make a lot of sense. I've shared the same beliefs for years, but it was good to see them clarified and codified. I really have changed some of my practice after reading this and books like this.

I have two broad kinds of problem with this dogma, though. First is the near-abolition of documentation. I can't defend 2000 page specs for typical kinds of development. On the other hand, declaring that the test suite is the spec doesn't do it for me either. The test suite is code, written for machine interpretation. Much too often, it is not written for human interpretation. Based on the way I see most code written, it would be a nightmare to reverse engineer the human meaning out of any non-trivial test code. Some systematic way of ensuring human intelligibility in the code, traceable to specific "stories" (because "requirements" are part of the bad old way), would give me a lot more confidence in the approach.

The second is the dictatorial social engineering that eXtremity mandates. I've actually tried the pair programming - what a disaster. The less said the better, except that my experience did not actually destroy any professional relationships. I've also worked with people who felt that their slightest whim was adequate reason to interfere with my work. That's what Beck institutionalizes by saying that any request made of me by anyone on the team must be granted. It puts me completely at the mercy of anyone walking by. The requisite bullpen physical environment doesn't work for me either. I find that the visual and auditory distraction make intense concentration impossible.

I find the revival-tent spirit of the eXtremists very off-putting. If something works, it works for reasons, not as a matter of faith. I find much too much eXhortation to believe, to go ahead and leap in, so that I will eXperience the wonderfulness for myself. Isn't that what the evangelist on the subway platform keeps saying? Beck does acknowledge unbelievers like me, but requires their exile in order to maintain the group-think of the X-cult.

Beck's last chapters note a number of exceptions and special cases where eXtremism may not work - actually, most of the projects I've ever encountered.

There certainly is good in the eXtreme practice. I look to future authors to tease that good out from the positively destructive threads that I see interwoven.

A customer on May 2, 2004
A work of fiction

The book presents extreme programming. It is divided into three parts:
(1) The problem
(2) The solution
(3) Implementing XP.

The problem, as presented by the author, is that requirements change but current methodologies are not agile enough to cope with this. This results in the customer being unhappy. The solution is to embrace change and to allow the requirements to be changed. This is done by choosing the simplest solution, releasing frequently, and refactoring with the security of unit tests.

The basic assumption which underscores the approach is that the cost of change is not exponential but reaches a flat asymptote. If this is not the case, allowing change late in the project would be disastrous. The author does not provide data to back his point of view. On the other hand there is a lot of data against a constant cost of change (see for example discussion of cost in Code Complete). The lack of reasonable argumentation is an irremediable flaw in the book. Without some supportive data it is impossible to believe the basic assumption, nor the rest of the book. This is all the more important since the only project that the author refers to was cancelled before full completion.

Many other parts of the book are unconvincing. The author presents several XP practices. Some of them are very useful. For example unit tests are a good practice. They are however better treated elsewhere (e.g., Code Complete chapter on unit test). On the other hand some practices seem overkill. Pair programming is one of them. I have tried it and found it useful to generate ideas while prototyping. For writing production code, I find that a quiet environment is by far the best (see Peopleware for supportive data). Again the author does not provide any data to support his point.

This book suggests an approach aiming at changing software engineering practices. However, the lack of supportive data makes it a work of fiction. I would suggest reading Code Complete for code-level advice or Rapid Development for management-level advice.

A customer on November 14, 2002
Not Software Engineering.

Any engineering discipline is based on solid reasoning and logic, not on blind faith. Unfortunately, most of this book attempts to convince you that Extreme Programming is better based on the author's experiences. A lot of the principles are counterintuitive, and the author exhorts you to just try it out and get enlightened. I'm sorry, but these kinds of things belong in infomercials, not in s/w engineering.

The part about "code is the documentation" is the scariest part. It's true that keeping the documentation up to date is tough on any software project, but to do away with documentation is the most ridiculous thing I have heard.

It's like telling people to cut of their noses to avoid colds. Yes we are always in search of a better software process. Let me tell you that this book won't lead you there.

Philip K. Ronzone on November 24, 2000
The "gossip magazine diet plans" style of programming.

This book reminds me of the "gossip magazine diet plans", you know, the vinegar and honey diet, or the fat-burner 2000 pill diet etc. Occasionally, people actually lose weight on those diets, but, only because they've managed to eat less or exercise more. The diet plans themselves are worthless. XP is the same - it may sometimes help people program better, but only because they are (unintentionally) doing something different. People look at things like XP because, like dieters, they see a need for change. Overall, the book is a decently written "fad diet", with ideas that are just as worthless.

A customer on August 11, 2003
Hackers! Salvation is nigh!!

It's interesting to see the phenomenon of Extreme Programming happening in the dawn of the 21st century. I suppose historians can explain such a reaction as a truly conservative movement. Of course, serious software engineering practice is hard. Heck, documentation is a pain in the neck. And what programmer wouldn't love to have divine inspiration just before starting to write the latest web application and so enlightened by the Almighty, write the whole thing in one go, as if by magic? No design, no documentation, you and me as a pair, and the customer too. Sounds like a hacker's dream with "Imagine" as the soundtrack (sorry, John).

The Software Engineering struggle is over 50 years old and it's only logical to expect some resistance, from time to time. In the XP case, the resistance comes in one of its worst forms: evangelism. A fundamentalist cult, with very little substance, no proof of any kind, but then again if you don't have faith you won't be granted the gift of the mystic revelation. It's Gnosticism for Geeks.

Take it with a pinch of salt... well, maybe a sack of salt. If you can see through the B.S. that sells millions of dollars in books, consultancy fees, lectures, etc., you will recognise some common-sense ideas that are better explained, explored and detailed elsewhere.

Ian K. VINE VOICE on February 27, 2015
Long have I hated this book

Kent is an excellent writer. He does an excellent job of presenting an approach to software development that is misguided for anything but user interface code. The argument that user interface code must be gotten into the hands of users to get feedback is used to suggest that complex system code should not be "designed up front". This is simply wrong. For example, if you are going to deploy an application in the Amazon Cloud that you want to scale, you better have some idea of how this is going to happen. Simply waiting until your application falls over and fails is not an acceptable approach.

One of the things I despise the most about the software development culture is the mindless adoption of fads. Extreme programming has been adopted by some organizations like a religious dogma.

Engineering large software systems is one of the most difficult things that humans do. There are no silver bullets and there are no dogmatic solutions that will make the difficult simple.

Anil Philip on March 24, 2005
not found - the silver bullet

Maybe I'm too cynical because I never got to work for the successful, whiz-kid companies; maybe this book wasn't written for me!

This book reminds me of Jacobsen's "Use Cases" book of the 1990s. 'Use Cases' was all the rage, but after several years we slowly learned the truth: Use Cases do not deal with the architecture - a necessary and good foundation for any piece of software.

Similarly, this book seems to be spotlighting Testing and taking it to extremes.

'The test plan is the design doc.'

Not true. The design doc encapsulates wisdom and insight; a picture that accurately describes the interactions of the lower-level software components is worth a thousand lines of code-reading.

Also present is an evangelistic fervor that reminds me of the rah-rah eighties' bestseller, "In Search Of Excellence" by Peters and Waterman. (Many people have since noted that most of the spotlighted companies of that book are bankrupt twenty-five years later).

Lastly, I noted that the term 'XP' was used throughout the book, and the back cover has a blurb from an M$ architect. Was it simply coincidence that Windows shares the same name for its XP release? I wondered if M$ had sponsored part of the book as good advertising for Windows XP! :)

[Oct 18, 2017] The frenzy is deliberately and I would say almost scientifically engineered by very bright marketing people in software vendors. Savvy IT organizations maintain their focus

Chasing the latest fad and risking the organization's assets (systems, processes, people, reputations) for the sake of advancing your own goals is a clear-cut characteristic of a broken ecosystem.
Feb 01, 2013 | www.itskeptic.org
The change madness is getting worse with every passing year.

The demands for change being placed on corporate IT are plain ridiculous. As a consequence we are breaking IT. In pursuit of absurd project commitments we are eating ourselves.

And the hysteria reaches fever pitch as people extrapolate trends into the future linearly, or worse still exponentially. This is such bad scientific thinking that it shouldn't be worthy of debate, but the power of critical thought is a scarce resource.

Rob England (The IT Skeptic) -> Dierdre Popov, March 7, 2013 3:43 AM

A broken management and governance system, a broken value system, and a broken culture.

But even in the best and healthiest organisations, there are plenty of rogues; psychopaths (and milder sociopaths) who are never going to care about anyone but themselves. They soar in management (and they're drawn to the power); they look good to all measures and controls except a robust risk management system - it is the last line of defense.

Rob England (The IT Skeptic) -> Simon Kent, February 28, 2013 5:06 AM

...I'm saying there is a real limit to how fast humans can change: how fast we can change our behaviours, our attitudes, our processes, our systems. We need to accept that the technology is changing faster than society, our IT sector, our organisations, our teams, ourselves can change.

I'm saying there is a social and business backlash already to the pace of change. We're standing in the ruins of an economy that embraced fast change.

I'm saying there are real risks to the pace of change, and we currently live in a culture that thinks writing risks down means you can then ignore them, or that if you can't ignore them you can always hedge them somehow.

We have to slow down a bit. Perhaps "Slow IT" is the wrong name, but it was catchy. I'm not saying go slooooow. We've somehow sustained a pretty impressive pace for decades. But clearly it can't go much faster, if at all, and all these demands that it must go faster are plain silly. It just can't. There's bits falling off, people burning out, smoking shells of projects everywhere.

I'm not saying stop, but I am saying ease off a little, calm down, stop panicking, stop this desperate headlong rush. You are right Simon that mindfulness is a key element: we all need time to think. Let the world keep up.

Fustbariclation, February 27, 2013 10:03 PM

Yes, Rob, short-termism is certainly bad news, and rushing to achieve short-term goals without thinking about them in the larger context is a good indication of disaster ahead.

It's easy to mistake activity for progress.

Wdpowel, March 14, 2013 10:06 AM

Much of the zeitgeist that drives the frenzy you describe is generated by vendors, especially those with software in their portfolio. Software has more margin than hardware or services. As a result they have more marketing budget. With that budget they invest a lot of time and effort to figure out exactly how to generate the frenzy around the new thing that you must have. They have to do this to keep market interest in their products. That is actually what their job is.

The frenzy is deliberately and, I would say, almost scientifically engineered by very, very bright marketing people in software vendors. Savvy IT organizations are aware of that distinction and maintain their focus on enabling their business to be successful. IT as Utility, On Demand, SOA, Cloud, ..... Software vendors will not and should not stop doing that - that is what keeps them in business and generates the profits that enable new innovation. The onus is on the buyer to understand that whatever the latest technology is, it does not provide the answer for how they will improve business performance. Improving business performance is a burden that only the organization itself can bear.

[Aug 29, 2017] The quickie guide to continuous delivery in DevOps

This is pretty idiotic: "But wait -- Isn't speed the key to all software development? These days, companies routinely require their developers to update or add features once per day, week, or month. This was unheard of back in the day, even in the era of agile software development."
And now an example of buzzword-infused nonsense: ""DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs, a software delivery automation company. "It's not really a process or a toolset, or a technology." And another one: ..." "In an ideal world, you would push a button to release every few seconds," Sehringer says. But this is not an ideal world, and so people plug up the process along the way."... "
I want to see a sizable software product with a release every few seconds. Even for a small and rapidly evolving web site, scripts should be released no more frequently than daily.
Notable quotes:
"... Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third. ..."
"... Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer. ..."
"... In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates. ..."
Aug 29, 2017 | insights.hpe.com
The quickie guide to continuous delivery in DevOps

In today's world, you have to develop and deliver almost in the same breath. Here's a quick guide to help you figure out which continuous delivery concepts will help you breathe easy, and which are only hot air. Developers are always under pressure to produce more and release software faster, which encourages the adoption of new concepts and tools. But confusing buzzwords obfuscate real technology and business benefits, particularly when a vendor has something to sell. That makes it hard to determine what works best -- for real, not just as a marketing phrase -- in the continuous flow of build and deliver processes. This article gives you the basics of continuous delivery to help you sort it all out.

To start with, the terms apply to different parts of the same production arc, each of which is automated to a different degree:

With continuous deployment, "a developer's job typically ends at reviewing a pull request from a teammate and merging it to the master branch," explains Marko Anastasov in a blog post. "A continuous integration/continuous deployment service takes over from there by running all tests and deploying the code to production, while keeping the team informed about [the] outcome of every important event."
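
Stripped of vendor branding, the mechanics being described reduce to a short loop. A minimal sketch in Python, where the make targets are hypothetical placeholders for whatever build, test, and deploy commands a real shop uses:

    # Minimal sketch of the build-test-deploy arc. The make targets are
    # hypothetical placeholders; a failing stage aborts the pipeline so
    # a broken change never reaches production.
    import subprocess
    import sys

    STAGES = [
        ("build", ["make", "build"]),
        ("test", ["make", "test"]),
        ("deploy", ["make", "deploy"]),  # no human gate = continuous deployment
    ]

    def run_pipeline():
        for name, cmd in STAGES:
            print("--- stage:", name)
            if subprocess.run(cmd).returncode != 0:
                sys.exit("stage '%s' failed; pipeline aborted" % name)
        print("all stages passed; change is live")

    if __name__ == "__main__":
        run_pipeline()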

However, knowing the terms and their definitions isn't enough to help you determine when and where it is best to use each. Because, of course, every shop is different.

It would be great if the market clearly distinguished between concepts and tools and their uses, as they do with terms like DevOps. Oh, wait.

"DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs , a software delivery automation company. "It's not really a process or a toolset, or a technology."

But, alas, industry terms are rarely spelled out that succinctly. Nor are they followed with hints and tips on how and when to use them. Hence this guide, which aims to help you learn when to use what.

Choose your accelerator according to your need for speed

But wait -- Isn't speed the key to all software development? These days, companies routinely require their developers to update or add features once per day, week, or month. This was unheard of back in the day, even in the era of agile software development .

That's not the end of it; some businesses push for software updates to be faster still. "If you work for Amazon, it might be every few seconds," says Sehringer.

Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third.

Let's just cut to the chase on all that, shall we?

"Just think of continuous as 'automated,'" says Nate Berent-Spillson, senior delivery director at Nexient , a software services provider. "Automation is driving down cost and the time to develop and deploy."

Well, frack, why don't people just say automation?

Add to the idea of automation the concepts of continuous build, continuous delivery, continuous everything, which are central to DevOps, and we find ourselves talking in circles. So, let's get right to sorting all that out.

... ... ...

Rinse. Repeat, repeat, repeat, repeat (the point of automation in DevOps)

Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer.

In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates.

"In an ideal world, you would push a button to release every few seconds," Sehringer says. But this is not an ideal world, and so people plug up the process along the way.

A company may need approval for an application change from its legal department. "Some companies are heavily regulated and may need additional gates to ensure compliance," notes Sehringer. "It's important to understand where these bottlenecks are." The ARA software should improve efficiencies and ensure the application is released or updated on schedule.
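
Mechanically, such a compliance gate is just a release step that blocks until someone signs off. A hedged sketch, with approval_granted() as a hypothetical stand-in for a query against a real ticketing or change-management system:

    # Sketch of a manual approval gate in an otherwise automated release.
    # approval_granted() is a hypothetical stand-in for a compliance or
    # ticketing system query; everything around the gate stays automated.
    import time

    def approval_granted(change_id):
        return False  # placeholder: in reality, ask legal/compliance

    def gated_release(change_id, poll_seconds=60, max_polls=10):
        for _ in range(max_polls):
            if approval_granted(change_id):
                print("change %s approved; releasing" % change_id)
                return True
            time.sleep(poll_seconds)
        print("change %s not approved in time; release blocked" % change_id)
        return False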

"Developers are more familiar with continuous integration," he says. "Application release automation is more recent and thus less understood."

... ... ...

Pam Baker has written hundreds of articles published in leading technology, business and finance publications including InformationWeek, Institutional Investor magazine, CIO.com, NetworkWorld, ComputerWorld, IT World, Linux World, and more. She has also authored several analytical studies on technology, eight books -- the latest of which is Data Divination: Big Data Strategies -- and an award-winning documentary on paper-making. She is a member of the National Press Club, Society of Professional Journalists and the Internet Press Guild.

[Aug 28, 2017] Could AI Transform Continuous Delivery Development

Notable quotes:
"... It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it ..."
"... continuous delivery == constant change ..."
"... This might be good for developers, but it's a nightmare for the poor, bloody, customers. ..."
"... However, I come at it from the other side, the developers just push new development out and production support is responsible for addressing the mess, it is horrible, there is too much disconnect between developers and their resulting output creating consistent outages. The most successful teams follow the mantra "Eat your own dog food" , developers who support the crap they push ..."
"... But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task. ..."
"... some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, ..."
"... It is a natural consequence of a continuous delivery, emphasis on always evolving and changing and that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'. ..."
"... It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years. ..."
"... All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already. ..."
Aug 28, 2017 | developers.slashdot.org

Anonymous Coward writes:

Re: (Score: Insightful)

Yeah, this is an incredibly low quality article. It doesn't specify what it means by what AI should do, doesn't specify which type of AI, doesn't specify why AI should be used, etc. Junk article.

It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it.

xbytor ( 215790 ) , Sunday August 27, 2017 @04:00PM ( #55093989 ) Homepage
buzzwords (Score: Funny)

> a new paradigm shift.

I stopped reading after this.

cyber-vandal ( 148830 ) writes:
Re: buzzwords

Not enough leveraging core competencies through blue sky thinking and synergistic best of breed cloud machine learning for you?

sycodon ( 149926 ) , Sunday August 27, 2017 @04:10PM ( #55094039 )
Same Old Thing (Score: Insightful)

Holy Fuck.

Continuous integration. Prototyping. Incremental development. Rapid application development. Agile development. Waterfall development. Spiral development.

Now, introducing, "Continuous Delivery"...or something.

Here is the actual model, a model that will exist for the next 1,000 years.

1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned.
5. The software is implemented by an outside vendor, with all the flaws.
6. The software finally operates after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1.

AmazingRuss ( 555076 ) writes:
Re:

If everyone is stupid, no one is.

ColdWetDog ( 752185 ) writes:
Re:

No no. We got rid of line numbers a long time ago.

Graydyn Young ( 2835695 ) writes:
Re:

+1 Depressing

Tablizer ( 95088 ) writes:
AI meets Hunger Games

It's a genetic algorithm where YOU are the population being flushed out each cycle.

TheStickBoy ( 246518 ) writes:
Re:
Here is the actual model, a model that will exist for the next 1,000 years.

1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned.
5. The software is implemented by an outside vendor, with all the flaws.
6. The software finally operates after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1.

You just accurately described a 6-year project within our organization... and it made me cry. Does this model have a name? An Urban Dictionary name? If not, it needs one.

alvinrod ( 889928 ) , Sunday August 27, 2017 @04:15PM ( #55094063 )
Re: buzzwords (Score: Insightful)

Yeah, maybe there's something useful in TFA, but I'm not really inclined to go looking based on what was in the summary. At no point did the person being quoted actually say anything of substance.

It's just buzzword soup with a dash of new technologies thrown in.

Five years ago they would have said practically the same words, but just talked about utilizing the cloud instead of AI.

I'm also a little skeptical of any study published by a company looking to sell you what the study has just claimed to be great. That doesn't mean it's a complete sham, but how hard did they look for other explanations for why some companies are more successful than others?

phantomfive ( 622387 ) writes:
Re:

At first I was skeptical, but I read some online reviews of it, and it looks pretty good [slashdot.org]. All you need is some AI and everything is better.

Anonymous Coward writes:
I smell Bullshit Bingo...

that's all, folks...

93 Escort Wagon ( 326346 ) writes:
Meeting goals

I notice the targets are all set from the company's point of view... including customer satisfaction. However it's quite easy to meet any goal, as long as you set it low enough.

Companies like Comcast or Qwest objectively have abysmal customer satisfaction ratings; but they likely meet their internal goal for that metric. I notice, in their public communications, they always use phrasing along the lines of "giving you an even better customer service experience" - again, the trick is to set the target low and

petes_PoV ( 912422 ) , Sunday August 27, 2017 @05:56PM ( #55094339 )
continuous delivery == constant change (Score: Insightful)

This might be good for developers, but it's a nightmare for the poor, bloody, customers.

Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake.

This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any) bugs are found and whether any alterations to working practices have to be introduced.

So to have developers lob a new "release" over the wall at frequent intervals is not useful, it isn't clever, nor does it save (the users) any money or speed up their acceptance. It just costs more in integration testing, floods the change control process with "issues" and means that when you report (again, developers: not if ) problems, it is virtually impossible to describe exactly which release you are referring to and even more impossible for whoever fixes the bugs to produce the same version to fix and then incorporate those fixes into whatever happens to be the latest version - that hour. Even more so when dozens of major corporate customers are ALL reporting bugs with each new version they test.

SethJohnson ( 112166 ) writes:
Re:

Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake. This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any) bugs are found and whether any alterations to working practices have to be introduced.

I wanted to chime in with a tangible anecdote to support your

Herkum01 ( 592704 ) writes:
Re:

I can sympathize with that view - it can appear that too many developers are focused upon deployment/testing rather than actual development.

However, I come at it from the other side: the developers just push new development out and production support is responsible for addressing the mess. It is horrible; there is too much disconnect between developers and their resulting output, creating consistent outages. The most successful teams follow the mantra "Eat your own dog food" - developers who support the crap they push.

JohnFen ( 1641097 ) writes:
Re:
This might be good for developers

It's not even good for developers.

AmazingRuss ( 555076 ) writes:
"a new paradigm shift."

Another one?

sethstorm ( 512897 ) writes:
Let's hope not.

AI is enough of a problem, why make it worse?

bobm ( 53783 ) writes:
According to one study

One study, well then I'm sold.

But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task.

angel'o'sphere ( 80593 ) writes:
Re:

Why should users not like it? If you shop on Amazon you don't know if a specific feature you notice today came there via continuous delivery or a more traditional process.

Junta ( 36770 ) writes:
Re:

The crux of the problem is that we (in these discussions and the analysts) describe *all* manner of 'software development' as the same thing. Whether it's a desktop application, an embedded microcontroller in industrial equipment, a web application for people to get work done, or a webapp to let people see the latest funny cat video.

Then we start talking past each other, some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, others t

angel'o'sphere ( 80593 ) writes:
Re:

Well, 'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition. Continuous delivery is basically only the next logical step after continuous integration. You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes, so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints, you roll back.
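
What is being described here is essentially a canary rollout. A minimal sketch in Python, assuming hypothetical deploy_to(), error_rate() and rollback() hooks into a real orchestration and monitoring stack:

    # Sketch of the canary pattern: ship to a subset of nodes, watch for
    # trouble, roll back or continue. The three hooks below are
    # hypothetical stand-ins for real orchestration and monitoring.

    def deploy_to(nodes, version):
        print("deploying", version, "to", nodes)

    def rollback(nodes):
        print("rolling back", nodes)

    def error_rate(nodes):
        return 0.0  # stub: read this from monitoring in real life

    def canary_deploy(version, nodes, fraction=0.1, max_errors=0.01):
        cutoff = max(1, int(len(nodes) * fraction))
        canary, rest = nodes[:cutoff], nodes[cutoff:]
        deploy_to(canary, version)
        if error_rate(canary) > max_errors:
            rollback(canary)  # only the canary subset of customers saw it
            return False
        deploy_to(rest, version)  # canary healthy; finish the rollout
        return True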

JohnFen ( 1641097 ) writes:
Re:
You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you roll back.

Why do you consider this to be a good thing? It's certainly not for those poor customers who were chosen to be involuntary beta testers, and it's also not for the rest of the customers who have to deal with software that is constantly changing underneath them.

Junta ( 36770 ) writes:
Re:
'Continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition.

It is a natural consequence of continuous delivery, with its emphasis on always evolving and changing, and of the attitude that the developer is king and no one can question developer opinion. The developer decides it should move, so it moves. No pesky human testers to stand up and say 'you confused the piss out of us' and make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'.

If you have crashes on those nodes or customer complaints you roll back.

Note that a customer with a choice is likely to just go somewhere else rather than use your software.

manu0601 ( 2221348 ) writes:
AI written paper

I suspect that article was actually written by an AI. That would explain why it makes so little sense to the human mind.

4wdloop ( 1031398 ) writes:
IT what?

IT in my company does network, Windows, Office and Virus etc. type of work. Is this what they talk about? Anyway, it's been long outsourced to IT (as in "Indian" technology)...

Comrade Ogilvy ( 1719488 ) writes:
For some businesses maybe but...

I recently interviewed at a couple of the newfangled big data marketing startups that correlate piles of stuff to help target ads better, and they were continuously deploying up the wazoo. In fact, they had something like zero people doing traditional QA.

It was not totally insane at all. But they did have a blasé attitude about deployments -- if stuff don't work in production they just roll back, and not worry about customer input data being dropped on the floor. Heck, they did not worry much about da

JohnFen ( 1641097 ) writes:
Re:
But they did have a blasé attitude about deployments -- if stuff don't work in production they just roll back, and not worry about customer input data being dropped on the floor.

It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years.

Njovich ( 553857 ) writes:
No

You want your deployment system to be predictable, and as my old AI professor used to say, intelligent means hard to predict. You don't want AI for systems that just have to do the exact same thing reliably over and over again.

angel'o'sphere ( 80593 ) writes:
Summary sounds retarded

A continuous delivery pipeline has as much AI as a nematode has natural intelligence ... probably even less.

Junta ( 36770 ) writes:
In other words...

Analyst who understands neither software development nor AI proceeds to try to sound insightful about both.

JohnFen ( 1641097 ) writes:
All I know is

All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already.

jmcwork ( 564008 ) writes:
Every morning: git update; make install

As long as customers are comfortable with doing this, I do not see a problem. Now, that will require that developers keep making continuous,

[Jun 17, 2017] How containers and DevOps transformed Duke University's IT department by Chris Collins

Jun 16, 2017 | opensource.com

...At Duke University's Office of Information Technology (OIT), we began looking at containers as a way to achieve higher density from the virtualized infrastructure used to host websites. Virtual machine (VM) sprawl had started to become a problem. We favored separating each client's website onto its own VM for both segregation and organization, but steady growth meant we were managing more servers than we could handle. As we looked for ways to lower management overhead and make better use of resources, Docker hit the news, and we began to experiment with containerization for our web applications.

For us, the initial investigation of containers mirrors a shift toward a DevOps culture.

Where we started

When we first looked into container technology, OIT was highly process driven and composed of monolithic applications and a monolithic organizational structure. Some early forays into automation were beginning to lead the shift toward a new cultural organization inside the department, but even so, the vast majority of our infrastructure consisted of "pet" servers (to use the pets vs. cattle analogy). Developers created their applications on staging servers designed to match production hosting environments and deployed by migrating code from the former to the latter. Operations still approached hosting as it always had: creating dedicated VMs for individual services and filing manual tickets for monitoring and backups. A service's lifecycle was marked by change requests, review boards, standard maintenance windows, and lots of personal attention.

A shift in culture

As we began to embrace containers, some of these longstanding attitudes toward development and hosting began to shift a bit. Two of the larger container success stories came from our investigation into cloud infrastructure. The first project was created to host hundreds of R-Studio containers for student classes on Microsoft Azure hosts, breaking from our existing model of individually managed servers and moving toward "cattle"-style infrastructure designed for hosting containerized applications.

The other was a rapid containerization and deployment of the Duke website to Amazon Web Services while in the midst of a denial-of-service attack, dynamically creating infrastructure and rapidly deploying services.

The success of these two wildly nonstandard projects helped to legitimize containers within the department, and more time and effort was put into looking further into their benefits and those of on-demand and disposable cloud infrastructure, both on-premises and through public cloud providers.

It became apparent early on that containers lived within a different timescale from traditional infrastructure. We started to notice cases where short-lived, single-purpose services were created, deployed, lived their entire lifecycle, and were decommissioned before we completed the tickets created to enter them into inventory, monitoring, or backups. Our policies and procedures were not able to keep up with the timescales that accompanied container development and deployment.

In addition, humans couldn't keep up with the automation that went into creating and managing the containers on our hosts. In response, we began to develop more automation to accomplish usually human-gated processes. For example, the dynamic migration of containers from one host to another required a change in our approach to monitoring. It is no longer enough to tie host and service monitoring together or to submit a ticket manually, as containers are automatically destroyed and recreated on other hosts in response to events.
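
The shape of that automation is simple even if the tooling is not: container lifecycle events, rather than tickets, drive what gets monitored. A sketch with a hypothetical event stream and monitoring API (the Docker events API plus a monitoring system's registration calls would fill these roles in practice):

    # Sketch of event-driven monitoring registration: container
    # start/die events, not manually filed tickets, add and remove
    # monitoring. The event dicts and Monitoring class are hypothetical.

    class Monitoring:
        def register(self, container_id, host):
            print("monitor", container_id, "on", host)

        def deregister(self, container_id):
            print("stop monitoring", container_id)

    def handle_events(events, monitoring):
        for ev in events:
            if ev["action"] == "start":
                monitoring.register(ev["id"], ev["host"])
            elif ev["action"] == "die":
                monitoring.deregister(ev["id"])

    handle_events(
        [{"action": "start", "id": "web-1", "host": "node-a"},
         {"action": "die", "id": "web-1"}],
        Monitoring(),
    )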

Some of this was in the works for us already - automation and container adoption seem to parallel one another. At some point, they become inextricably intertwined.

As containers continued to grow in popularity and OIT began to develop tools for container orchestration, we tried to further reinforce the "cattle not pets" approach to infrastructure. We limited login of the hosts to operations staff only (breaking with tradition) and gave all hosts destined for container hosting a generic name. Similar to being coached to avoid naming a stray animal in an effort to prevent attachment, servers with generic names became literally forgettable. Management of the infrastructure itself became the responsibility of automation, not humans, and humans focused their efforts on the services inside the containers.

Containers also helped to usher continuous integration into our everyday workflows. OIT's Identity Management team members were early adopters and began to build Kerberos key distribution centers (KDCs) inside containers using Jenkins, building regularly to incorporate patches and test the resulting images. This allowed the team to catch breaking builds before they were pushed out onto production servers. Prior to that, the complexity of the environment and the widespread impact of an outage made patching the systems a difficult task.

Embracing continuous deployment

Since that initial use case, we've also embraced continuous deployment. There is a solid pattern for every project that gets involved with our continuous integration/continuous deployment (CI/CD) system. Many teams initially have a lot of hesitation about automatically deploying when tests pass, and they tend to build checkpoints requiring human intervention. However, as they become more comfortable with the system and learn how to write good tests, they almost always remove these checkpoints.

Within our container orchestration automation, we use Jenkins to patch base images on a regular basis and rebuild all the child images when the parent changes. We made the decision early that the images could be rebuilt and redeployed at any time by automated processes. This meant that any code included in the branch of the git repository used in the build job would be included in the image and potentially deployed without any humans involved. While some developers initially were uncomfortable with this, it ultimately led to better development practices: Developers merge into the production branch only code that is truly ready to be deployed.

This practice facilitated rebuilding container images immediately when code is merged into the production branch and allows us to automatically deploy the new image once it's built. At this point, almost every project using the automatic rebuild has also enabled automated deployment.
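
The parent-to-child rebuild rule they describe is essentially a walk down the image dependency tree. A sketch with hypothetical image names, where print() stands in for the actual build-and-push:

    # Sketch of "rebuild the children when the parent image changes".
    # IMAGE_TREE maps each base image to the images built FROM it; the
    # names are hypothetical and print() stands in for build-and-push.

    IMAGE_TREE = {
        "base-os": ["base-python", "base-java"],
        "base-python": ["webapp", "batch-jobs"],
        "base-java": ["identity-kdc"],
    }

    def rebuild(image):
        print("rebuilding", image)
        for child in IMAGE_TREE.get(image, []):
            rebuild(child)  # children inherit the freshly patched parent

    rebuild("base-os")  # a patched base ripples down the whole tree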

Looking ahead

Today the adoption of both containers and DevOps is still a work in progress for OIT.

Internally we still have to fight the entropy of history even as we adopt new tools and culture. Our biggest challenge will be convincing people to break away from the repetitive break-fix mentality that currently dominates their jobs and to focus more on automation. While time is always short, and the first step always daunting, in the long run adopting automation for day-to-day tasks will free them to work on more interesting and complex projects.

Thankfully, people within the organization are starting to embrace working in organized or ad hoc groups of cross-discipline members and developing automation together. This will definitely become necessary as we embrace automated orchestration and complex systems. A group of talented individuals who possess complementary skills will be required to fully manage the new environments.

[May 19, 2017] IT ops doesn't matter. Really by Dale Vile

Notable quotes:
"... All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'. ..."
"... This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do. ..."
"... And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective. ..."
"... There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term. ..."
Dec 19, 2016 | theregister.co.uk

Get real – it's not all about developers and DevOps

Listen to some DevOps evangelists talk, and you would get the impression that IT operations teams exist only to serve the needs of developers. Don't get me wrong, software development is a good competence to have in-house if your organisation depends on custom applications and services to differentiate its business.

As an ex-developer, I appreciate the value of being able to deliver something tailored to a specific need, even if it does pain me to see the shortcuts too often taken nowadays due to ignorance of some of the old disciplines, or an obsession with time-to-market above all else.

But before this degenerates into an 'old guy' rant about 'youngsters today', let's get back to the point that I really want to make.

All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'.

This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do.

This becomes obvious when you recognize how much stuff runs in an Enterprise IT landscape - software packages enabling core business processes, messaging, collaboration and workflow platforms keeping information flowing, analytics environments generating critical business insights, and desktop and mobile estates serving end user access needs - to name but a few.

Vital operations

There's then everything required to deal with security, data protection, compliance and other aspects of risk. Apart from the odd bit of integration and tailoring work - the need for which is diminishing with modern 'soft-coded', connector-driven solutions - very little of all this has anything to do with development and developers.

A big part of the rationale for modernising your application landscape and migrating to the latest flexible and open software packages and platforms is to eradicate the need for coding wherever you can. Code is expensive to build and maintain, and the same can often be achieved today through software switches, policy-driven workflow, drag-and-drop interface design, and so on. Sensible IT teams only code when they absolutely have to.

And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective.

There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term.

Against this background, an 'appropriate' level of custom development and the selective use of cloud services will be the way forward for most organisations, all underpinned by a well-run data centre environment acting as the hub for hybrid delivery. This is the approach that tends to be taken by the most successful enterprise IT teams, and the element that makes particularly high achievers stand out is agile and effective IT operations.

This isn't just to support any DevOps agenda you might have; it is demonstrably a key enabler across the board. Of course, if you work in operations, you will already intuitively know all this. But if you want some ammunition to spell it out to others who need enlightenment, take a look at our research report entitled IT Ops and a Digital Business Enabler; more than just keeping the lights on. This is based on input from 400 senior European IT professionals. ®

Paul Smith
I think this is one fad that has run its course. If nothing else, the one thing that cloud has brought to the software world is the separation of software from the environment it runs in, and since the Ops side of DevOps is all about the integration of the platform and software, what you end up with in a cloudy world is a lot of people looking for a new job.
Anonymous Coward

For decades developers have been ignored by infrastructure vendors because the decision makers buying infrastructure sit in the infrastructure teams. Now with the cloud etc vendors realize they will lose supporters within these teams.

So instead - infrastructure vendors target developers to become their next fanboys.

E.g. Dear developer, you won't need to speak to your infrastructure admins anymore to setup a development environment. Now you can automate, orchestrate the provisioning of your containerized development environment at the push of a button. Blah blah blah, but you have to buy our storage.

I remember the days when every DBA wanted RAID10 just because thats what the whitepaper recommended. By that time storage technology had long moved on, but the DBA still talked about Full Stripe Writes.

Now with DevOps you'll have Developers influencing infrastructure decisions, because they just learned about snapshots. And yes - it has to be all flash - and designed from the ground up by millennials that eat avocado.

John 104
Re: DevOps was never supposed to replace Operations

Yes, DevOps isn't about replacing Ops. But try telling that to the powers that be. It is sold and seen as a cost-cutting measure.

As for devs learning Ops and vice versa, there are very few on both sides who really understand what it takes to do the other's job. I have a very high regard for Devs, but when it comes to infra, they are, as a whole, very incompetent. Just like I'm incompetent in Dev. Can't have one without the other. I feel that in time, the pendulum will swing away from cloud as execs and accountants realize how it isn't really saving any money.

The real question is: Will there be any qualified operations engineers available, or will they all have retired out or found work elsewhere? It isn't easy to be an ops engineer; it takes a lot of experience to get there, and qualified candidates are hard to come by. Let's face it, in today's world, it's a dying breed.

John 104
Very Nice

Nice of you to point out what us in Ops have known all along. I'm afraid it will fall on deaf ears, though. Until the executives who constantly fall for the new shiny are made to actually examine business needs and processes and make business decisions based on said.

Our laughable move to cloud here involved migrating off of on-prem Exchange to O365. The idea was to free up our operations team to allow us to do more in-house projects. Funny thing is, it takes more management of the service than we ever did on premises. True, we aren't maintaining the Exchange infra, but now we have SQL servers, DCs, ADFS, etc., to maintain in the MS cloud to allow authentication just to use the product. And because mail and messaging is business critical, we have to have geographically disparate instances of both. And the cost isn't pretty. Yay cloud.

[May 17, 2017] Talk of tech innovation is bullsht. Shut up and get the work done – says Linus Torvalds

May 17, 2017 | theregister.co.uk

Linus Torvalds believes the technology industry's celebration of innovation is smug, self-congratulatory, and self-serving. The term of art he used was more blunt: "The innovation the industry talks about so much is bullshit," he said. "Anybody can innovate. Don't do this big 'think different'... screw that. It's meaningless."

In a deferential interview at the Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux kernel and his attitude toward work.

"All that hype is not where the real work is," said Torvalds. "The real work is in the details."

Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration, and one per cent innovation.

As the creator and benevolent dictator of the open-source Linux kernel , not to mention the inventor of the Git distributed version control system, Torvalds has demonstrated that his approach produces results. It's difficult to overstate the impact that Linux has had on the technology industry. Linux is the dominant operating system for servers. Almost all high-performance computing runs on Linux. And the majority of mobile devices and embedded devices rely on Linux under the hood.

The Linux kernel is perhaps the most successful collaborative technology project of the PC era. Kernel contributors, totaling more than 13,500 since 2005, are adding about 10,000 lines of code, removing 8,000, and modifying between 1,500 and 1,800 daily, according to Zemlin. And this has been going on – though not at the current pace – for more than two and a half decades.

"We've been doing this for 25 years and one of the constant issues we've had is people stepping on each other's toes," said Torvalds. "So for all of that history what we've done is organize the code, organize the flow of code, [and] organize our maintainership so the pain point – which is people disagreeing about a piece of code – basically goes away."

The project is structured so people can work independently, Torvalds explained. "We've been able to really modularize the code and development model so we can do a lot in parallel," he said.

Technology plays an obvious role but process is at least as important, according to Torvalds.

"It's a social project," said Torvalds. "It's about technology and the technology is what makes people able to agree on issues, because ... there's usually a fairly clear right and wrong."

But now that Torvalds isn't personally reviewing every change as he did 20 years ago, he relies on a social network of contributors. "It's the social network and the trust," he said. "...and we have a very strong network. That's why we can have a thousand people involved in every release."

The emphasis on trust explains the difficulty of becoming involved in kernel development, because people can't sign on, submit code, and disappear. "You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust," said Torvalds.

Ten years ago, Torvalds said he told other kernel contributors that he wanted to have an eight-week release schedule, instead of a release cycle that could drag on for years. The kernel developers managed to reduce their release cycle to around two and a half months. And since then, development has continued without much fuss.

"It's almost boring how well our process works," Torvalds said. "All the really stressful times for me have been about process. They haven't been about code. When code doesn't work, that can actually be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems ... That's when people start getting really angry at each other." ®

[May 17, 2017] So your client's under-spent on IT for decades and lives in fear of an audit

Notable quotes:
"... Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. ..."
May 17, 2017 | theregister.co.uk
12 May 2017 at 14:56, Trevor Pott

Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous "agility" to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to "cover your ass", and have discovered it's not quite so difficult as we might think.

... ... ...

None of this is particularly surprising. When you have an environment where each workload is a pet, change is slow, difficult, and requires a lot of testing. Reverting changes is equally tedious, and so a lot of planning goes into making sure that any given change won't cascade and cause knock-on effects elsewhere.

In the real world this is really the result of two unfortunate aspects of human nature. First: everyone hates doing documentation, so it's highly unlikely that in an unstructured environment every change from the last refresh was documented. The second driver of chaos and problems is that there are few things more permanent than a temporary fix.

When you don't have the budget for the right hardware, software or services you make do. When something doesn't work you "innovate" a solution. When that breaks something, you patch it. You move from one problem to the next, and if you're not careful, you end up with something so fragile that if you breathe on it, it falls over. At this point, you burn it all down and restart from scratch.

This approach to IT is fine - if you have 5, 10 or even 50 workloads. A single techie can reasonably be expected to keep that all in their head, know their network and solve any problems they encounter. Unfortunately, 50 workloads is today restricted to only the smallest of shops. Everyone else is juggling too many workloads to be playing the pets game any more.

Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. Microsoft's group policy can be considered a really primitive version of this, with System Center being a more powerful but miserable to use example. The modern friendly tools being Puppet, Chef, Saltstack, Ansible and the like.

Once you have desired state configs in place we're no longer beating individual workloads into shape, or checking them manually for deviation from design. If all does what it says on the tin, configurations are applied and errors thrown if they can't be. Usually there is some form of analysis software to determine how many of what is out of compliance. This is a big step forward.
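
All of these tools share roughly the same convergence loop: fetch the declared state, diff it against reality, fix only the drift. A minimal sketch, with fetch_desired(), observe() and apply_item() as hypothetical hooks into a real configuration system:

    # Sketch of the desired-state convergence loop shared (roughly) by
    # Puppet, Chef, Salt and friends. The three hooks below are
    # hypothetical stand-ins for the real config store and OS probes.

    def fetch_desired():
        return {"ntp.server": "time.example.com", "sshd.enabled": True}

    def observe():
        return {"ntp.server": "time.example.com", "sshd.enabled": False}

    def apply_item(key, value):
        print("converging", key, "->", value)

    def converge_once():
        desired, actual = fetch_desired(), observe()
        for key, want in desired.items():
            if actual.get(key) != want:
                apply_item(key, want)  # idempotent: compliant items untouched

    if __name__ == "__main__":
        converge_once()  # a real agent repeats this on a timer or trigger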

... ... ...

This article is sponsored by HPE.

[May 16, 2017] The Technocult Soleil Wiki Fandom powered by Wikia

May 16, 2017 | soleil.wikia.com
The Technocult, also known as the Machine cult, is the semi-official name given by The Church of the Crossed Heart to followers of the Mechanicum faith, who supply and maintain virtually all of the church's technology, engineering and industry.

Although they serve with the Church of the Crossed Heart, they have their own version of worship that differs substantially in theology and ritualistic forms from that of The Twelve Angels. Instead, the Technocult worships a deity they call the Machine God, or Omnissiah. The Technocult believes that knowledge is divine and comes only from the Omnissiah, thus making any objects that demonstrate the application of knowledge (i.e., machinery) or contain it (books) holy in the eyes/optical implants of the Technocult. The Technocult regards organic flesh as weak and imperfect, with the Rot being viewed as a divine message from the Omnissiah demonstrating its weakness, thus making its removal and replacement by mechanical, bionic parts a sacred process that brings them closer to their god, with many of the cult's older members having very little of their original bodies remaining.

The date of the cult's formation is unknown, or a closely guarded secret...

[May 16, 2017] 10 Things I Hate About Agile Development!

May 16, 2017 | www.allaboutagile.com

1. Saying you're doing Agile just cos you're doing daily stand-ups. You're not doing agile. There is so much more to agile practices than this! Yet I'm surprised how often I've heard that story. It really is remarkable.

... ... ....

3. Thinking that agile is a silver bullet and will solve all your problems. That's so naive; of course it won't! Humans and software are a complex mix with any methodology, let alone with an added dose of organisational complexity. Agile development will probably help with many things, but it still requires a great deal of skill, and there is no magic button.

... ... ...

8. People who use agile as an excuse for having no process or producing no documentation. If documents are required or useful, there's no reason why an agile development team shouldn't produce them. Just not all up-front; do it as required to support each feature or iteration. JFDI (Just F'ing Do It) is not agile!

David, 23 February 2010 at 1:21 am

So agree on number 1. Following "Certified" Scrum Master training (prior to the exam requirement), a manager I know now calls every regular status meeting a "scrum", regardless of project or methodology. Somehow the team is more agile as a result.

Ironically he pulled up another staff member for "incorrectly" using the term retrospective.

Andy Till, 23 February 2010 at 9:28 am

I can think of far worse, how about pairing with the guy in the office who is incapable of compromise?

Steve Watson, 13 May 2010 at 10:06 am

Kelly

Good list!

I like number 9, as I find with testing people think that they no longer need to write proper test cases and scripts - a list of confirmations on a user story will do. Well, if it's a simple change I guess you can dispense with test scripts, but if it's something more complex then there is no reason NOT to write scripts. If you have a reasonably large team of people who could execute the tests, they can follow the test steps and validate against the expected results. It also means that you can sensibly lump together test cases and cover them with one test.

If you don't think about how you will execute them and just tackle them one by one off the confirmations list, you miss the opportunity to run one test and cover many separate cases, saving time.

I always find test scripts useful if someone different re-runs a test, as they then follow the same process as before. This is why we automate regression so the tests are executed the same each time.
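
The point about one scripted test covering many cases at once is easy to illustrate. A sketch using pytest's parameterization, with a hypothetical cart-total example; each rerun walks the identical steps over the whole table of cases:

    # Sketch of "one script, many cases": a single parameterized test
    # replays the same steps over a table of inputs, so every rerun
    # follows the identical process. The cart example is hypothetical.
    import pytest

    CASES = [
        ("empty cart", [], 0),
        ("one item", [10.0], 10.0),
        ("several items", [10.0, 5.0, 2.5], 17.5),
    ]

    @pytest.mark.parametrize("label,prices,expected_total", CASES)
    def test_cart_total(label, prices, expected_total):
        assert sum(prices) == expected_total, label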

John Quincy, 24 October 2011 at 12:02 am

I am not a fan of agile. Unless you have a small group of developers who are in perfect sync with each other at all times, this "one size fits all" methodology is destructive and downright dangerous. I have personally witnessed a very good company go out of business this year because they transformed their development shop from a home-grown iterative methodology to SCRUM. The team was required to abide by the SCRUM rules 100%. They could not keep up with customer requirements and produced bug-filled releases that were always late. These developers went from fun, friendly, happy people (pre-SCRUM) [who NEVER missed a date] to bitter, sarcastic, hard-to-be-around 'employees'. When the writing was on the wall a couple of months back, the good ones got the hell out of there, and the company could not recover.

Some day, I'm convinced that Beck through Thomas will proclaim that the Agile Manifesto was all a big practical joke that got out of control.

This video pretty much lays out the one and only reason why management wants to implement Agile:

http://www.youtube.com/watch?v=nvks70PD0Rs

grumpasaurus, 9 February 2014 at 4:30 pm

It's a cycle of violence when a project claims to be Agile just because of standups and iterations, and doesn't think about resolving the core challenges it had to begin with. People are left still battling said challenges and then say that Agile sucks.

[May 15, 2017] Wall Street Journal Enterprises Are Not Ready for DevOps, but May Not Survive Without It by Abel Avram

Notable quotes:
"... while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise. ..."
"... The tools needed to implement a DevOps culture are lacking. While some of the tools can be provided by vendors and others can be created within the enterprise, a process which takes a long period of time, "there is a marathon of organizational change and restructuring that must occur before such tools could ever be bought or built." ..."
Jun 06, 2014 | www.infoq.com
Rachel Shannon-Solomon suggests that most enterprises are not ready for DevOps, while Gene Kim says that they must make themselves ready if they want to survive.

Rachel Shannon-Solomon, a venture associate at Work-Bench, has recently written a blog post for The Wall Street Journal entitled DevOps Is Great for Startups, but for Enterprises It Won't Work-Yet , arguing that while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise.

While acknowledging that large companies such as Google and Facebook benefit from implementing DevOps, and that "there is no lack of appetite to experiment with DevOps practices" within "Fortune 500s and specifically financial services firms", Shannon-Solomon remarks that "there are few true change agents within enterprise IT willing to effect DevOps implementations."

She has come to this conclusion based on "conversations with startup founders, technology incumbents offering DevOps solutions, and technologists within large enterprises."

Shannon-Solomon brings four arguments to support her position:

... ... ...

Shannon-Solomon ends her post wondering "how long will it be until enterprises are forced to accept that they must accelerate their experiments with DevOps" and hoping that "more individual change agents within large organizations may emerge" in the future.

[May 15, 2017] Why Your Users Hate Agile

No methodology can substitute for good engineers who actually talk to and work with each other. Good engineers can benefit from a better software development methodology, but even the best software development methodology is powerless to convert mediocre developers into stars.
Notable quotes:
"... disorganized and never-ending ..."
"... Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering.... ..."
"... As TFA points out, that always works fine when your requirements are *all* known an are completely static. That rarely happens in most fields. ..."
"... The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again. ..."
"... If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. ..."
"... It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible. ..."
"... The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done. ..."
"... On a sufficiently large project, some kind of upfront design is necessary. ..."
"... If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. ..."
"... there is no substitute for good engineers who actually talk to and work with each other. ..."
"... If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish ..."
"... The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, ..."
"... In defense everything has to meet spec, but it doesn't have to work. ..."
"... There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. ..."
"... I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them). ..."
"... Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards. ..."
Jun 05, 2013 | Slashdot

"What developers see as iterative and flexible, users see as disorganized and never-ending.

This article discusses how some experienced developers have changed that perception. '... She's been frustrated by her Agile experiences - and so have her clients.

"There is no process. Things fly all directions, and despite SVN [version control] developers overwrite each other and then have to have meetings to discuss why things were changed. Too many people are involved, and, again, I repeat, there is no process.' The premise here is not that Agile sucks - quite to the contrary - but that developers have to understand how Agile processes can make users anxious, and learn to respond to those fears. Not all those answers are foolproof.

For example: 'Detailed designs and planning done prior to a project seem to provide a "safety net" to business sponsors, says Semeniuk. "By providing a Big Design Up Front you are pacifying this request by giving them a best guess based on what you know at that time - which is at best partial or incorrect in the first place." The danger, he cautions, is when Big Design becomes Big Commitment - as sometimes business sponsors see this plan as something that needs to be tracked against.

"The big concern with doing a Big Design up front is when it sets a rigid expectation that must be met, regardless of the changes and knowledge discovered along the way," says Semeniuk.' How do you respond to user anxiety from Agile processes?"

Shinobi

Agile summed up (Score:5, Funny)

Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering....

Nerdfest

Re: doesn't work

As TFA points out, that always works fine when your requirements are *all* known and are completely static. That rarely happens in most fields.

Even in the ones where it does, it's usually just management having the balls to say "No, you can give us the next bunch of additions and changes when this is delivered; we agreed on that." It frequently ends up delivering something less than useful.

MichaelSmith

Re: doesn't work (Score:5, Insightful)

The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again.

ArsonSmith

Re: doesn't work (Score:4, Insightful)

...but they can be trusted to say what is most important to them at the time.

No they can't. If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. As with the (probably made-up) quote attributed to Henry Ford: "If I listened to my customers I'd have been trying to make faster horses." Whether he said it or not, the statement is true. Customers know what they have and just want it to be faster/better/etc.; you need to find out what they really need.

AuMatar

Re: doesn't work (Score:5, Insightful)

It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible.

ebno-10db

Re: doesn't work (Score:5, Interesting)

"Proper software engineering" doesn't work.

You're right, but you're going to the other extreme. The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done.

On a sufficiently large project, some kind of upfront design is necessary. Spending too much time on it or going into too much detail is a waste though. Once you start to implement things, you'll see what was overlooked or why some things won't work as planned. If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. Don't get bent out of shape though when someone has a good reason for occasionally breaking that pattern or, as you say, you'll wind up with 500 SLOCs to add 2+2 in the approved manner.

Lastly, I agree that there is no substitute for good engineers who actually talk to and work with each other. Also don't require that every 2 bit decision they make amongst themselves has to be cleared, or even communicated, to the highest levels. If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish. Without good people you'll never get anything decent done, but with good people you still need some kind of organization.

The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, without regard to how important it is. Admittedly bad vendors will try and screw their customers with "that doesn't matter" to excuse every screw-up and bit of laziness. For that reason I much prefer working on in-house projects, where "sure we could do exactly what we planned" gets balanced with the cost and other tradeoffs.

The worst example of those problems is defense projects. As someone I used to work with said: In defense everything has to meet spec, but it doesn't have to work. In the commercial world specs are flexible, but it has to work.

If you've ever worked in that atmosphere you'll understand why every defense project costs a trillion dollars. There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. I'm not talking about meeting difficult requirements if they serve a purpose (that's what you're paid for) but being unwilling to compromise on any spec that somebody at the beginning of the project pulled out of their posterior and obviously doesn't need to be so stringent. An elephant is a mouse built to government specifications.

Ok, you can get such things changed, but it requires 10 hours from program managers for every hour of engineering. Conversely, don't even think about offering a feature or capability that will be useful and easy to implement, but is not in the spec. They'll just start writing additional specs to define it and screw you by insisting you meet those.

As you might imagine, I'm very happy to be back in the commercial world.

Anonymous Coward

Re: doesn't work (Score:2, Interesting)

You've fallen into the trap of using their terminology. As soon as 'the problem' is defined in terms of 'upfront design', you've already lost half the ideological battle.

'The problem' (with methodology) is that people want to avoid the difficult work of thinking hard about the business/customer's problem and coming up with solutions that meet all their needs. But there isn't a substitute for thinking hard about the problem and almost certainly never will be.

The earlier you do that hard thinking about the customer's problems that you are trying to solve, the cheaper, faster and higher quality the result will be. Cheaper? Yes, because bug-fixing done later in the project is a lot more expensive (as numerous software engineering studies have shown). Faster? Yes, because there's less rework. (Also, since there is usually a time = money equivalency, you can't have it done cheap unless it is also done fast.) Higher quality? Yes, because you don't just randomly stumble across quality. Good design trumps bad design every single time.

... ... ...

ebno-10db

Re: doesn't work (Score:4, Interesting)

Until the thing is built or the software is shipped there are many options and care should be taken that artificial administrative constraints don't remove too many of them.

Exactly, and as someone who does both hardware and software I can tell you that that's better understood by Whoever Controls The Great Spec in hardware than in software. Hardware is understood to have physical constraints, so not every change is seen as the result of a screw-up. It's a mentality.

I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at any time. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them).

ebno-10db

Re: doesn't work (Score:2)

http://www.fastcompany.com/28121/they-write-right-stuff

This is my evidence that "proper software engineering" *can* work. The fact that most businesses (and their customers) are willing to save money by accepting less from their software is not the fault of software engineering. We could, and did, build buildings much faster than we do today, when builders were willing to make more mistakes and pay more in human lives. If established industries and their customers began demanding software at that higher standard and were willing to pay for it like it was real engineering, then maybe it would happen more often.

Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards.

donscarletti

Re: doesn't work (Score:3)

260 people maintaining 420,000 lines of code, written to precise externally provided specifications that change once every few years.

This is fine for NASA, but if you want something that does roughly what you need before your competitors come up with something better, you'd better find some better programmers.

[May 15, 2017] DevOps Fact or Fiction

May 15, 2017 | blog.appdynamics.com

In light of all the hype, we have created a DevOps parody series – DevOps: Fact or Fiction.

For those of you who did not see it: in October we created an entirely separate blog (inspired by this), but decided that it is relevant enough to transform into a series on the AppDynamics Blog. The series will point out the good, the bad, and the funny about IT and DevOps. Don't take anything too seriously – it's nearly 100% stereotypes :).

Stay tuned for more DevOps: Fact or Fiction to come. Here we go...

[May 15, 2017] How DevOps is Killing the Developer by Jeff Knupp

Notable quotes:
"... Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction. ..."
"... An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job. ..."
"... Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist. ..."
"... you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceeded it). The problem is purists and ideological zealotry not the particular brand of religion in question. Insistence on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not. ..."
"... There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles require to do it well at an Enterprise/World Class level. ..."
"... I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual levels is not sustainable, as you point out. ..."
"... DevOps roles are strictly automation focused, at least according to all job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, know how to set up monitoring, logging and all that stuff usual sysadmin used to do but in the modern world. In fact I used to apply for DevOps roles but quickly changed my mind as it turned out no companies need a person wearing many hats, it has absolutely nothing to do with creating software. Am I wrong? ..."
Apr 15, 2014 | jeffknupp.com
How 'DevOps' is Killing the Developer

There are two recent trends I really hate: DevOps and the notion of the "full-stack" developer. The DevOps movement is so popular that I may as well say I hate the x86 architecture or monolithic kernels. But it's true: I can't stand it. The underlying cause of my pain? This fact: not every company is a start-up, though it appears that every company must act as though they were.

DevOps

"DevOps" is meant to denote a close collaboration and cross-pollination between what were previously purely development roles, purely operations roles, and purely QA roles. Because software needs to be released at an ever-increasing rate, the old "waterfall" develop-test-release cycle is seen as broken. Developers must also take responsibility for the quality of the testing and release environments.

The increasing scope of responsibility of the "developer" (whether or not that term is even appropriate anymore is debatable) has given rise to a chimera-like job candidate: the "full-stack" developer. Such a developer is capable of doing the job of developer, QA team member, operations analyst, sysadmin, and DBA. Before you accuse me of hyperbole, go back and read that list again. Is there any role in the list whose duties you wouldn't expect a "full-stack" developer to be well versed in?

Where did these concepts come from? Start-ups, of course (and the Agile methodology). Start-ups are a peculiar beast and need to function in a very lean way to survive their first few years. I don't deny this. Unfortunately, we've turned the multiple technical roles that engineers at start-ups were forced to play due to lack of resources into a set of minimum qualifications for the role of "developer".

Many Hats

Imagine you're at a start-up with a development team of seven. You're one year into development of a web application that X's all the Y's and things are going well, though it's always a frantic scramble to keep everything going. If there's a particularly nasty issue that seems to require deep database knowledge, you don't have the liberty of saying "that's not my specialty," and handing it off to a DBA team to investigate. Due to constrained resources, you're forced to take on the role of DBA and fix the issue yourself.

Now expand that scenario across all the roles listed earlier. At any one time, a developer at a start-up may be acting as a developer, QA tester, deployment/operations analyst, sysadmin, or DBA. That's just the nature of the business, and some people thrive in that type of environment. Somewhere along the way, however, we tricked ourselves into thinking that because, at any one time, a start-up developer had to take on different roles he or she should actually be all those things at once.

If such people even existed, "full-stack" developers still wouldn't be used as they should be. Rather than temporarily taking on a single role for a short period of time, then transitioning into the next role, they are meant to be performing all the roles, all the time. And here's what really sucks: most good developers can almost pull this off.

The Totem Pole

Good developers are smart people. I know I'm going to get a ton of hate mail, but there is a hierarchy of usefulness of technology roles in an organization. Developer is at the top, followed by sysadmin and DBA. QA teams, "operations" people, release coordinators and the like are at the bottom of the totem pole. Why is it arranged like this?

Because each role can do the job of all roles below it if necessary.

Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction.

A QA person can't just do the job of a developer in a pinch, nor can a build-engineer do the job of a DBA. They never acquired the specialized knowledge required to perform the role. And that's fine. Like it or not, there are hierarchies in every organization, and people have different skill sets and levels of ability. However, when you make developers take on other roles, you don't have anyone to take on the role of development!

An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job.

Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist.

And this is the crux of the issue. All of the positions previously held by people of various levels of ability are made redundant by the "full-stack" engineer. Large companies love this, as it means they can hire far fewer people to do the same amount of work. In the process, though, actual development becomes a vanishingly small part of a developer's job. This is why we see so many developers who can't pass FizzBuzz: they never really had to write any code. Can you imagine interviewing a chef and asking him what portion of the day he actually devotes to cooking? For developers, that question is now all too apt.

Jack of All Trades, Master of None

If you are a developer of moderately sized software, you need a deployment system in place. Quick: what are the benefits and drawbacks of each of the following systems: Puppet, Chef, Salt, Ansible, Vagrant, Docker? Now implement your deployment solution! Did you even realize which systems had no business being in that list?

We specialize for a reason: human beings are only capable of retaining so much knowledge. Task-switching is cognitively expensive. Forcing developers to take on additional roles traditionally performed by specialists means that they:

... ... ...

What's more, by forcing developers to take on "full-stack" responsibilities, companies are paying far more than the market average for most of those tasks. If a developer makes 100K a year, you can pay four developers 100K each to spend half their time on development and half on release management. Or you can simply hire a release manager at, say, 75K and two developers who develop full-time, and get the same development capacity for much less. And notice the time wasted by developers who are part-time release managers but don't always have releases to manage.
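
Spelling out that staffing arithmetic as a back-of-the-envelope sketch (using the post's illustrative salary figures, nothing more):

    # Option A: four "full-stack" developers at $100K each, splitting time
    # 50/50 between development and release management.
    dev_salary = 100_000
    option_a_cost = 4 * dev_salary            # $400,000
    option_a_dev_fte = 4 * 0.5                # 2.0 FTEs of development

    # Option B: two full-time developers plus one dedicated release manager.
    release_mgr_salary = 75_000
    option_b_cost = 2 * dev_salary + release_mgr_salary  # $275,000
    option_b_dev_fte = 2.0                    # the same 2.0 FTEs of development

    print(option_a_cost - option_b_cost)      # $125,000/year premium for Option A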

Don't Kill the Developer

The effect of all of this is to destroy the role of "developer" and replace it with a sort of "technology utility-player". Every developer I know got into programming because they actually enjoyed doing it (at one point). You do a disservice to everyone involved when you force your brightest people to take on additional roles.

Not every company is a start-up. Start-ups don't make developers wear multiple hats by choice, they do so out of necessity. Your company likely has enough resource constraints without you inventing some. Please, don't confuse "being lean" with "running with the fewest possible employees". And for God's sake, let developers write code!


Enno 2 years ago
Some background... I started life as a dev (30 years ago), and have mostly been doing sysadmin and project tech lead sorts of work for the last 15. I've always assumed the DevOps movement was resulting in sub-par development and sub-par sysadmin/ops precisely because people were timesharing their concerns.

But what it does bring to the party is a greater level of awareness of the other guy's problems. There's nothing quite like being rung out of bed at 3am to motivate a developer to improve his product's logging to make supporting it easier. Similarly, the admin exposed to the vagaries of promoting things into production in a supportable, repeatable, deterministic manner quickly learns to appreciate the issues there. So DevOps has served a purpose and has offered benefits to the organisations that signed on for it.

But, you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceded it). The problem is purists and ideological zealotry, not the particular brand of religion in question. Insistence on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not.
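
Enno's point about the 3am call improving logging deserves a concrete illustration. A minimal Python sketch (all names here, like process_order, are hypothetical) of the kind of logging that helps whoever gets paged, versus a bare print or a swallowed exception:

    import logging

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    log = logging.getLogger("orders")

    def process_order(order_id, items):
        # Context first: which order, how much work, so a 3am reader can follow.
        log.info("processing order %s with %d item(s)", order_id, len(items))
        try:
            if not items:
                raise ValueError("order has no items")
            # ... real processing would go here ...
            log.info("order %s completed", order_id)
        except Exception:
            # exc_info=True keeps the traceback for whoever is on call.
            log.error("order %s failed", order_id, exc_info=True)
            raise

    process_order("A-1001", ["widget"])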

Zakaria ANBARI -> Enno 2 years ago
totally agree with you !
DevOps Reaper 2 years ago
I'm very disappointed to see this kind of rubbish. It's this type of egocentric thinking and generalization - the developer as an omniscient deity requiring worship and pampering - that prevents DevOps from being successful. Based on the tone and your perspective, it sounds like you've been doing DevOps wrong.

A developer role alone is not the linchpin that keeps DevOps humming - instead it's the respect that each team member holds for each discipline and each team member's area of expertise, the willingness of the entire team to own the product, feature delivery and operational stability end to end, to leverage each others skills and abilities, to not blame Dev or Ops or QA for failure, and to share knowledge.

There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles require to do it well at an Enterprise/World Class level.

If you're a developer doing QA and operations, you're doing it because you have to, but there should be no illusion that you're as good in alternate roles as someone trained and experienced in those disciplines. To pretend otherwise is a disservice to yourself and the organization that signs your paycheck. If you're in this situation and you'd prefer making a difference to spewing complaints, I would recommend talking to your manager and above about changing their skewed vision of DevOps. If they aren't open to communication, collaboration, experimentation and continual improvement, then their DevOps vision is dysfunctional and they're not supporting DevOps from the top down. Saying you're DevOps and not doing it is *almost* more egregious than saying the developer is at the top of a Totem Pole of existence.

spunky brewster -> DevOps Reaper a month ago
He prefaced it with 'crybabies please ignore'. It's his opinion - one that everyone but the lower-totem-pole people agrees with, so... agree to disagree. I also don't think being at the bottom of the totem pole is a big f'in deal. If you're getting paid, embrace it! There are so many other ways to enjoy life! The top-dog people have all the pressure and die young! 99% of the people on earth don't know the difference between one nerd and another. And other nerds are always going to be egomaniacs who will find some way to justify their own superiority no matter what your achievements. So this kind of posturing is a waste of time.
Pramod 2 years ago
Amen to that!!
carlivar 2 years ago
I think there's a problem with your definition of DevOps. It doesn't mean developers have to be "full-stack" or do ops stuff. And it doesn't mean "act like a startup." It simply means, at its basis, that Developers and Operations work well together and do not have any communication barriers. This is why I hate DevOps as a title or department, because DevOps is a culture.

Let's take your DentOps example. The dentist has 3 support staff. What if they rarely spoke to the dentist? What if they were on different floors of the building? What if the dentist wrote an email about how teeth should be cleaned and wasn't available to answer questions or willing to consider feedback? What if once in a while the dentist needed to understand enough about the basics of appointment scheduling to point out problems with the system? Maybe appointments are being scheduled too close together. Would the patients get backed up throughout the day because that's the secretary's problem? Of course not. Now we'd be getting into a more accurate analogy to DevOps. If anything a dentist's office is ALREADY "DentOps" and the whole point of DevOps is to make the dev/ops interaction work in a logical culture that other industries (like dentists) already use!

StillMan -> carlivar 2 years ago
I would tend to agree with some of that. Being able to troubleshoot network issues using monitoring tools like Fiddler is a good thing to be aware of. I can also see a lot of companies using it as a way to make one person do everything. Moreover, there are probably folks out there who perpetuate that behavior by taking up the machismo argument:

Saying that if I can do it, you should be able to do it too, or else you're not as good a developer as I am. I have never heard anyone outright claim this, but I've seen this attitude time and time again from ambitious analysts looking to get a leg up, a pay raise, and a way to template their values onto the rest of the team. One of the first things that you're taught as a dev is that you can't hope to know it all.

Your responsibility first and foremost as a developer is the stability and reliability of your code and the services that you provide. In some industries this is literally a matter of life and death (computers in your car, mission-critical medical systems). It doesn't work everyplace.

spunky brewster -> carlivar a month ago

I wouldn't want to pay a receptionist 200k a year like a dentist though. Learn to hire better receptionists. Even a moderately charming woman can create more customer loyalty, and cheaper, than the best dentist in the world. I want my dentist to keep quiet and have a steady hand. I want my receptionist to engage me and acknolwedge my existence.

I want my secretary to be a multitasking master. I want my dentist not to multitask at all - OUCH!

Ole Hauris Sørensen 2 years ago
Good points, I tend to agree. I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual levels is not sustainable, as you point out.

The full-stack DevOps team will have team members with primary skills in either of the traditional specialties, and will, over time, develop decent secondary skills. But the value is not in people constantly content switching - that actually kills efficiency. The value is in developers understanding and developing an open relationship with testing and operations - and vice versa. And this cooperation is inhibited by putting people in separate teams with conflicting goals. DevOps in practice is not a despecialization. It's bringing the specialists together.

ceposta -> Ole Hauris Sørensen 2 years ago

+1.

The more isolated or silo'd developers become, the less they realize what constitutes delivering software, and the more problems get pushed into the IT process of test/build/release/scale/monitor, etc. Writing code is a small fraction of that delivery process. I've written about the success of DevOps and microservices in a way that touches on this stuff, because they're highly related. The future success of devops/microservices/cloud/etc. isn't related to technology so much as to culture: http://blog.christianposta....

Thanks for the post!

Julio 2 years ago

Interesting points were raised. Solid arguments. I identified with this text and my current difficulties as a developer.

Cody 2 years ago
Great article, and you're definitely describing one form of dysfunctional organisation, where DevOps, Agile, Full Stack, and every other $2 word has been corrupted into a cost-cutting justification: cramming more work onto people who aren't skilled for it, who end up not having any time to do what they were hired as experts for!

But I'd also agree with other posters that it's a little developer centric. I'm a terrible programmer and a great DBA. I can tell you most programmers who try to be DBAs are equally terrible. It's definitely not "doing the job of the receptionist" 😄

And we shouldn't forget what DevOps is meant to be about; teams making sure nobody gets called at night to fix each other's messes. That means neither developers with shitty deployments straight to production nor operations letting the disks silently fill because "if it ain't C: it ain't our problem."

Zac Smith 6 months ago
I know of 0 developers that can manage a network of any appreciable scale.

In cloud and large enterprise networks, if there were a totem pole (which there isn't), using your methodology would place the dev under the network engineer: their software implements the protocol and configuration intent of the NE. Good thing the whole concept is a pile of rubbish. I think you fell into the trap you called out, which is thinking at limited scale.

spunky brewster -> Zac Smith a month ago
It's true. We can all create LANs at home, but I wouldn't dare f with a corporate network and risk shutting down Amazon for a day. Which seems to happen quite a bit... maybe they're DEVOPPING a bit too much.
David Rawk -> Zac Smith 6 months ago
I tend to agree.. they are not under, but beside.. Both require a heap of skill and that includes coding.. but vastly different code.
Wilmer 9 months ago
Jeff Knupp is to one side of the spectrum. DevOps Reaper is to the other side.

Enno is more attune to what is really going on. So I won't repeat any of those arguments.

However I will ask you to put me in a box. What am I?

I graduated as a Computer Engineer (hybrid between Electrical Engineering and Computer Science). I don't say that anymore as companies have no idea as to what that means. So I called myself a Digital Electronics and Software Engineer for a while. The repeated question was all too often: "So what are you, software or hardware?"
I spent my first few years working down from board design, writing VHDL and Verilog, to embedded software in C and C++, then algorithms in optimization with the CUDA framework in C, with C++ wrappers and C# for the logic tier. Then I worked another few years in particle physics with C++ compute engines with x86 assembly declarations for speed and C# for WPF UIs.

After that I went to work for a wind turbine company as system architect, where it was mostly embedded work: programming ARM Cortex microprocessors, high-power electronics controls, and custom service and diagnostics tools in C#. Plus real-time web-based dashboards with Angular, Bootstrap, and the like for a good-looking web app.

Nowadays I'm working with mobile-first web applications that have a massive backend to power them. It is mostly a .NET stack, from Entity Framework, to .NET WebAPI, to Angular-powered front ends. This company is not a start-up, but it is a small company. Therefore I wear the many hats. I introduced the new software life cycle, which includes continuous integration and continuous deployment. Yes, I manage build servers and build tools; I develop, I'm QA, I'm a tester, I'm a DBA, I'm the deployment and configuration manager.

If you are wondering, I have resorted to calling myself a full-stack developer. It has that edgy sound that companies like to hear. I'm still a young developer; I've only been developing for 10 years.

In my team we are all "Jacks of all Trades" and "Masters of Many". We switch tasks and hats because it is fun and it keeps everyone from getting bored/stuck. Our process is called "best practices that work for this team".

So, I think of myself as a software engineer. I think I'm a developer. I think I'm DevOps, I think I'm QA.

ישראל פרוכטר -> Wilmer 8 months ago
I join you in the lack of a title; we don't need those titles (only when HR people are involved, and then we need to kind of fake our persona anyhow...)
Matt King a year ago
Let's start with this: DevOps didn't come from startups. It came mainly from Boeing and a few other major blue-chip IT shops investing heavily in systems management technology around the turn of the century. The goal at the time was simply to change the ratio of servers to IT support personnel, along with the re-thinking and re-organizing of development and operations into one organization with one set of common goals. The 'wearing many hats' thing you discuss is a feature of startups, but that feature is independent of siloed or integrated organizations.

I prefer the 'sportzing' analogy of basketball and football. Football has specialist teams that are largely functionally independent because they focus on distinct goals. Basketball has specialist positions, but the whole team is focused on the same goals. I'm not saying one sport is better than the other. I am saying the basketball mentality works better in the IT environment. Delivering the product or service to the customer is the common goal that everyone should be thinking about, along with how the details of their job fit into that overall picture. It sounds to me like you are really saying "Hey, it's my job and only my job to think about how it all fits together and works".

Secondly, while it is pretty clear that the phrase 'full stack engineer' is about as useful as "Cloud Computing", your perspective that somehow developers are the 'top' of the tree able to do any job is very mistaken. There are key contributors from every specialty who have that ability, and more useful names for them are things like "10x", or "T-shaped". Again, you are describing a real situation, but correlating it with unrelated associations. It is just as likely, and just as valuable, to find an information architect who can also code, or a systems admin that can also diagnose database performance, or an electrician that can also hang sheetrock. Those people do fit your analogy of 'being on top', because they are not siloed and stovepiped into just their speciality.

The DevOps mindset fosters this way of thinking, instead of the old and outdated specialist way of thinking you are defending. Is it possible your emotional reaction is fear based against the possibility that your relative value will decrease if others start thinking outside their boxes?

Interesting to note that Agile also started at Boeing, but 10 years earlier. I live in the startup world of Seattle, but know my history and realize that much of what appears new is actually just 'new to you' (or me), and that most cutting-edge technology and thinking is just combining ideas from other industries in new ways.

BosnianDolphin 2 years ago

Agree on most points, but nobody needs a DBA - unless it is some massive project. DBA people should pick up new skills fast.

Safespace Scooter 2 years ago
The problem is that developers are trained to crank out code and hope that QA teams will find the problems, often without even knowing how to catch the holes themselves. DevOps trains people to think critically and do both. It isn't killing developers, it is making them look like noobs while phasing them out.
strangedays -> Safespace Scooter a year ago
Yeah, good luck with that attitude. Your company's gonna have a good ol' time finding and keeping new developer talent. Because as we all know, smart people love working with dummies. I'd love to see 'your QA' team work on our 'spatial collision algorithm' and make our devs "look like noobs". You sound like most middle-management schmucks.
Manachi 2 years ago
Fantastic article! I was going to start honing in on the points I particularly agree with but all of it is just spot on. Great post.
Ralli Soph 9 days ago
Funniest article so far on full stack. It's a harsh reality for devs, because we're asked to do everything and know everything, so how can you really believe QA or a DBA can do the job of someone like that? There is a crazy amount of hours a full-stack dev invests in acquiring that kind of knowledge, not to mention some people are also talented at their job. Imagine trying to tell the QA to do that? Maybe for a few hours someone can be a backup just in case something happens, but really it's like replacing the head surgeon.
spunky brewster a month ago
The best skill you can learn in your coding career is your next career. No one wants a 45-year-old coder.

I see so much time wasted learning every new thing when you should just be plugging away to get the job done, bank the $$, and move on. All your accumulated skills will be worthless in a decade or so, and your entire knowledge useless in two decades. My ability to turn a wrench is what's keeping me from the poor house. And I have an engineering degree from UIUC! I also don't mind. Think about a 100-hour week as a plumber with OT in a reasonably priced neighborhood, vs. a coder. Who do you think is making more? Now I'm not saying you can't survive into your 50's programming, but typically they get retired forcefully, and permanently... by a heart attack!

But rambling aside... the author makes a good point, and I think it is the future of big companies in tech. The current model is driven by temporary factors. Ideally you'd have a specialized workforce. But I think that as a programmer you are in constant fear of being obsolete, so you don't want to be pigeonholed. It's just not mathematically possible to have that 10,000-hour mastery in 50 different areas... unless you are Bill Murray in Groundhog Day.

hrmilo 3 months ago
A developer who sees himself at the top of a pyramid. Not surprising, given your myopic and egotistical view. I laugh at people who code a few SELECT statements and think they can fill the DBA role. HA HA HA. God, the arrogance. "Well, it worked on my machine." - how many sysadmins have heard this out of a developer's mouth? Unfortunately, projects get stuck supporting such issues because that very ego has led the developer too far down the road to turn back. They had no common sense or modesty to call on the knowledge of their Sys Ops team to help design the application. I interview job candidates all the time calling themselves full stack simply because they complement their programming language of choice with a mere smattering of knowledge in client-side technologies and can write a few SQL queries. Most developers have NO PERCEPTION of the myriad intricacies it takes to get an application from their unabated desktop with its FULL ADMIN perms and "unlimited resources", through a staging/QA environment, and eventually to the securely locked-down production system with limited, and perhaps shared or hosted, resources. Respect for your support teams, communication and coordination, and the knowledge that you do not know it all - THAT'S being Full Stack and DevOps, sir.
spunky brewster -> hrmilo a month ago
There's always that one query that no one can do in a way that takes less than 2 hours, until you pass it off to a real DBA... it's the 80/20 rule basically. I truly don't believe 'full stack' exists. It's an illusion. There's always something that suffers.

The real problem is smart people are in such demand we're forced to adapt to this tribal, pre-civilization hodgepodge. Once the industry matures, it'll disappear. Until then they will think they re-invented the wheel.

Ivan Gavrilyuk 6 months ago

I'm confused here. DevOps roles are strictly automation focused, at least according to all job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, know how to set up monitoring, logging and all that stuff usual sysadmin used to do but in the modern world. In fact I used to apply for DevOps roles but quickly changed my mind as it turned out no companies need a person wearing many hats, it has absolutely nothing to do with creating software. Am I wrong?

Mario Bisignani -> Ivan Gavrilyuk 4 months ago

It depends on what you mean by development skills. Have you ever tried to automate the deployment of a large web application? The scripts that automate the deployment of large, scalable web applications are pretty complex pieces of software, which require in-depth thinking and should follow all the important principles a good developer knows: component isolation, scalability, maintainability, extensibility, etc.
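
A minimal Python sketch of that point (every command here is an echo placeholder and all the step names are invented): even a toy deployment needs ordered steps, fail-fast error handling, and a rollback path, which is exactly the kind of design work the comment describes.

    import subprocess

    def run(cmd):
        # check=True makes the deploy stop at the first failing step.
        subprocess.run(cmd, check=True)

    def deploy(version):
        steps = [
            ("fetch artifact", ["echo", "fetch app-%s.tar.gz" % version]),
            ("stop service",   ["echo", "systemctl stop app"]),
            ("switch release", ["echo", "ln -sfn releases/%s current" % version]),
            ("start service",  ["echo", "systemctl start app"]),
        ]
        done = []
        try:
            for name, cmd in steps:
                run(cmd)
                done.append(name)
        except subprocess.CalledProcessError:
            # A real script would undo the completed steps in reverse order.
            print("deploy failed after %s; rolling back" % done)
            raise

    deploy("1.2.3")
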
Valentin Georgiev 10 months ago
1k% agree with this article!
TechZilla a year ago
Successful DevOps doesn't mean a full-stack developer does it all; that's only true for a broken company that succeeded despite bad organization. For example, Twitter's Dev-only culture is downright sick, and ONLY works because they are in the tech field. Mind you, I still believe personally that it works for them DESPITE its unbalanced structure. In other words, bad DevOps means the Dev has no extra resources and just more requirements - yeah, that sucks!...

BUT, on the flip side,

Infrastructure works with QA/Build to define supportable deployment standards; they've got to learn all the automation bits and practice using them. Now Devs have to package all their applications properly, in the formats supported by QA/Build's CI and repositories (that 'working just fine' install script definitely doesn't count). BUT the Devs get pre-made CI-ready examples and, if needed, code-migration assistance from the QA/Build team. Pretty soon they learn how to package that type of app, like a J2EE Maven EAR, or a Webdeploy to IIS... and the rest should be handled for them, as automatically as possible, by the proactive operations teams.

Make sense? This is how it's supposed to work. It sounds like you're left alone in a terrible Dev-only/Dev-heavy world. The key to DevOps that is great, and that everybody likes, vs. just more work, is having a very balanced workflow between the teams, and making sure the pass-off points are VERY well defined. Essentially it requires management that cuts up the responsibility properly, so the teams have a shared interest in collaborating. In a Dev-heavy organization, the Devs can just throw garbage over the wall, and operations has to react to constant problems... they start to hate each other and... Dev managers get the idea that they can cut out ops if they do "DevOps", so then they throw it all at you like right now.

Adi Chiru a year ago

I see in this post so much rubbish and narrow-mindedness, so much of the exact stuff that is killing companies of every type. In the last 10 years I have had many roles that required me, as a systems engineer, to come in and straighten out all kinds of really bad compromises developers made just to make stuff work.

A role never shows the level of intelligence or capability. I've seen so many situations in the last 10 years where smart people with the wrong attitude and awareness were too smart for anyone's good, and limited people still provided more value than very smart ones acting as if they were too smart to even have a conversation about anything.

This post is embarrassing for you, Jeff; I am sorry for you, man... you just don't get it!

Max Börebäck a year ago
A developer does not have to do the full stack; the developer can continue with development, but has to adapt some things for packaging, testing, and how the software is operated.
Operations can continue with operations, but has to know how things are built and packaged.
Developers and operations need to share things, like using the same application server, for example. Developers need to understand how the code is operated to make sure it is written in a proper way. Operations needs to adapt to the need for fast delivery and be able to support a controlled way of deploying into production daily.
Here is a complementary post I have on the topic:
http://bit.ly/1r3iVff

Peperud a year ago
Very much agree with you Jeff. I've been thinking along these lines for a while now...
Lenny Joseph a year ago
I will share my experience. I started off my career teaching programming, which included database programming (Oracle PL/SQL, SQL Server Transact-SQL); that gave me good insights into database internals, which landed me in the DBA world for the last 10 years. During these 10 years, working in technology companies regarded as top-notch, I have seen very smart developers writing excellent application code but missing out on writing optimized pieces to interact with the database. Hence, I think each job has a scale, and professionals of one group cannot do what the top professionals of another group can do. I have seen developers with fairly good database-internals knowledge, and I have seen DBAs writing code for their automation that compares well with features of some commercial database products like TOAD. So, generalizations like this do not hold.
ceposta a year ago
BTW.. "efficiency" is not the goal... "

DevOps and the Myth of Efficiency, Part I

http://blog.christianposta....

DG • a year ago
The idea that there is a hierarchy of usefulness is bunk. Most developers are horrible at operations because they dislike it. Most sysadmins and DBAs are horrible at coding because they dislike it. People gravitate to what interests them, and a disinterested person does a much poorer job than an interested one. DevOps aims to combine roles by removing barriers, but there are costs to quality that no one likes to talk about. Using your hierarchy example, most doctors could obtain their RN, but they would not make good nurses.
Lana Boltneva 2 years ago
So true! I suggest you also check these 6 best practices in DevOps: http://intersog.com/blog/ag...
Jonathan McAllister 2 years ago
This is an excellent article on the general concepts of DevOps and the DevOps movement. It helps to identify the cultural shifts required to facilitate proper DevOps implementations. I also write about DevOps: I authored a book on implementing CI, CD and DevOps-related functions within an organization, and it was recently published. The book is aptly titled Mastering Jenkins ( http://www.masteringjenkins... ) and aims to codify not only the architectural implementations and requirements of DevOps but also the cultural shift needed to properly advocate for the adoption of DevOps practices. Let me know what you think.
Chris Kavanagh 2 years ago
I agree. Although I'm not in the business (yet), I will be soon. What I've noticed just playing around with Vagrant and Chef, Puppet, and Ansible is the great amount of time it takes to master just one of these provisioning tools. I can't imagine being responsible for all the roles you spoke of in the article. How can one possibly master all of them, and be good at any of them?
Sarika Mehta 2 years ago
hmmm... users & business see one application... to them, how it was developed and deployed does not matter... IT is an enabler by definition... so DevOps is mostly about that: giving one view to the customer; quick changes, stable changes, a stable application...

Frankly, DevOps is not about developers or testers... it is about the right architecture, the right framework... developers/testers do what is in the script anyway... DevOps is just a new script to them.

For the right DevOps, you need the right framework and architecture for the whole of the program; you need architecture which is built end to end and not in silos...

Hanut Singh 2 years ago
Quite the interesting read. Having worked as a "Full Stack" Developer, I totally agree with you. Well done sir. My hat is tipped.
Masood 2 years ago
Software Developers write code that the business/customers use.
Test Developers write test code to test the SUT.
Release Developers write code to automate the release process.
Infrastructure Developers write code to create infrastructure automatically.
Performance Developers write code to performance-test the SUT.
Security Developers write code to scan the SUT for security issues.
Database Developers write code for the DB.

So which developer do you think DevOps is going to kill?

In today's TDD world, a developer (it could be any one of the above) needs to get out of their comfort zone to make sure they write testable, releasable, deployable, performant, security-compliant and maintainable code.

DevOps brings all these roles together to collaborate and deliver.

Daniel 2 years ago
You deserve an award.....
Mash -> Dick 2 years ago
Why wouldn't they be? What are the basic responsibilities that make for a passable DBA, and which of those responsibilities cannot be done by a good developer? Say a good developer has just average experience writing stored procs, analyzing query performance, creating (or choosing not to create, for performance reasons) indexes, constraints and triggers, configuring database access rights, setting up regular backups, doing regular maintenance (e.g. rebuilding indexes to avoid fragmentation)... just to name a few.

I'm sure there's several responsibilities that DBA's have that developers would have very little to no experience in, but we're talking about making for a passable DBA. Developers may not be as good at the job as someone who specializes in it for a living, but the author's wording seems to have been chosen very carefully.
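
For what it's worth, two of the routine tasks Mash lists – creating an index and checking query performance – look roughly like this sketch using Python's standard-library sqlite3 module (the orders table and its columns are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.executemany(
        "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
        [(i % 50, i * 1.5) for i in range(1000)],
    )

    # Before the index, the plan reports a full table scan.
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"):
        print(row)

    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # After the index, the same query is answered via an index search.
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"):
        print(row)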

SuperQ -> Mash a year ago

Yup, I see lots of people trying to defend the DBA as a thing, just like people keep trying to defend the traditional sysadmin as a thing. I started my career as a sysadmin in the 90s, but times have changed and I don't call myself a sysadmin anymore, because that's not what I do.

Now I'm a Systems Engineer/SRE. My mode of working isn't slamming software together, but engineering automation to do it for me.

But I also do QA, Data storage performance analysis, networking, and [have a] deep knowledge of the applications I support.

[May 15, 2017] 10 Things I Hate About DevOps

Notable quotes:
"... The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate ..."
"... "The Copenhagen interpretation certainly applies to DevOps" ..."
"... "I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?" ..."
"... Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". ..."
May 15, 2017 | www.upguard.com
DevOps and I sort of have a love/hate relationship. DevOps is near and dear to our heart here at UpGuard and there are plenty of things that I love about it . Love it or hate it, there is little doubt that it is here to stay. I've enjoyed a great deal of success thanks to agile software development and DevOps methods, but here are 10 things I hate about DevOps!

#1 Everyone thinks it's about Automation.

#2 "True" DevOps apparently have no processes - because DevOps takes care of that.

#3 The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate:

"The Copenhagen interpretation certainly applies to DevOps"

"I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?"

#4 Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". How about talking about the business guys?

#5 Heroes abound with tragic statements like "It took 3 days to automate everything.. it's great now!" - Clearly these people have never worked in a serious enterprise.

#6 No-one talks about Automation failure... it's everywhere. Listen for the words "Pockets of Automation". Adoption of technology, education and adaptation of process is rarely mentioned (or measured).

#7 People constantly pointing to Etsy, Facebook & Netflix as DevOps. Let's promote the stories of companies that better represent the market at large.

#8 Tech hipsters discounting, or underestimating, Windows sysadmins. There are a lot of them and they better represent the Enterprise than many of the higher profile blowhards.

#9 The same hipsters saying their threads have filled up with DevOps tweets where there were none before.

#10 I've never heard of a Project Manager taking on DevOps. I intend to find one.

What do you think - did I miss anything? Rants encouraged ;-) Please add your comments.

[May 15, 2017] Why I hate DevOps

Notable quotes:
"... DevOps. The latest software development fad. ..."
"... Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. ..."
"... The problem is we now have teams saying they're doing DevOps. By that they mean is they make small, frequent, releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running. ..."
"... Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test, release approach. The problem is when you call something DevOps anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process. ..."
May 15, 2017 | testingthemind.wordpress.com
DevOps. The latest software development fad. Now you can be Agile, use Continuous Delivery, and believe in DevOps.

Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. For example, frequent releases more or less have to be small. Small releases are easier to understand, which in turn increases our chances of building good features, but also our chances of testing for the right risks. If you do run into problems during testing then it's pretty easy to work out the change that caused them, reducing the time to debug and fix issues.

Unfortunately, along with all the good parts of CD we have a slight problem. The book focused on the areas which were considered to be the most broken, and unfortunately that led to the original CD description implying "Done" meant the code was shipped to production. As anyone who has ever worked on software will know, running code in production also requires a fair bit of work.

So, teams started adopting CD, but no one was talking about how the Ops team fitted into the release cycle. Everything, from knowing when production systems were in trouble to reliable release systems, was just assumed to be fully functional and in no need of explanation.

To try to plug the gap DevOps rose up.

Now, just to make things even more confusing. Dave Farley later said that not talking about Ops was an omission and CD does include the entire development and release cycle, including running in production. So DevOps and CD have some overlap there.

DevOps does take a slightly different angle on the approach than CD. The emphasis for DevOps is on the collaboration rather than the process. Silos should be actively broken down to help developers understand systems well enough to be able to write good, robust and scalable code.

So far so good.

The problem is we now have teams saying they're doing DevOps. By that they mean they make small, frequent releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running.

Sounds good. So what's the problem?

Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test, release approach. The problem is when you call something DevOps anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process.

Seriously, go and ask your designers what they think of DevOps. Or how about your testers. Or Product Managers. Or Customer Support.

And that's a problem.

We've managed to take something that is completely dependent on collaboration, and trust, and name it in a way that excludes a significant number of people. All of the name suggestions that arise when you mention this are just ridiculous. DevTestOps? BusinessDevTestOps? DesignDevOps? Aside from just being stupid names, these continue to exclude anyone who doesn't have these words in their title.

So do I hate DevOps? Well no, not the practice. I think we should always be thinking about how things will actually work in production. We need an Ops team to help us do that so it makes total sense to have them involved in the process. Just take care with that name.

Is there a solution? Well, in my mind we're still talking about collaboration above all else. Thinking about CD as "Delivery on demand" also makes more sense to me. We, the whole team, should be ready to deliver working software to the customer when they want it. By being aware of the confusion and exclusion that some of these names create, we can hopefully bring everyone into the project before it's too late.

[May 15, 2017] Hype Cycle for DevOps, 2016

May 15, 2017 | www.gartner.com

DevOps initiatives include a range of technologies and methodologies spanning the software delivery process. IT leaders and DevOps practitioners should proactively understand the readiness and capabilities of technology to identify the most appropriate choices for their specific DevOps initiative.

[May 15, 2017] The Phoenix Project (novel)

May 15, 2017 | en.wikipedia.org

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win (2013) is the third book by Gene Kim. The business novel tells the story of an IT manager who has ninety days to rescue an over-budget and late IT initiative, code-named The Phoenix Project. The book was co-authored by Kevin Behr and George Spafford and published by IT Revolution Press in January 2013.[1][2]

Background

The novel is thought of as the modern day version of The Goal by Eliyahu M. Goldratt.[3] The novel describes the problems that almost every IT organization faces, and then shows the practices of how to solve the problems, improve the lives of those who work in IT and be recognized for helping the business win.[1] The goal of the book is to show that a truly collaborative approach between IT and business is possible.[4]

Synopsis

The novel tells the story of Bill, the IT manager at Parts Unlimited.[4][5][6] The company's new IT initiative, code named Phoenix Project, is critical to the future of Parts Unlimited, but the project is massively over budget and very late. The CEO wants Bill to report directly to him and fix the mess in ninety days or else Bill's entire department will be outsourced. With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.[7][8]

Reception

The book has been called a "must read" for IT professionals and quickly reached #1 in its Amazon.com categories.[9][10] The Phoenix Project was featured on 800 CEO Reads Top 25: What Corporate America Is Reading for June, 2013.[11] InfoQ stated, "This book will resonate at one point or another with anyone who's ever worked in IT."[4] Jeremiah Shirk, Integration & Infrastructure Manager at Kansas State University, said of the book: "Some books you give to friends, for the joy of sharing a great novel. Some books you recommend to your colleagues and employees, to create common ground. Some books you share with your boss, to plant the seeds of a big idea. The Phoenix Project is all three."[4] Other reviewers were more skeptical, including the IT Skeptic "Fictionalising allows you to paint an idealised picture, and yet make it seem real, plausible... Sorry but it is all too good to be true... none of the answers are about people or culture or behaviour. They're about tools and techniques and processes." [12] Jez Humble (author of Continuous Delivery) said "unlike real life, there aren't many experiments in the book that end up making things worse..."

[May 15, 2017] 8 DevOps Myths Debunked - DZone DevOps

May 15, 2017 | dzone.com

In a recent webinar, XebiaLabs VP of DevOps Strategy Andrew Phillips sat down with Atos Global Thought Leader in DevOps Dick van der Sar to separate the facts from the fiction. Their findings: most myths come attached with a small piece of fact and vice versa.

1. DevOps Is Developers Doing Operations: Myth

An integral part of DevOps' automation component involves a significant amount of code. This causes people to believe Developers do most of the heavy lifting in the equation. In reality, because of the amount of Infrastructure as Code, Ops begins to look a lot like Dev.

2. Projects Are Dead: Myth

Projects are an ongoing process of evolving systems and failures. To think they can just be handed off to maintenance forever after completion is simply incorrect. This is only true for tightly scoped software needs, including systems built for specific events. When you adopt DevOps and Agile, you are replacing traditional project-based approaches with a focus on product lifecycles.

3. DevOps Doesn't Work in Complex Environments: Myth

DevOps is actually made to thrive in complex environments. The only instance in which it doesn't work is when unrealistic and/or inappropriate goals are set for the enterprise. Complex environments typically suffer due to lack of communication about the state of, and changes to, the interconnected systems. DevOps, on the other hand, encourages communication and collaboration that prevent these issues from arising.

4. It's Hard to Sell DevOps to the Business: Myth

The benefits of DevOps are closely tied to benefiting the business. However, that's hard to believe when you pitch adopting DevOps as a plan to "stop working on features and sink a lot of your money into playing with shiny new IT tech." The truth is, DevOps is going to impact the entire enterprise. This may be the source of resistance, but as long as you find the balance between adoption and disruption, you will experience a successful transition.

5. Agile Is for Lazy Engineers: Myth

DevOps prides itself on eliminating unnecessary overhead. Through automation, your enterprise can see a reduction in documentation, meetings, and even manual tasks, giving team members more time to focus on more important priorities. You know your team is running successfully if their productivity increases.

Nonetheless, DevOps does not come without its own form of "boring" processes, including test plans or code audits. Agile may eliminate waste but that doesn't include the tedious yet necessary aspects.

6. If You Can't Code, You Have No Chance in DevOps: Fact

This is only a fact because the automation side of DevOps is all Infrastructure as Code (IaC). This typically requires software development skills such as modularization, automated testing, and Continuous Integration (CI). Regardless of scale, automating anything will require, at the very least, software development skills.
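
To make that point concrete, here is a tiny, hypothetical example of treating infrastructure as code, complete with the kind of unit test one would run in CI; the firewall-rule format is invented purely for illustration:

    # A config generator treated like any other code: validated and tested
    def render_firewall_rule(port: int, source: str) -> str:
        if not 0 < port < 65536:
            raise ValueError(f"invalid port: {port}")
        return f"allow tcp from {source} to any port {port}"

    # The matching unit test, runnable in any CI pipeline
    def test_render_firewall_rule() -> None:
        assert render_firewall_rule(443, "10.0.0.0/8") == \
            "allow tcp from 10.0.0.0/8 to any port 443"

    if __name__ == "__main__":
        test_render_firewall_rule()
        print("ok")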

7. Managers Disappear: Myth

Rather than disappear, managers take a different role with DevOps. In fact, they are still a necessity to the team. Managers are tasked with the responsibility of keeping the entire DevOps team on track. Classic management tasks may seem to disappear but only because the role is changing to be more focused on empowerment.

8. DevOps or Die: Fact!

Many of today's market leaders already have some sort of advanced DevOps structure in place. As industries incorporate IT further into their business, we will begin to see DevOps as a basic necessity to the modern business and those that can't adapt will simply fall behind.

That being said, you shouldn't think of DevOps as the magic invincibility potion that will keep your enterprise failure free. Rather, DevOps can prevent many types of failure, but there will always be environment-specific threats unique to every organization that DevOps can't rescue you from.

[May 15, 2017] DevOps Fact vs Fiction

May 15, 2017 | vassit.co.uk
Out of these misunderstandings several common myths have been created. Acceptance of these myths misleads business further.

Here are some of the most common myths and the facts that debunk them.

Myth 1: DevOps needs agile.

Although DevOps and agile are terms frequently used together, they are a long way away from being synonymous with one another. Agile development refers to a method of software delivery that builds software incrementally, whereas DevOps refers not only to a method of delivery but to a culture, which, when adopted, results in many business benefits, including faster software delivery.

DevOps processes can help to complement agile development, but it is not reliant on agile and can support a range of operation models.

For optimum results, full adoption of the DevOps philosophy is necessary.

Myth 2: DevOps can't work with legacy.

DevOps is often regarded as a modern concept that helps forward-thinking businesses innovate. Although this is true, it can also help those organisations with long-established, standard IT practices. In fact, with legacy applications there are usually big advantages to DevOps adoption.

Managing legacy care while bringing new software to market quickly, blending stability and agility, is a frequently encountered problem in this new era of digital transformation. Bi-modal IT is an approach where Mode 1 refers to legacy systems focussed on stability, and Mode 2 refers to agile IT focussed on rapid application delivery. DevOps principles are often included exclusively within Mode 2, but automation and collaboration can also be used with success within Mode 1 to increase delivery speed whilst ensuring stability.

Myth 3: DevOps is only for continuous delivery.

DevOps doesn't (necessarily) imply continuous delivery. The aim of a DevOps culture is to increase the delivery frequency of an organisation, often from quarterly/monthly to daily releases or more, and improve their ability to respond to changes in the market.

While continuous delivery relies heavily on automation and is aimed at agile and lean thinking organisations, unlike DevOps it is not reliant on a shared culture which enhances collaboration. Gartner summed up the distinction with a report that stated that: "DevOps is not a market, but a tool-centric philosophy that supports a continuous delivery value chain."

Myth 4: DevOps requires new tools.

As with the implementation of any new concept or idea, a common misconception about DevOps adoption is that new toolsets and skills are required. Though the provision of appropriate and relevant tools can aid adoption, organisations are by no means required to replace the tools and processes they use to produce software.

DevOps enables organisations to deliver new capabilities more easily, and bring new software into production more rapidly in order to respond to market changes. It is not strictly reliant on new tools to get this job done.

Myth 5: DevOps is a skill.

The rapid growth of the DevOps movement has resulted in huge demand for professionals who are skilled within the methodology. However, this fact is often misconstrued to suggest that DevOps is itself a skill – this is not the case.

DevOps is a culture – one that needs to be fully adopted throughout an entire organisation for optimum results, and one that is best supported with appropriate and relevant tools.

Myth 6: DevOps is software.

Understanding that DevOps adoption can be better facilitated with software is important; however, perhaps more important is understanding that they are not one and the same. Although it is true that there is a significant amount of DevOps software available on the market today, purchasing a specific ad-hoc DevOps product, or even suite of products, will not make your business 'DevOps'.

The DevOps methodology is the communication, collaboration and automation of your development and operations functions, and as described above, is required to be adopted by an entire organisation to achieve optimum results. The software and tools available will undoubtedly reduce the strain of adoption on your business but conscious adoption is required for your business to fully reach the potential that DevOps offers.

Conclusion

Like any new and popular term, people have somewhat confused and sometimes contradictory or partial impressions of what DevOps is and how it works.

DevOps is a philosophy which enables businesses to automate their processes and work more collaboratively to achieve a common goal and deliver software more rapidly.


[Mar 20, 2017] It sucks to be right - blog dot lusis

These are just a few things that jumped out at me (and annoyed me):

However, there are teams at Netflix that do traditional Operations, and teams that do DevOps as well.

Ops is ops is ops. No matter what you call it, Operations is operations.

Notice that we didn't use the typical DevOps tools Puppet or Chef to create builds at runtime

There's no such thing as a "DevOps tool". People were using CFengine, Puppet and Chef long before DevOps was even a term. These are configuration management tools. In fact Adrian has even said they use Puppet in their legacy datacenter, yet he seems to make the distinction between the ops guys there and the "devops" guys (whatever those are).

There is no ops organization involved in running our cloud…

Just because you outsourced it, doesn't mean it doesn't exist. Oh and it's not your cloud. It's Amazon's.

Reading between the lines

Actually this doesn't take much reading between the lines. It's out there in plain sight:

In reality we had the usual complaints about how long it took to get new capacity, the lack of consistency across supposedly identical systems, and failures in Oracle, in the SAN and the networks, that took the site down too often for too long.

We tried bringing in new ops managers, and new engineers, but they were always overwhelmed by the fire fighting needed to keep the current systems running.

This is largely because the people making decisions are development managers, who have been burned repeatedly by configuration bugs in systems that were supposed to be identical.

The developers used to spend hours a week in meetings with Ops discussing what they needed, figuring out capacity forecasts and writing tickets to request changes for the datacenter.

There is no ops organization involved in running our cloud, no need for the developers to interact with ops people to get things done, and less time spent actually doing ops tasks than developers would spend explaining what needed to be done to someone else.

I'm glad to see this spelled out in such detail. This is what I've been telling people semi-privately for a while now. Because Netflix had such a terrible experience with its operations team, they went to the opposite extreme and disintermediated them.

Imagine you were scared as a kid by a clown. Now imagine you have kids of your own. You hate clowns. You had a bad experience with clowns. But it's your kid's birthday party, so here you are making balloon animals, telling jokes and doing silly things to entertain the kids. Just because you aren't wearing makeup doesn't make you any less of a clown. You're doing clown shit. Through the eyes of the kids, you're a clown. Deal with it. Netflix is still doing operations. What should be telling and frightening to operations teams everywhere is this:

The Netflix response to poorly run operations that can't service the business is going to become the norm and not the exception. Evolve or die.

Please note that I don't lay all the blame on the Netflix operations team. I would love to hear the flipside of this story from someone who was there originally when the streaming initiative started. It would probably be full of stories we've heard before - no resources, misalignment of incentives and a whole host of others.

Adrian, thank you for writing the blog post. I hope it serves as a warning to those who come after. Hopefully someday you'll be able to see a clown again and not get scared ;)

[Mar 20, 2017] What we mean by "operations," and how it's changed over the years by Mike Loukides

June 7, 2012

Adrian Cockcroft's article about NoOps at Netflix ignited a controversy that has been smouldering for some months. John Allspaw's detailed response to Adrian's article makes a key point: What Adrian described as "NoOps" isn't really. Operations doesn't go away. Responsibilities can, and do, shift over time, and as they shift, so do job descriptions. But no matter how you slice it, the same jobs need to be done, and one of those jobs is operations. What Adrian is calling NoOps at Netflix isn't all that different from Operations at Etsy. But that just begs the question: What do we mean by "operations" in the 21st century? If NoOps is a movement for replacing operations with something that looks suspiciously like operations, there's clearly confusion. Now that some of the passion has died down, it's time to get to a better understanding of what we mean by operations and how it's changed over the years.

At a recent lunch, John noted that back in the dawn of the computer age, there was no distinction between dev and ops. If you developed, you operated. You mounted the tapes, you flipped the switches on the front panel, you rebooted when things crashed, and possibly even replaced the burned out vacuum tubes. And you got to wear a geeky white lab coat. Dev and ops started to separate in the '60s, when programmer/analysts dumped boxes of punch cards into readers, and "computer operators" behind a glass wall scurried around mounting tapes in response to IBM JCL. The operators also pulled printouts from line printers and shoved them in labeled cubbyholes, where you got your output filed under your last name.

The arrival of minicomputers in the 1970s and PCs in the '80s broke down the wall between mainframe operators and users, leading to the system and network administrators of the 1980s and '90s. That was the birth of modern "IT operations" culture. Minicomputer users tended to be computing professionals with just enough knowledge to be dangerous. (I remember when a new director was given the root password and told to "create an account for yourself" … and promptly crashed the VAX, which was shared by about 30 users). PC users required networks; they required support; they required shared resources, such as file servers and mail servers. And yes, BOFH ("Bastard Operator from Hell") serves as a reminder of those days. I remember being told that "no one" else is having the problem you're having - and not getting beyond it until at a company meeting we found that everyone was having the exact same problem, in slightly different ways. No wonder we want ops to disappear. No wonder we wanted a wall between the developers and the sysadmins, particularly since, in theory, the advent of the personal computer and desktop workstation meant that we could all be responsible for our own machines.

But somebody has to keep the infrastructure running, including the increasingly important websites. As companies and computing facilities grew larger, the fire-fighting mentality of many system administrators didn't scale. When the whole company runs on one 386 box (like O'Reilly in 1990), mumbling obscure command-line incantations is an appropriate way to fix problems. But that doesn't work when you're talking hundreds or thousands of nodes at Rackspace or Amazon. From an operations standpoint, the big story of the web isn't the evolution toward full-fledged applications that run in the browser; it's the growth from single servers to tens of servers to hundreds, to thousands, to (in the case of Google or Facebook) millions. When you're running at that scale, fixing problems on the command line just isn't an option. You can't afford letting machines get out of sync through ad-hoc fixes and patches. Being told "We need 125 servers online ASAP, and there's no time to automate it" (as Sascha Bates encountered) is a recipe for disaster.

The response of the operations community to the problem of scale isn't surprising. One of the themes of O'Reilly's Velocity Conference is "Infrastructure as Code." If you're going to do operations reliably, you need to make it reproducible and programmatic. Hence virtual machines to shield software from configuration issues. Hence Puppet and Chef to automate configuration, so you know every machine has an identical software configuration and is running the right services. Hence Vagrant to ensure that all your virtual machines are constructed identically from the start. Hence automated monitoring tools to ensure that your clusters are running properly. It doesn't matter whether the nodes are in your own data center, in a hosting facility, or in a public cloud. If you're not writing software to manage them, you're not surviving.
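
To make "Infrastructure as Code" concrete, here is a toy sketch of the core pattern those tools share: desired state declared as data, plus an idempotent converge step that makes reality match it. This is a deliberately minimal stand-in; Puppet, Chef and friends implement the same idea at far greater depth:

    from pathlib import Path

    # Desired state declared as data (paths chosen to be safe to run anywhere)
    DESIRED_FILES = {
        "/tmp/demo-etc/motd": "Welcome to a managed host\n",
        "/tmp/demo-etc/app.conf": "workers = 4\nlog_level = info\n",
    }

    def converge(desired):
        """Bring files to the declared state; idempotent, reports what changed."""
        changes = []
        for path, content in desired.items():
            p = Path(path)
            p.parent.mkdir(parents=True, exist_ok=True)
            if not p.exists() or p.read_text() != content:
                p.write_text(content)
                changes.append(path)
        return changes

    if __name__ == "__main__":
        print("changed:", converge(DESIRED_FILES))  # second run prints: changed: []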

Furthermore, as we move further and further away from traditional hardware servers and networks, and into a world that's virtualized on every level, old-style system administration ceases to work. Physical machines in a physical machine room won't disappear, but they're no longer the only thing a system administrator has to worry about. Where's the root disk drive on a virtual instance running at some colocation facility? Where's a network port on a virtual switch? Sure, system administrators of the '90s managed these resources with software; no sysadmin worth his salt came without a portfolio of Perl scripts. The difference is that now the resources themselves may be physical, or they may just be software; a network port, a disk drive, or a CPU has nothing to do with a physical entity you can point at or unplug. The only effective way to manage this layered reality is through software.

So infrastructure had to become code. All those Perl scripts show that it was already becoming code as early as the late '80s; indeed, Perl was designed as a programming language for automating system administration. It didn't take long for leading-edge sysadmins to realize that handcrafted configurations and non-reproducible incantations were a bad way to run their shops. It's possible that this trend means the end of traditional system administrators, whose jobs are reduced to racking up systems for Amazon or Rackspace. But that's only likely to be the fate of those sysadmins who refuse to grow and adapt as the computing industry evolves. (And I suspect that sysadmins who refuse to adapt swell the ranks of the BOFH fraternity, and most of us would be happy to see them leave.) Good sysadmins have always realized that automation was a significant component of their job and will adapt as automation becomes even more important. The new sysadmin won't power down a machine, replace a failing disk drive, reboot, and restore from backup; he'll write software to detect a misbehaving EC2 instance automatically, destroy the bad instance, spin up a new one, and configure it, all without interrupting service. With automation at this level, the new "ops guy" won't care if he's responsible for a dozen systems or 10,000. And the modern BOFH is, more often than not, an old-school sysadmin who has chosen not to adapt.
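
A minimal sketch of that "detect, destroy, replace" loop, using boto3 (the AWS SDK for Python) and assuming AWS credentials are already configured; the AMI ID and instance type below are placeholders, and a production version would also handle load-balancer registration, retries and alerting:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def replace_impaired_instances(ami_id="ami-0123456789abcdef0"):
        """Terminate instances AWS reports as impaired and launch replacements."""
        status = ec2.describe_instance_status()
        for s in status["InstanceStatuses"]:
            if s["InstanceStatus"]["Status"] == "impaired":
                bad_id = s["InstanceId"]
                ec2.terminate_instances(InstanceIds=[bad_id])  # destroy the bad instance
                ec2.run_instances(ImageId=ami_id, InstanceType="t3.micro",
                                  MinCount=1, MaxCount=1)      # spin up a new one
                print(f"replaced {bad_id}")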

James Urquhart nails it when he describes how modern applications, running in the cloud, still need to be resilient and fault tolerant, still need monitoring, still need to adapt to huge swings in load, etc. But he notes that those features, formerly provided by the IT/operations infrastructures, now need to be part of the application, particularly in "platform as a service" environments. Operations doesn't go away, it becomes part of the development. And rather than envision some sort of uber developer, who understands big data, web performance optimization, application middleware, and fault tolerance in a massively distributed environment, we need operations specialists on the development teams. The infrastructure doesn't go away - it moves into the code; and the people responsible for the infrastructure, the system administrators and corporate IT groups, evolve so that they can write the code that maintains the infrastructure. Rather than being isolated, they need to cooperate and collaborate with the developers who create the applications. This is the movement informally known as "DevOps."

Amazon's EBS outage last year demonstrates how the nature of "operations" has changed. There was a marked distinction between companies that suffered and lost money, and companies that rode through the outage just fine. What was the difference? The companies that didn't suffer, including Netflix, knew how to design for reliability; they understood resilience, spreading data across zones, and a whole lot of reliability engineering. Furthermore, they understood that resilience was a property of the application, and they worked with the development teams to ensure that the applications could survive when parts of the network went down. More important than the flames about Amazon's services are the testimonials of how intelligent and careful design kept applications running while EBS was down. Netflix's ChaosMonkey is an excellent, if extreme, example of a tool to ensure that a complex distributed application can survive outages; ChaosMonkey randomly kills instances and services within the application. The development and operations teams collaborate to ensure that the application is sufficiently robust to withstand constant random (and self-inflicted!) outages without degrading.
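
In the spirit of ChaosMonkey (a toy sketch, not Netflix's actual implementation), the core idea fits in a few lines of boto3: pick one random running instance from an opted-in group and terminate it, forcing teams to build services that survive the loss of any single node. The tag name is invented:

    import random
    import boto3

    ec2 = boto3.client("ec2")

    def unleash_monkey(group_tag="chaos-enabled"):
        """Terminate one random running instance tagged as opted in to chaos."""
        reservations = ec2.describe_instances(
            Filters=[{"Name": f"tag:{group_tag}", "Values": ["true"]},
                     {"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]
        instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if instances:
            victim = random.choice(instances)
            ec2.terminate_instances(InstanceIds=[victim])
            print(f"chaos monkey terminated {victim}")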

On the other hand, during the EBS outage, nobody who wasn't an Amazon employee touched a single piece of hardware. At the time, JD Long tweeted that the best thing about the EBS outage was that his guys weren't running around like crazy trying to fix things. That's how it should be. It's important, though, to notice how this differs from operations practices 20, even 10 years ago. It was all over before the outage even occurred: The sites that dealt with it successfully had written software that was robust, and carefully managed their data so that it wasn't reliant on a single zone. And similarly, the sites that scrambled to recover from the outage were those that hadn't built resilience into their applications and hadn't replicated their data across different zones.

In addition to this redistribution of responsibility, from the lower layers of the stack to the application itself, we're also seeing a redistribution of costs. It's a mistake to think that the cost of operations goes away. Capital expense for new servers may be replaced by monthly bills from Amazon, but it's still cost. There may be fewer traditional IT staff, and there will certainly be a higher ratio of servers to staff, but that's because some IT functions have disappeared into the development groups. The boundary is fluid, but that's precisely the point. The task - providing a solid, stable application for customers - is the same. The locations of the servers on which that application runs, and how they're managed, are all that changes.

One important task of operations is understanding the cost trade-offs between public clouds like Amazon's, private clouds, traditional colocation, and building their own infrastructure. It's hard to beat Amazon if you're a startup trying to conserve cash and need to allocate or deallocate hardware to respond to fluctuations in load. You don't want to own a huge cluster to handle your peak capacity but leave it idle most of the time. But Amazon isn't inexpensive, and a larger company can probably get a better deal taking its infrastructure to a colocation facility. A few of the largest companies will build their own datacenters. Cost versus flexibility is an important trade-off; scaling is inherently slow when you own physical hardware, and when you build your data centers to handle peak loads, your facility is underutilized most of the time. Smaller companies will develop hybrid strategies, with parts of the infrastructure hosted on public clouds like AWS or Rackspace, part running on private hosting services, and part running in-house. Optimizing how tasks are distributed between these facilities isn't simple; that is the province of operations groups. Developing applications that can run effectively in a hybrid environment: that's the responsibility of developers, with healthy cooperation with an operations team.

The use of metrics to monitor system performance is another respect in which system administration has evolved. In the early '80s or early '90s, you knew when a machine crashed because you started getting phone calls. Early system monitoring tools like HP's OpenView provided limited visibility into system and network behavior but didn't give much more information than simple heartbeats or reachability tests. Modern tools like DTrace provide insight into almost every aspect of system behavior; one of the biggest challenges facing modern operations groups is developing analytic tools and metrics that can take advantage of the data that's available to predict problems before they become outages. We now have access to the data we need, we just don't know how to use it. And the more we rely on distributed systems, the more important monitoring becomes. As with so much else, monitoring needs to become part of the application itself. Operations is crucial to success, but operations can only succeed to the extent that it collaborates with developers and participates in the development of applications that can monitor and heal themselves.

Success isn't based entirely on integrating operations into development. It's naive to think that even the best development groups, aware of the challenges of high-performance, distributed applications, can write software that won't fail. On this two-way street, do developers wear the beepers, or IT staff? As Allspaw points out, it's important not to divorce developers from the consequences of their work since the fires are frequently set by their code. So, both developers and operations carry the beepers. Sharing responsibilities has another benefit. Rather than finger-pointing post-mortems that try to figure out whether an outage was caused by bad code or operational errors, when operations and development teams work together to solve outages, a post-mortem can focus less on assigning blame than on making systems more resilient in the future. Although we used to practice "root cause analysis" after failures, we're recognizing that finding out the single cause is unhelpful. Almost every outage is the result of a "perfect storm" of normal, everyday mishaps. Instead of figuring out what went wrong and building procedures to ensure that something bad can never happen again (a process that almost always introduces inefficiencies and unanticipated vulnerabilities), modern operations designs systems that are resilient in the face of everyday errors, even when they occur in unpredictable combinations.

In the past decade, we've seen major changes in software development practice. We've moved from various versions of the "waterfall" method, with interminable up-front planning, to "minimum viable product," continuous integration, and continuous deployment. It's important to understand that the waterfall and other methodologies of the '80s aren't "bad ideas" or mistakes. They were perfectly adapted to an age of shrink-wrapped software. When you produce a "gold disk" and manufacture thousands (or millions) of copies, the penalties for getting something wrong are huge. If there's a bug, you can't fix it until the next release. In this environment, a software release is a huge event. But in this age of web and mobile applications, deployment isn't such a big thing. We can release early, and release often; we've moved from continuous integration to continuous deployment. We've developed techniques for quick resolution in case a new release has serious problems; we've mastered A/B testing to test releases on a small subset of the user base.

All of these changes require cooperation and collaboration between developers and operations staff. Operations groups are adopting, and in many cases leading, the effort to implement these changes. They're the specialists in resilience, in monitoring, in deploying changes and rolling them back. And the many attendees, hallway discussions, talks, and keynotes at O'Reilly's Velocity conference show us that they are adapting. They're learning about adopting approaches to resilience that are completely new to software engineering; they're learning about monitoring and diagnosing distributed systems, doing large-scale automation, and debugging under pressure. At a recent meeting, Jesse Robbins described scheduling EMT training sessions for operations staff so that they understood how to handle themselves and communicate with each other in an emergency. It's an interesting and provocative idea, and one of many things that modern operations staff bring to the mix when they work with developers.

What does the future hold for operations? System and network monitoring used to be exotic and bleeding-edge; now, it's expected. But we haven't taken it far enough. We're still learning how to monitor systems, how to analyze the data generated by modern monitoring tools, and how to build dashboards that let us see and use the results effectively. I've joked about "using a Hadoop cluster to monitor the Hadoop cluster," but that may not be far from reality. The amount of information we can capture is tremendous, and far beyond what humans can analyze without techniques like machine learning.

Likewise, operations groups are playing a huge role in the deployment of new, more efficient protocols for the web, like SPDY. Operations is involved, more than ever, in tuning the performance of operating systems and servers (even ones that aren't under our physical control); a lot of our "best practices" for TCP tuning were developed in the days of ISDN and 56 Kbps analog modems, and haven't been adapted to the reality of Gigabit Ethernet, OC48* fiber, and their descendants. Operations groups are responsible for figuring out how to use these technologies (and their successors) effectively. We're only beginning to digest IPv6 and the changes it implies for network infrastructure. And, while I've written a lot about building resilience into applications, so far we've only taken baby steps. There's a lot there that we still don't know. Operations groups have been leaders in taking best practices from older disciplines (control systems theory, manufacturing, medicine) and integrating them into software development.

And what about NoOps? Ultimately, it's a bad name, but the name doesn't really matter. A group practicing "NoOps" successfully hasn't banished operations. It's just moved operations elsewhere and called it something else. Whether a poorly chosen name helps or hinders progress remains to be seen, but operations won't go away; it will evolve to meet the challenges of delivering effective, reliable software to customers. Old-style system administrators may indeed be disappearing. But if so, they are being replaced by more sophisticated operations experts who work closely with development teams to get continuous deployment right; to build highly distributed systems that are resilient; and yes, to answer the pagers in the middle of the night when EBS goes down. DevOps.

Related:

Adrian Cockcroft's Blog Ops, DevOps and PaaS (NoOps) at Netflix

March 19, 2012

Adrian Cockcroft

There has been a sometimes heated discussion on twitter about the term NoOps recently, and I've been quoted extensively as saying that NoOps is the way developers work at Netflix. However, there are teams at Netflix that do traditional Operations, and teams that do DevOps as well. To try and clarify things I need to explain the history and current practices at Netflix in chunks of more than 140 characters at a time.

When I joined Netflix about five years ago, I managed a development team, building parts of the web site. We also had an operations team who ran the systems in the single datacenter that we deployed our code to. The systems were high end IBM P-series virtualized machines with storage on a virtualized Storage Area Network. The idea was that this was reliable hardware with great operational flexibility so that developers could assume low failure rates and concentrate on building features. In reality we had the usual complaints about how long it took to get new capacity, the lack of consistency across supposedly identical systems, and failures in Oracle, in the SAN and the networks, that took the site down too often for too long.

At that time we had just launched the streaming service, and it was still an experiment, with little content and no TV device support. As we grew streaming over the next few years, we saw that we needed higher availability and more capacity, so we added a second datacenter. This project took far longer than initial estimates, and it was clear that deploying capacity at the scale and rates we were going to need as streaming took off was a skill set that we didn't have in-house. We tried bringing in new ops managers, and new engineers, but they were always overwhelmed by the fire fighting needed to keep the current systems running.

Netflix is a developer oriented culture, from the top down. I sometimes have to remind people that our CEO Reed Hastings was the founder and initial developer of Purify, which anyone developing serious C++ code in the 1990's would have used to find memory leaks and optimize their code. Pure Software merged with Atria and Rational before being swallowed up by IBM. Reed left IBM and formed Netflix. Reed hired a team of very strong software engineers who are now the VPs who run developer engineering for our products. When we were deciding what to do next Reed was directly involved in deciding that we should move to cloud, and even pushing us to build an aggressively cloud optimized architecture based on NoSQL. Part of that decision was to outsource the problems of running large scale infrastructure and building new datacenters to AWS. AWS has far more resources to commit to getting cloud to work and scale, and to building huge datacenters. We could leverage this rather than try to duplicate it at a far smaller scale, with greater certainty of success. So the budget and responsibility for managing AWS and figuring out cloud was given directly to the developer organization, and the ITops organization was left to run its datacenters. In addition, the goal was to keep datacenter capacity flat, while growing the business rapidly by leveraging additional capacity on AWS.

Over the next three years, most of the ITops staff have left and been replaced by a smaller team. Netflix has never had a CIO, but we now have an excellent VP of ITops Mike Kail (@mdkail), who now runs the datacenters. These still support the DVD shipping functions of Netflix USA, and he also runs corporate IT, which is increasingly moving to SaaS applications like Workday. Mike runs a fairly conventional ops team and is usually hiring, so there are sysadmin, database, storage and network admin positions. The datacenter footprint hasn't increased since 2009, although there have been technology updates, and the over-all size is order-of-magnitude a thousand systems.

As the developer organization started to figure out cloud technologies and build a platform to support running Netflix on AWS, we transferred a few ITops staff into a developer team that formed the core of our DevOps function. They build the Linux based base AMI (Amazon Machine Image) and after a long discussion we decided to leverage developer oriented tools such as Perforce for version control, Ivy for dependencies, Jenkins to automate the build process, Artifactory as the binary repository and to construct a "bakery" that produces complete AMIs that contain all the code for a service. Along with AWS Autoscale Groups this ensured that every instance of a service would be totally identical. Notice that we didn't use the typical DevOps tools Puppet or Chef to create builds at runtime. This is largely because the people making decisions are development managers, who have been burned repeatedly by configuration bugs in systems that were supposed to be identical.
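
As a toy stand-in for the "bakery" idea described above (the instance ID is a placeholder, and real pipelines add build, test and versioning steps): instead of configuring machines at runtime, snapshot a fully built instance into an AMI so every copy launched from it is identical. boto3's create_image call does the snapshotting:

    import boto3

    ec2 = boto3.client("ec2")

    def bake_ami(build_instance_id, service, version):
        """Snapshot a fully configured build instance into a launchable image."""
        image = ec2.create_image(
            InstanceId=build_instance_id,   # instance with all code already installed
            Name=f"{service}-{version}",
            Description=f"Baked image for {service} {version}",
        )
        return image["ImageId"]             # every instance launched from this is identical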

By 2012 the cloud capacity has grown to be order-of-magnitude 10,000 instances, ten times the capacity of the datacenter, running in nine AWS Availability zones (effectively separate datacenters) on the US East and West coast, and in Europe. A handful of DevOps engineers working for Carl Quinn (@cquinn - well known from the Java Posse podcast) are coding and running the build tools and bakery, and updating the base AMI from time to time. Several hundred development engineers use these tools to build code, run it in a test account in AWS, then deploy it to production themselves. They never have to have a meeting with ITops, or file a ticket asking someone from ITops to make a change to a production system, or request extra capacity in advance. They use a web based portal to deploy hundreds of new instances running their new code alongside the old code, put one "canary" instance into traffic, if it looks good the developer flips all the traffic to the new code. If there are any problems they flip the traffic back to the previous version (in seconds) and if it's all running fine, some time later the old instances are automatically removed. This is part of what we call NoOps. The developers used to spend hours a week in meetings with Ops discussing what they needed, figuring out capacity forecasts and writing tickets to request changes for the datacenter. Now they spend seconds doing it themselves in the cloud. Code pushes to the datacenter are rigidly scheduled every two weeks, with emergency pushes in between to fix bugs. Pushes to the cloud are as frequent as each team of developers needs them to be, incremental agile updates several times a week is common, and some teams are working towards several updates a day. Other teams and more mature services update every few weeks or months. There is no central control, the teams are responsible for figuring out their own dependencies and managing AWS security groups that restrict who can talk to who.
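
The canary flow described above reduces to a small amount of decision logic. In this hypothetical sketch, the Router and Metrics classes are stand-ins for whatever deployment portal and telemetry an organization actually has; only the flip/rollback decision is the point:

    class Router:
        """Stand-in for a load balancer / deployment portal."""
        def __init__(self):
            self.weights = {}
        def set_weights(self, weights):
            self.weights = weights                    # a real router reprograms the LB

    class Metrics:
        """Stand-in for telemetry; pretends the canary is healthy."""
        def error_rate(self, group, minutes):
            return 0.002

    def canary_deploy(router, metrics, old, new):
        router.set_weights({old: 99, new: 1})         # one canary's worth of traffic
        if metrics.error_rate(new, minutes=15) < 0.01:
            router.set_weights({old: 0, new: 100})    # looks good: flip all traffic
            return True
        router.set_weights({old: 100, new: 0})        # problem: flip back in seconds
        return False

    if __name__ == "__main__":
        print(canary_deploy(Router(), Metrics(), "app-v41", "app-v42"))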

Automated deployment is part of the normal process of running in the cloud. The other big issue is what happens if something breaks. Netflix ITops always ran a Network Operations Center (NOC) which was staffed 24x7 with system administrators. They were familiar with the datacenter systems, but had no experience with cloud. If there was a problem, they would start and run a conference call, and get the right people on the call to diagnose and fix the issue. As the Netflix web site and streaming functionality moved to the cloud it became clear that we needed a cloud operations reliability engineering (CORE) team, and that it would be part of the development organization. The CORE team was lucky enough to get Jeremy Edberg (@jedberg - well known from running Reddit) as its initial lead engineer, and also picked up some of the 24x7 shift sysadmins from the original NOC. The CORE team is still staffing up, looking for the Site Reliability Engineer skill set, and is the second group of DevOps engineers within Netflix. There is a strong emphasis on building tools to make as much of their processes go away as possible; for example, they have no run-books, they develop code instead.

To get themselves out of the loop, the CORE team has built an alert processing gateway. It collects alerts from several different systems, does filtering, has quenching and routing controls (that developers can configure), and automatically routes alerts either to the PagerDuty system (a SaaS application service that manages on-call calendars, escalation and alert life cycles) or to a developer team email address. Every developer is responsible for running what they wrote, and the team members take turns to be on call in the PagerDuty rota. Some teams never seem to get calls, and others are more often on the critical path. During a major production outage conference call, the CORE team never makes changes to production applications; they always call a developer to make the change. The alerts mostly refer to business transaction flows (rather than typical operations oriented Linux level issues) and contain deep links to dashboards and developer oriented Application Performance Management tools like AppDynamics which let developers quickly see where the problem is at the Java method level and what to fix.
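
A hypothetical sketch of such an alert-processing gateway's core: filter alerts with no configured route, quench repeats within a per-team window, and route the rest either to a pager service or to email. The team names, windows, and sender callables are all made up for illustration:

    import time

    # Per-team routing and quench configuration (invented for the example)
    ROUTES = {"playback": {"channel": "pagerduty", "quench_secs": 300},
              "billing":  {"channel": "email",     "quench_secs": 600}}
    _last_sent = {}

    def route_alert(team, alert_key, send_page, send_email):
        cfg = ROUTES.get(team)
        if cfg is None:
            return False                                 # filtered: no route configured
        now = time.time()
        if now - _last_sent.get((team, alert_key), 0.0) < cfg["quench_secs"]:
            return False                                 # quenched: repeat within window
        _last_sent[(team, alert_key)] = now
        send = send_page if cfg["channel"] == "pagerduty" else send_email
        send(team, alert_key)                            # notify the owning developers
        return True

    if __name__ == "__main__":
        page = lambda t, k: print(f"PAGE {t}: {k}")
        mail = lambda t, k: print(f"MAIL {t}: {k}")
        route_alert("playback", "high-5xx-rate", page, mail)   # pages
        route_alert("playback", "high-5xx-rate", page, mail)   # quenched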

The transition from datacenter to cloud also invoked a transition from Oracle, initially to SimpleDB (which AWS runs) and now to Apache Cassandra, which has its own dedicated team. We moved a few Oracle DBAs over from the ITops team and they have become experts in helping developers figure out how to translate their previous experience in relational schemas into Cassandra key spaces and column families. We have a few key development engineers who are working on the Cassandra code itself (an open source Java distributed systems toolkit), adding features that we need, tuning performance and testing new versions. We have three key open source projects from this team available on github.com/Netflix. Astyanax is a client library for Java applications to talk to Cassandra, CassJmeter is a Jmeter plugin for automated benchmarking and regression testing of Cassandra, and Priam provides automated operation of Cassandra including creating, growing and shrinking Cassandra clusters, and performing full and incremental backups and restores. Priam is also written in Java. Finally we have three DevOps engineers maintaining about 55 Cassandra clusters (including many that span the US and Europe), a total of 600 or so instances. They have developed automation for rolling upgrades to new versions, and sequencing compaction and repair operations. We are still developing our Cassandra tools and skill sets, and are looking for a manager to lead this critical technology, as well as additional engineers. Individual Cassandra clusters are automatically created by Priam, and it's trivial for a developer to create their own cluster of any size without assistance (NoOps again). We have found that the first attempts to produce schemas for Cassandra use cases tend to cause problems for engineers who are new to the technology, but with some familiarity and assistance from the Cloud Database Engineering team, we are starting to develop better common patterns to work to, and are extending the Astyanax client to avoid common problems.
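
Astyanax and Priam are Java tools; as a minimal client-side illustration in Python, here is a sketch using the DataStax driver (pip install cassandra-driver). The contact points and schema are invented, and the datacenter name in the replication map must match the cluster's actual topology:

    from cassandra.cluster import Cluster

    cluster = Cluster(["cass-node-1.example.com", "cass-node-2.example.com"])
    session = cluster.connect()

    # Replicate three ways within one (invented) datacenter
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo WITH replication =
        {'class': 'NetworkTopologyStrategy', 'us-east': 3}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS demo.viewing_history (
            user_id text, watched_at timestamp, title text,
            PRIMARY KEY (user_id, watched_at))
    """)
    session.execute(
        "INSERT INTO demo.viewing_history (user_id, watched_at, title) "
        "VALUES (%s, toTimestamp(now()), %s)",
        ("user-42", "The Phoenix Project"))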

In summary, Netflix still does Ops to run its datacenter DVD business. We have a small number of DevOps engineers embedded in the development organization who are building and extending automation for our PaaS, and we have hundreds of developers using NoOps to get their code and datastores deployed in our PaaS and to get notified directly when something goes wrong. We have built tooling that removes many of the operations tasks completely from the developer, and which makes the remaining tasks quick and self-service. There is no ops organization involved in running our cloud, no need for the developers to interact with ops people to get things done, and less time spent actually doing ops tasks than developers would spend explaining what needed to be done to someone else. I think that's different to the way most DevOps places run, but it's similar to other PaaS environments, so it needs its own name, NoOps. [Update: the DevOps community argues that although it's different, it's really just a more advanced end state for DevOps, so let's just call it PaaS for now, and work on a better definition of DevOps].

John Allspaw 12:20 PM

Looks like your comments can't be more than 4,096 chars on this blog. :)

So here's my comment: https://gist.github.com/2140086

Adrian Cockcroft 1:51 PM
Thanks John. I agree with some of what you point out. Netflix in effect over-reacted to a dysfunctional ops organization. I think there are several other organizations who would recognize our situation, and would also find a need to over-react to make the solution stick.

Your definition of DevOps seems far broader than the descriptions and definitions I can find by googling or looking on Wikipedia. I don't recognize what we do in those definitions - since they are so focused on the relationship between a Dev org and an Ops org, so someone should post an updated definition to Wikipedia or devops.com.

Until then maybe I'll just call it NetflOps or botchagalops.

Edward Capriolo 12:28 PM
I have a loaded NoOps question for you :) I am very interested in understanding how, in a decentralized environment, he-said-she-said issues get solved. For example, I know Netflix uses horizontally scalable REST layers as integration points.

Suppose one team/application is having an intermittent problem/bug with another team/application. Team 1 opens an issue. Team 2 reads, investigates, and closes the issue as not a problem. Team 1 double-checks and reasserts that the issue is with Team 2.

In a decentralized environment, how is this roadblock cleared?

As an ops person I spend a good deal of time chasing down problems very external to me. I accept this as an ops person. Since developers are pressed into ops, how much time will a developer spend on another team's reported problems? Will Team 1 forgo their own scrum deadlines this week because Team 2 sucks up all their time reporting bogus problems?

Adrian Cockcroft 1:44 PM
We aren't decentralized. So in the scenario you mention everyone gets in a room and figures it out, or we just end up with a long email thread if it's less serious. APM tools help pinpoint what is going on at the request level down to Java code. Once we have root cause someone files a Jira to fix the issue. There is a manager rotation for centrally prioritizing and coordinating response to major outages. (I'm on duty this week, it comes up every few months.)

We have a few people who have "mad wireshark skills" to debug network layer problems, but that's infrequent and I'm hopeful that boundary.com will come up with better tools in this space.

We don't follow a rigid methodology or fixed release deadlines; we ship code frequently enough that delays aren't usually a big issue, and we have a culture of responsible adults, so we communicate and respect each other's needs across teams. The infrequent large coordinated events like a new country launch are dealt with by picking one manager to own the overall big picture and lead the coordination.

@sixfootdad 2:44 PM
Adrian - Great article! I'm always fascinated to read & hear how folks have solved problems that plague a lot of IT organizations.

I've got a question about something in your reply above: "we have a culture of responsible adults, so we communicate and respect each other's needs across teams".
I've found that, time and time again, the most difficult thing about organizational change is the people. How does one go about hiring "responsible adults"? I know that it might sound like a silly or flippant question, but seriously -- I've lost count of how many times grown-up folks have acted like childish, selfish, spoiled brats.

Adrian Cockcroft 4:50 PM
My views on culture may not be much help - read http://perfcap.blogspot.com/2011/12/how-netflix-gets-out-of-way-of.html to see my explanation of how Netflix does things differently.

Culture is very hard to create or modify but easy to destroy. This is because everyone has to buy into it for it to be effective, and then every manager has to hire only people who are compatible with the culture, and also get rid of people who turn out not to fit in, even if they are doing good work.

So the short answer is start a new company from scratch with the culture you want, and pay a lot of attention to who you hire. I don't think it is possible to do a culture shift if there are more than a roomful of people involved.

Paul Kelly 1:37 AM
Getting a "culture of responsible adults" together is partly down to "culture": although it helps to have mature, sensible individuals, fostering that also means avoiding finger-pointing and blame.

The more defensive people are made to feel, the more likely they are to start throwing tantrums when under pressure. A culture where you can put your hand up and say: "I got that wrong, how can we put it right?" gets better results in the long term than one where you might be fired or disciplined for a genuine mistake.

I always wondered how evil geniuses like Ernst Blofeld recruit when getting it wrong means you might end up in the shark tank...

Adrian Cockcroft 4:53 PM
Yes, incident reviews that don't involve blame and finger-pointing are also key. Making the same mistake several times, trying to hide your mistakes, or clear lapses of judgement can't be tolerated though.

Jason 6:02 PM
Great article Adrian. I have a question. Is a consecutive IP space important? Since AWS EIP doesn't guarantee consecutive addresses, I've wondered if this mattered to app developers.

Anything that could have been done by subnet is out the window: for example, if you wanted to do port sweeps of your network blocks for an audit, perform penetration testing, or parse logs by IP. I suppose this could be done programmatically, but I was curious about your experiences. Does it matter?

Adrian Cockcroft 11:16 AM
If the network topology matters you can use VPC to manage it. Also if you are a big enough customer to have an enterprise level support contract with AWS and use a lot of EIPs it is possible to get them allocated in contiguous blocks.
manul 9:10 AM
Some thoughts from me on NoOps, DevOps, the future of operations, and the need for operations: http://imansson.wordpress.com/2012/03/21/35/
Sudsy 11:44 AM
I'm curious about your statement "Notice that we didn't use the typical DevOps tools Puppet or Chef to create builds at runtime. This is largely because the people making decisions are development managers, who have been burned repeatedly by configuration bugs in systems that were supposed to be identical."

If systems are built from the same Puppet manifests, what kind of configuration bugs can occur? Also, how is the alternative method you chose any less likely to cause the same problems?

Adrian Cockcroft 12:09 PM
Puppet is overkill for what we end up doing. We have a base AMI that is always identical; we install an rpm on it or untar a file once, rebake the AMI, and we are done. That AMI is then turned into running systems by the AWS autoscaler. It's more reliable because there are fewer things to go wrong at boot time: no dependencies on repos or orchestration tools, and we know that the install has completely succeeded the same way on every instance before the instance booted.
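
[Editorial note: to make the bake-then-autoscale flow concrete, below is a minimal sketch using the AWS SDK for Java (v1). The instance ID, AMI name, instance type, and group sizes are hypothetical, and this only illustrates the general pattern described above (snapshot a configured instance into an immutable AMI, then let the autoscaler stamp out identical copies); it is not Netflix's actual tooling.]

    import com.amazonaws.services.autoscaling.AmazonAutoScaling;
    import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
    import com.amazonaws.services.autoscaling.model.CreateAutoScalingGroupRequest;
    import com.amazonaws.services.autoscaling.model.CreateLaunchConfigurationRequest;
    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.CreateImageRequest;

    public class BakeAndScaleSketch {
        public static void main(String[] args) {
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
            AmazonAutoScaling autoscaling = AmazonAutoScalingClientBuilder.defaultClient();

            // 1. "Rebake": snapshot a builder instance (base AMI plus the app
            //    rpm, installed once) into a new, immutable machine image.
            String amiId = ec2.createImage(new CreateImageRequest()
                    .withInstanceId("i-0123456789abcdef0")  // hypothetical builder instance
                    .withName("myapp-1.2.3"))
                .getImageId();
            // (Production code would poll DescribeImages here until the new
            // AMI's state is "available" before using it.)

            // 2. Register a launch configuration that boots exactly that image.
            autoscaling.createLaunchConfiguration(new CreateLaunchConfigurationRequest()
                .withLaunchConfigurationName("myapp-1.2.3")
                .withImageId(amiId)
                .withInstanceType("m1.large"));

            // 3. Hand it to the autoscaler: every instance it starts is
            //    byte-identical, with no boot-time package installs, repo
            //    fetches, or orchestration steps left to go wrong.
            autoscaling.createAutoScalingGroup(new CreateAutoScalingGroupRequest()
                .withAutoScalingGroupName("myapp-v123")
                .withLaunchConfigurationName("myapp-1.2.3")
                .withMinSize(3)
                .withMaxSize(12)
                .withAvailabilityZones("us-east-1a", "us-east-1b", "us-east-1c"));
        }
    }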
TimFraser 1:18 PM

Adrian, last time we talked you mentioned that you were not yet 100% transitioned to AWS; it looks like this has now been achieved, with the exception of the core DVD business.
As many companies are talking about hybrid cloud as a target state, and Netflix went through the transition from private/managed Ops to public-AWS/NoOps, can you talk about the interim state you were in and which development and operations use cases you optimized around during the transition -- that is, what hybrid models and principles did you follow to move from Ops to NoOps? Did you attempt to keep these teams and tools separate, or did you try to create a transition strategy that let you hedge your bets on AWS and the "all-in" strategy, so you could come back if needed?

What Ops governance and dev tooling approach did you take in that state? Specifically, around cloud abstraction layers to ease access management, support, tooling, and elasticity needs. Can you shed some light on the thinking and approach you took while you were mid-transition?

Also, can you comment on how much you govern and drive the development and deployment approach, so that you can unify the continuous integration and continuous deployment tools and reduce the chaos in this space?
Tim Fraser

Adrian Cockcroft 12:15 PM

If you look at the presentations on slideshare.net/adrianco I have discussed in some detail what the transition strategy and tools looked like. We continued to run the old code in the datacenter as we gradually moved functionality away from it. The old DC practices were left intact as we moved developers to the cloud one group at a time.

JonColes 10:17 PM

Hi Adrian,

It seems to me that rather than NoOps, this is an outsourcing of ops. I assume that the issues you had with Ops came from the need to control cost, scale, etc.? If not, would you please clarify?

If it was, how has moving to AWS solved your problems? Is it that AWS can scale more readily based on experience to date? Was cost control an issue before, and is it now, in terms of Ops costs?

Given that NoOps = outsourcing, I can see that managing the relationship with the service provider becomes vital for you?

Thanks in advance, interesting stuff!

Jonathan Coles

DevOps Stackify 8:43 PM

Dev and admin teams struggle these days to keep up with agile development. DevOps helps by breaking down some of the walls, but one of the biggest challenges is getting the entire development team involved, not just one or two people who help with deployments.

The entire team needs visibility to the production server environments to help support and troubleshoot applications.
