
Is DevOps yet another "for profit" technocult proposing "salvation" from current difficulties?

DevOps Hoopla

What’s all the hype about?


The term cult usually refers to a social group defined by its religious, spiritual, or philosophical beliefs, or its common interest in a particular personality, object or goal. The term itself is controversial and it has divergent definitions in both popular culture and academia and it also has been an ongoing source of contention among scholars across several fields of study.[1][2] In the sociological classifications of religious movements, a cult is a social group with socially deviant or novel beliefs and practices...

... ... ...

...In 1990 Lucy Patrick commented: "Although we live in a democracy, cult behavior manifests itself in our unwillingness to question the judgment of our leaders, our tendency to devalue outsiders and to avoid dissent. We can overcome cult behavior, he says, by recognizing that we have dependency needs that are inappropriate for mature people, by increasing anti-authoritarian education, and by encouraging personal autonomy and the free exchange of ideas."[108]

Cult - Wikipedia

Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Group members try to minimize conflict and reach a consensus decision without critical evaluation of alternative viewpoints by actively suppressing dissenting viewpoints, and by isolating themselves from outside influences.

Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and there is loss of individual creativity, uniqueness and independent thinking. The dysfunctional group dynamics of the "ingroup" produces an "illusion of invulnerability" (an inflated certainty that the right decision has been made). Thus the "ingroup" significantly overrates its own abilities in decision-making and significantly underrates the abilities of its opponents (the "outgroup"). Furthermore, groupthink can produce dehumanizing actions against the "outgroup".

Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) play into the likelihood of whether or not groupthink will impact the decision-making process.

Groupthink - Wikipedia



Introduction

You’ve probably heard the hype about DevOps: it helps you deliver products faster, improves your profitability, ensures continuous integration, removes roadblocks from your releases and gives you a competitive advantage. Companies such as Netflix and Amazon supposedly practice it (and, in the case of Netflix, advertise it). That is notwithstanding the very average quality of the Amazon portal (one bad thing is that reviews are indexed along only one dimension, the number of stars, not by "reviewer reputation" or more advanced criteria; many reviews are "fake" friend reviews, some detectable by the total number of reviews posted, but not always), as well as the below-average quality of the Netflix interface (sometimes it is challenging to find the movie that you are interested in, unless you know the exact title ;-).

But junk science is, and always was, based on cherry-picked evidence which has been carefully selected or edited to support a pre-selected "truth". Facts that do not fit are suppressed.

In reality, nothing is new under the sun in software development. People now rehash ideas that are at least 30-40 years old, and the level is often lower than in The Mythical Man-Month, which was published in 1975. What is more important, under the surface of all those lofty goals is the desire of the company brass to use DevOps as another gateway to outsourcing: yet another justification for "firing a lot of people."

If we are talking about DevOps as a software development methodology, it is very similar to Agile: another rotten attempt to reshuffle a set of old ideas (some worthwhile, some not so much) into an attractive, marketable technocult for fun and profit. In 2017 only a few software specialists still believe that Agile is more than a self-promotion campaign by a group of unscrupulous and ambitious people who appointed themselves the high priests of this cult. The half-life of such "cargo cult" programming methodologies is usually a decade (Agile became fashionable around 1996), so it is well past the hype stage of the software methodology cycle and is mostly "forgotten". For good.

As in any cult, there are some grains of rationality in DevOps, along with a poignant critique of the status quo, as well as some ideas about how to cope with a situation that potential followers find unsatisfactory. Some are good, some false, but at least superficially attractive; otherwise such a technocult could not attract followers. Technocults emerge in times of huge dissatisfaction and strife by proposing "salvation" from the current difficulties:

I think this is one of the main reasons why we see this DevOps movement: we are many who see this happen in many organizations, the malfunctioning organization and failing culture that can’t get operations and development to work towards common goals. In those organizations some day development gives up and takes care of operations by themselves and lets the operations guys take care of the old stuff.

But along with this small set of rational (and most probably not new) ideas there is a tremendous set of completely false, even bizarre ideas and claims, which makes it a cult. There is also a fair share of Pollyanna creep. It seems to me that DevOps rather serves as a smoke screen for another round of outsourcing of ops. (Ulf Månsson about infrastructure)

This creates a new way of working. One good example is the cultural change at Nokia Entertainment UK, presented at the DevOps conference in Göteborg: by inclusion, going from 6 releases/year and 50 persons working with releases to 246 releases/year with only 4 persons; see http://www.slideshare.net/pswartout/devopsorg-how-we-are-including-almost-everyone. That story was impressive.

This is accomplished by creating a new language, ripe with new terms: another variant of Newspeak. And this is very important to understand. It is this language that allows packaging bizarre and unproven ideas in a cloak of respectability.

The primitivism of thinking and unfounded claims of this new IT fashion (the typical half-life of an IT fad is less than a decade; who now remembers all the verification hoopla?) are clearly visible in advocacy papers such as Comparing DevOps to traditional IT Eight key differences - DevOps.com. Some of the claims are clearly suspect and smell of "management consultant speak" (an interesting variety of corporate bullshit). For example:

Traditional IT is fundamentally a risk averse organization.  A CIO’s first priority is to do no harm to the business.  It is the reason why IT invests in so much red tape, processes, approvals etc.  All focused on preventing failure.  And yet despite all these investments, IT has a terrible track record – 30% of new projects are delivered late, 50% of all new enhancements are rolled back due to quality issues, and 40% of the delay is caused by infrastructure issues.

A DevOps organization is risk averse too but they also understand that failure is inevitable.  So instead of trying to eliminate failure they prefer to choose when and how they fail.  They prefer to fail small, fail early, and recover fast.  And they have built their structure and process around it.  Again the building blocks we have referred to in this article – from test driven development, daily integration, done mean deployable, small batch sizes, cell structure, automation etc. all reinforce this mindset.

Note the disingenuous claim "IT has a terrible track record – 30% of new projects are delivered late, 50% of all new enhancements are rolled back due to quality issues, and 40% of the delay is caused by infrastructure issues."  As for "So instead of trying to eliminate failure they prefer to choose when and how they fail.  They prefer to fail small, fail early, and recover fast": in case of a SNAFU you cannot predict the size of the failure. Tell the mantra "They prefer to fail small, fail early, and recover fast" to Google or Amazon during their next outage.

Note also that criticism is swept under the carpet and the definition evolves with time to preserve its attractiveness to new members (DevOps is Dead! Long Live DevOps! - DevOps.com):

Some are seekers on the quest for the one, true DevOps. They were misled. I’m here to say: Give it up. Whatever you find at the end of that journey isn’t it. That one true DevOps is dead. Dead and buried. The search is pointless or, kind of worse: The search misses the point altogether.

 DevOps began as a sort of a living philosophy: about inclusion rather than exclusion, raises up rather than chastises, increases resilience rather than assigns blame, makes smarter and more awesome rather than defines process and builds bunkers. At any rate, it was also deliberately never strictly defined.

In 2010, it seemed largely about Application Release Automation (ARA), monitoring, configuration management and a lot of beginning discussion about culture and teams. By 2015, a lot of CI/CD, containers and APIs had been added. The dates are rough but my point is: That’s all still there but now DevOps discussions today also include service design and microservices. Oh, and the new shiny going by the term “serverless.” It is about all of IT as it is naturally adapting.

Connection to Agile

Like Agile before it, DevOps emphasizes a culture of common goals (this time between operations and developers, with the idea of merging them as in the good old days; the initial name was NoOps) and of getting things done together, presenting itself as a new IT culture. But the details are fuzzy and contradictory. Like many similar technologies before it, DevOps means "a good thing". While it seems the IT world is rushing to embrace the concept of DevOps, nobody agrees on what it actually means. And that creates some skepticism.

DevOps paints a picture of cultures once at odds, now working together in harmony. It certainly can be done, but it is easier said than done.

The fact that DevOps is somehow connected with Agile suggests that it is snake oil, with a bunch of salesmen who benefit from training courses, consulting gigs, published books, conferences, and other legal ways to extract money from lemmings. When you read sentences like "DevOps is an environment where an agile relationship will take place between operations and development teams" (https://www.quora.com/What-is-devops-and-why-is-it-important), you quickly understand what type of people can benefit from DevOps.

DevOps hoopla

DevOps is presented by its adherents as an all-singing, all-dancing universal solution to all problems of mankind, or at least to all problems in the data center. This ignores the fact that there is no "techno cure" for large datacenter problems, because those problems are not only technological in nature but also reflect a complex mix of sociological factors (the curse of overcomplexity (see The Collapse of Complex Societies) and especially the balance of power between various groups, such as corporate management, developers and operations staff), as well as certain political dimensions (the connection to the neoliberal transformation of society, the rise of the top 1% and the decline of the middle class). Outsourcing and the related layoffs are probably the most prominent of those ("never mentioned aloud" in the DevOps hoopla) problems.

Of course this is an illusion. Re-reading The Mythical Man-Month helps to get the proper line of thinking about this problem. Large datacenters and large software development projects are inherently complex, and failures and budget overruns in large software projects are a more or less typical course of events. Software development talent is a very scarce commodity. No superficial remedies can solve this tremendously difficult problem. Really, the key problem of the modern datacenter and of modern software applications is mind-boggling complexity. They are probably the most complex artifacts created by mankind. And it is unclear whether the better solution lies in increasing the complexity (as DevOps advocates via the move to virtual instances, configuration management (via Puppet and similar packages) and continuous delivery schemes) or decreasing it (as the KISS software development movement advocates). Maybe the truth lies somewhere in between.

The main problem with the DevOps hoopla is that it substantially increases the complexity of the environment. Both virtual machines and configuration management tools introduce additional "levels of indirection", which make troubleshooting more complex and the causes of failures more varied.

Sometimes a DevOps methodology implemented in modest scope does provide some of the claimed benefits (and some level of configuration management in a large datacenter is a must; the question is what the optimal solution for this is), but often it does not and just degenerates into another round of centralization and outsourcing which makes the situation worse. Often much worse, to the level where IT becomes completely dysfunctional. I saw this effect of DevOps implementation in large corporations.

So it should be evaluated on a case-by-case basis, not as a panacea. As always, much depends on the talent of the people who try to implement it. Also, change in a large datacenter is exceedingly difficult and often degenerates into what can be called "one step forward, two steps back". For example, learning tools such as Puppet or Chef requires quite a lot of effort for a rather questionable return on investment, as complexity precludes full utilization of the tool and its use gets downsized to basic stuff. So automation using them is a mixed blessing.
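To make concrete what such tools actually do at their core, here is a minimal, hedged sketch in Python of the idempotent "desired state" idea behind Puppet and Chef: declare what a resource should look like, and change it only if reality differs. The file path and content are illustrative assumptions, not anything a particular tool prescribes.

```python
# A minimal sketch of the idempotent "desired state" idea behind tools
# like Puppet and Chef; the path and content below are illustrative only.
from pathlib import Path

def ensure_file(path: str, content: str) -> bool:
    """Ensure `path` exists with `content`; return True if a change was made."""
    p = Path(path)
    if p.exists() and p.read_text() == content:
        return False                      # already converged: do nothing (idempotence)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(content)                 # converge to the declared state
    return True

if __name__ == "__main__":
    changed = ensure_file("/tmp/demo/ntp.conf", "server 0.pool.ntp.org\n")
    print("changed" if changed else "already in sync")
```

Running such a "recipe" twice produces no second change; that is the whole point of declarative configuration management, and also why learning a full tool for a handful of such resources can be a questionable investment.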

Similarly, virtual servers, which are part of the DevOps hype, are easier to deploy, but load management of multiple servers running on the same box is an additional and pretty complex task. Also, VMware, which dominates the VM scene, is expensive (which means that the savings go to VMware, not to the enterprise which deploys it ;-) and is a bad VM for Linux. Linux needs para-virtualization, not the full CPU virtualization that VMware, designed for Windows, offers (with some optimization tweaks). Docker, which is a rehash of the idea of Solaris zones, is a better deal, but as a lightweight VM it has its own limitations, often severe.

Cost-wise, DevOps is an expensive proposition which provides both higher complexity and lower reliability (VM problems are added to the set of existing problems; also, if the server goes down, all VMs go down with it). Using VMs is very similar to replacing enterprise servers with desktops, but at a higher cost (disposable desktops are an interesting alternative to VMware VMs cost-wise). The cost of a "decent" desktop now is one tenth of the cost of a "decent" enterprise server ($6K-$8K), but 10 desktops can definitely do more computation-wise.
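A back-of-the-envelope check of that cost claim, with assumed prices and core counts (a $7,000 server versus $700 desktops; the figures are illustrative assumptions, not benchmarks):

```python
# Assumed figures: a $7,000 16-core enterprise server vs $700 4-core desktops.
server_cost, server_cores = 7000, 16
desktop_cost, desktop_cores = 700, 4

n_desktops = server_cost // desktop_cost          # the same budget buys 10 desktops
total_desktop_cores = n_desktops * desktop_cores  # 40 cores in aggregate

print(f"{n_desktops} desktops -> {total_desktop_cores} cores, "
      f"1 server -> {server_cores} cores, both for ${server_cost}")
```

Under these assumptions the desktops win on raw aggregate compute, though of course they lack the reliability, management and I/O features of an enterprise server.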

Connection with outsourcing and the move of enterprise datacenters to the cloud

DevOps serves as a smoke screen for further outsourcing and for moving from a traditional data center to the cloud deployment model. These activities are viewed by the corporate brass as another way to cut costs (and the costs of IT in most large manufacturing corporations are already about 1% or less, so there is not much return on this investment anyway).

From 1992 to 2012 data centers already experienced a technological reorganization, which might be called the Intel revolution. It dramatically increased the role of Intel computers in the datacenter and introduced new server form factors such as blades and new storage technologies such as SAN and NAS. Virtualization became common in the Windows world due to the proliferation of VMware instances. Faster internet and wireless technologies allowed a more distributed workforce and the ability for people to work part of the week from home. Smartphones now exceed the power of a 1996 desktop. Moreover, there was already a distinct trend toward the consolidation of datacenters within large companies.

As a result, many services, such as email and, to a lesser extent, file storage, are already provided via an internal company cloud from central servers. At the same time it became clear that, along with technical challenges, "cloud services" present a huge threat to security and privacy. The driving force behind the cloud is the desire to synchronize and access data from the several devices that people now own (desktop, laptop, smartphone, tablet); in other words, to access your own data from multiple devices. The first such application, the ability to view corporate e-mail from a cell phone, essentially launched BlackBerry smartphones into prominence.

In view of those changes, managing a datacenter remotely became a distinct possibility. That is why DevOps serves higher management as a kind of "IT outsourcing manifesto". But with outsourcing, the problem of loyalty comes to the forefront.


Why DevOps emerged and what problem it addresses

DevOps was a perverse reaction to a real problem in companies that now need to maintain thousands of servers, such as Netflix, Amazon, Facebook and Google. People wanted some technological panacea, and it fills the niche, providing a false solution to real and complex problems. So DevOps is intrinsically connected with attempts to automate system administration using Unix configuration management tools. BTW that does not mean that Puppet or Chef are the proper way to do that (the king might still be naked ;-). They just bask in the DevOps hype, and try to sustain and maintain it for fun and profit.

The second part is an attempt to break the isolated and fossilized silos in which developers and operations staff live in large corporations, sometimes almost without interaction. So in a way this is an over-reaction to a dysfunctional ops organization (as Adrian Cockcroft admitted in his comment), which actually demonstrates that he does not understand how complex organizations operate (his idea of abolishing operations is just an attempt to grab power for the "developer class") and never read Charles Perrow's book (Complex Organizations: A Critical Essay). In complex organizations, technological issues always intersect with power struggles and with the effects of neoliberalism on IT and the corporate environment in general. The blowback of excessive outsourcing of IT is hitting large corporations and is part of the problem that IT organizations experience now. DevOps in this sense is part of the problem and not part of the solution. You will never read about this in the "DevOps hoopla" style of books that proliferate (Amazon lists around a hundred books on this topic, most of them junk). In reality, globalization, while solving one set of problems, creates another, and here IT is both the victim and a part of this "march of neoliberalism" over the globe:

Information and communications technologies are part of the infrastructure of globalization in finance, capital mobility and transnational business. Major changes in the international economic landscape are intertwined and contemporary accelerated globalization is in effect a package deal that includes informatization (applications of information technology), flexibilization (changes in production and labour associated with post-Fordism), financialization (the growing importance of financial instruments and services) and deregulation or liberalization (unleashing market forces). This package effect contributes to the dramatic character of the changes associated with globalization, which serves as their shorthand description. 

A good early overview of why DevOps emerged (when it was still called NoOps) was given in 2012 by Mike Loukides, who wrote a historically important paper on the subject:

Adrian Cockcroft’s article about NoOps at Netflix ignited a controversy that has been smoldering for some months. John Allspaw’s detailed response to Adrian’s article makes a key point: What Adrian described as “NoOps” isn’t really. Operations doesn’t go away. Responsibilities can, and do, shift over time, and as they shift, so do job descriptions. But no matter how you slice it, the same jobs need to be done, and one of those jobs is operations. What Adrian is calling NoOps at Netflix isn’t all that different from Operations at Etsy. But that just begs the question: What do we mean by “operations” in the 21st century? If NoOps is a movement for replacing operations with something that looks suspiciously like operations, there’s clearly confusion. Now that some of the passion has died down, it’s time to get to a better understanding of what we mean by operations and how it’s changed over the years.

At a recent lunch, John noted that back in the dawn of the computer age, there was no distinction between dev and ops. If you developed, you operated. You mounted the tapes, you flipped the switches on the front panel, you rebooted when things crashed, and possibly even replaced the burned out vacuum tubes. And you got to wear a geeky white lab coat. Dev and ops started to separate in the ’60s, when programmer/analysts dumped boxes of punch cards into readers, and “computer operators” behind a glass wall scurried around mounting tapes in response to IBM JCL. The operators also pulled printouts from line printers and shoved them in labeled cubbyholes, where you got your output filed under your last name.

The arrival of minicomputers in the 1970s and PCs in the ’80s broke down the wall between mainframe operators and users, leading to the system and network administrators of the 1980s and ’90s. That was the birth of modern “IT operations” culture. Minicomputer users tended to be computing professionals with just enough knowledge to be dangerous. (I remember when a new director was given the root password and told to “create an account for yourself” … and promptly crashed the VAX, which was shared by about 30 users). PC users required networks; they required support; they required shared resources, such as file servers and mail servers. And yes, BOFH (“Bastard Operator from Hell”) serves as a reminder of those days. I remember being told that “no one” else is having the problem you’re having — and not getting beyond it until at a company meeting we found that everyone was having the exact same problem, in slightly different ways. No wonder we want ops to disappear. No wonder we wanted a wall between the developers and the sysadmins, particularly since, in theory, the advent of the personal computer and desktop workstation meant that we could all be responsible for our own machines.

But somebody has to keep the infrastructure running, including the increasingly important websites. As companies and computing facilities grew larger, the fire-fighting mentality of many system administrators didn’t scale. When the whole company runs on one 386 box (like O’Reilly in 1990), mumbling obscure command-line incantations is an appropriate way to fix problems. But that doesn’t work when you’re talking hundreds or thousands of nodes at Rackspace or Amazon. From an operations standpoint, the big story of the web isn’t the evolution toward full-fledged applications that run in the browser; it’s the growth from single servers to tens of servers to hundreds, to thousands, to (in the case of Google or Facebook) millions. When you’re running at that scale, fixing problems on the command line just isn’t an option. You can’t afford letting machines get out of sync through ad-hoc fixes and patches. Being told “We need 125 servers online ASAP, and there’s no time to automate it” (as Sascha Bates encountered) is a recipe for disaster.

The response of the operations community to the problem of scale isn’t surprising. One of the themes of O’Reilly’s Velocity Conference is “Infrastructure as Code.” If you’re going to do operations reliably, you need to make it reproducible and programmatic. Hence virtual machines to shield software from configuration issues. Hence Puppet and Chef to automate configuration, so you know every machine has an identical software configuration and is running the right services. Hence Vagrant to ensure that all your virtual machines are constructed identically from the start. Hence automated monitoring tools to ensure that your clusters are running properly. It doesn’t matter whether the nodes are in your own data center, in a hosting facility, or in a public cloud. If you’re not writing software to manage them, you’re not surviving.

Furthermore, as we move further and further away from traditional hardware servers and networks, and into a world that’s virtualized on every level, old-style system administration ceases to work. Physical machines in a physical machine room won’t disappear, but they’re no longer the only thing a system administrator has to worry about. Where’s the root disk drive on a virtual instance running at some colocation facility? Where’s a network port on a virtual switch? Sure, system administrators of the ’90s managed these resources with software; no sysadmin worth his salt came without a portfolio of Perl scripts. The difference is that now the resources themselves may be physical, or they may just be software; a network port, a disk drive, or a CPU has nothing to do with a physical entity you can point at or unplug. The only effective way to manage this layered reality is through software.

So infrastructure had to become code. All those Perl scripts show that it was already becoming code as early as the late ’80s; indeed, Perl was designed as a programming language for automating system administration. It didn’t take long for leading-edge sysadmins to realize that handcrafted configurations and non-reproducible incantations were a bad way to run their shops. It’s possible that this trend means the end of traditional system administrators, whose jobs are reduced to racking up systems for Amazon or Rackspace. But that’s only likely to be the fate of those sysadmins who refuse to grow and adapt as the computing industry evolves. (And I suspect that sysadmins who refuse to adapt swell the ranks of the BOFH fraternity, and most of us would be happy to see them leave.) Good sysadmins have always realized that automation was a significant component of their job and will adapt as automation becomes even more important. The new sysadmin won’t power down a machine, replace a failing disk drive, reboot, and restore from backup; he’ll write software to detect a misbehaving EC2 instance automatically, destroy the bad instance, spin up a new one, and configure it, all without interrupting service. With automation at this level, the new “ops guy” won’t care if he’s responsible for a dozen systems or 10,000. And the modern BOFH is, more often than not, an old-school sysadmin who has chosen not to adapt.
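As a concrete illustration of the "detect, destroy, respawn" loop described above (this is a hedged sketch, not Netflix's or anyone's actual tooling; it assumes boto3 and suitable IAM permissions, and the region, AMI id and instance type are hypothetical placeholders):

```python
# A sketch of automated remediation for misbehaving EC2 instances, assuming
# boto3 credentials are configured; the AMI id and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def replace_impaired_instances():
    """Terminate instances EC2 reports as impaired and launch replacements."""
    statuses = ec2.describe_instance_status(
        Filters=[{"Name": "instance-status.status", "Values": ["impaired"]}]
    )["InstanceStatuses"]
    for status in statuses:
        ec2.terminate_instances(InstanceIds=[status["InstanceId"]])  # destroy the bad instance
        ec2.run_instances(                                           # spin up a fresh one
            ImageId="ami-0123456789abcdef0",  # hypothetical AMI
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
        )
```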

James Urquhart nails it when he describes how modern applications, running in the cloud, still need to be resilient and fault tolerant, still need monitoring, still need to adapt to huge swings in load, etc. But he notes that those features, formerly provided by the IT/operations infrastructures, now need to be part of the application, particularly in “platform as a service” environments. Operations doesn’t go away, it becomes part of the development. And rather than envision some sort of uber developer, who understands big data, web performance optimization, application middleware, and fault tolerance in a massively distributed environment, we need operations specialists on the development teams. The infrastructure doesn’t go away — it moves into the code; and the people responsible for the infrastructure, the system administrators and corporate IT groups, evolve so that they can write the code that maintains the infrastructure. Rather than being isolated, they need to cooperate and collaborate with the developers who create the applications. This is the movement informally known as “DevOps.”

Amazon’s EBS outage last year demonstrates how the nature of “operations” has changed. There was a marked distinction between companies that suffered and lost money, and companies that rode through the outage just fine. What was the difference? The companies that didn’t suffer, including Netflix, knew how to design for reliability; they understood resilience, spreading data across zones, and a whole lot of reliability engineering. Furthermore, they understood that resilience was a property of the application, and they worked with the development teams to ensure that the applications could survive when parts of the network went down. More important than the flames about Amazon’s services are the testimonials of how intelligent and careful design kept applications running while EBS was down. Netflix’s ChaosMonkey is an excellent, if extreme, example of a tool to ensure that a complex distributed application can survive outages; ChaosMonkey randomly kills instances and services within the application. The development and operations teams collaborate to ensure that the application is sufficiently robust to withstand constant random (and self-inflicted!) outages without degrading.

On the other hand, during the EBS outage, nobody who wasn’t an Amazon employee touched a single piece of hardware. At the time, JD Long tweeted that the best thing about the EBS outage was that his guys weren’t running around like crazy trying to fix things. That’s how it should be. It’s important, though, to notice how this differs from operations practices 20, even 10 years ago. It was all over before the outage even occurred: The sites that dealt with it successfully had written software that was robust, and carefully managed their data so that it wasn’t reliant on a single zone. And similarly, the sites that scrambled to recover from the outage were those that hadn’t built resilience into their applications and hadn’t replicated their data across different zones.

In addition to this redistribution of responsibility, from the lower layers of the stack to the application itself, we’re also seeing a redistribution of costs. It’s a mistake to think that the cost of operations goes away. Capital expense for new servers may be replaced by monthly bills from Amazon, but it’s still cost. There may be fewer traditional IT staff, and there will certainly be a higher ratio of servers to staff, but that’s because some IT functions have disappeared into the development groups. The bonding is fluid, but that’s precisely the point. The task — providing a solid, stable application for customers — is the same. The locations of the servers on which that application runs, and how they’re managed, are all that changes.

One important task of operations is understanding the cost trade-offs between public clouds like Amazon’s, private clouds, traditional colocation, and building their own infrastructure. It’s hard to beat Amazon if you’re a startup trying to conserve cash and need to allocate or deallocate hardware to respond to fluctuations in load. You don’t want to own a huge cluster to handle your peak capacity but leave it idle most of the time. But Amazon isn’t inexpensive, and a larger company can probably get a better deal taking its infrastructure to a colocation facility. A few of the largest companies will build their own datacenters. Cost versus flexibility is an important trade-off; scaling is inherently slow when you own physical hardware, and when you build your data centers to handle peak loads, your facility is underutilized most of the time. Smaller companies will develop hybrid strategies, with parts of the infrastructure hosted on public clouds like AWS or Rackspace, part running on private hosting services, and part running in-house. Optimizing how tasks are distributed between these facilities isn’t simple; that is the province of operations groups. Developing applications that can run effectively in a hybrid environment: that’s the responsibility of developers, with healthy cooperation with an operations team.

The use of metrics to monitor system performance is another respect in which system administration has evolved. In the early ’80s or early ’90s, you knew when a machine crashed because you started getting phone calls. Early system monitoring tools like HP’s OpenView provided limited visibility into system and network behavior but didn’t give much more information than simple heartbeats or reachability tests. Modern tools like DTrace provide insight into almost every aspect of system behavior; one of the biggest challenges facing modern operations groups is developing analytic tools and metrics that can take advantage of the data that’s available to predict problems before they become outages. We now have access to the data we need, we just don’t know how to use it. And the more we rely on distributed systems, the more important monitoring becomes. As with so much else, monitoring needs to become part of the application itself. Operations is crucial to success, but operations can only succeed to the extent that it collaborates with developers and participates in the development of applications that can monitor and heal themselves.

Success isn’t based entirely on integrating operations into development. It’s naive to think that even the best development groups, aware of the challenges of high-performance, distributed applications, can write software that won’t fail. On this two-way street, do developers wear the beepers, or IT staff? As Allspaw points out, it’s important not to divorce developers from the consequences of their work since the fires are frequently set by their code. So, both developers and operations carry the beepers. Sharing responsibilities has another benefit. Rather than finger-pointing post-mortems that try to figure out whether an outage was caused by bad code or operational errors, when operations and development teams work together to solve outages, a post-mortem can focus less on assigning blame than on making systems more resilient in the future. Although we used to practice “root cause analysis” after failures, we’re recognizing that finding out the single cause is unhelpful. Almost every outage is the result of a “perfect storm” of normal, everyday mishaps. Instead of figuring out what went wrong and building procedures to ensure that something bad can never happen again (a process that almost always introduces inefficiencies and unanticipated vulnerabilities), modern operations designs systems that are resilient in the face of everyday errors, even when they occur in unpredictable combinations.

In the past decade, we’ve seen major changes in software development practice. We’ve moved from various versions of the “waterfall” method, with interminable up-front planning, to “minimum viable product,” continuous integration, and continuous deployment. It’s important to understand that the waterfall and methodology of the ’80s aren’t “bad ideas” or mistakes. They were perfectly adapted to an age of shrink-wrapped software. When you produce a “gold disk” and manufacture thousands (or millions) of copies, the penalties for getting something wrong are huge. If there’s a bug, you can’t fix it until the next release. In this environment, a software release is a huge event. But in this age of web and mobile applications, deployment isn’t such a big thing. We can release early, and release often; we’ve moved from continuous integration to continuous deployment. We’ve developed techniques for quick resolution in case a new release has serious problems; we’ve mastered A/B testing to test releases on a small subset of the user base.

All of these changes require cooperation and collaboration between developers and operations staff. Operations groups are adopting, and in many cases, leading in the effort to implement these changes. They’re the specialists in resilience, in monitoring, in deploying changes and rolling them back. And the many attendees, hallway discussions, talks, and keynotes at O’Reilly’s Velocity conference show us that they are adapting. They’re learning about adopting approaches to resilience that are completely new to software engineering; they’re learning about monitoring and diagnosing distributed systems, doing large-scale automation, and debugging under pressure. At a recent meeting, Jesse Robbins described scheduling EMT training sessions for operations staff so that they understood how to handle themselves and communicate with each other in an emergency. It’s an interesting and provocative idea, and one of many things that modern operations staff bring to the mix when they work with developers.

What does the future hold for operations? System and network monitoring used to be exotic and bleeding-edge; now, it’s expected. But we haven’t taken it far enough. We’re still learning how to monitor systems, how to analyze the data generated by modern monitoring tools, and how to build dashboards that let us see and use the results effectively. I’ve joked about “using a Hadoop cluster to monitor the Hadoop cluster,” but that may not be far from reality. The amount of information we can capture is tremendous, and far beyond what humans can analyze without techniques like machine learning.

Likewise, operations groups are playing a huge role in the deployment of new, more efficient protocols for the web, like SPDY. Operations is involved, more than ever, in tuning the performance of operating systems and servers (even ones that aren’t under our physical control); a lot of our “best practices” for TCP tuning were developed in the days of ISDN and 56 Kbps analog modems, and haven’t been adapted to the reality of Gigabit Ethernet, OC48* fiber, and their descendants. Operations groups are responsible for figuring out how to use these technologies (and their successors) effectively. We’re only beginning to digest IPv6 and the changes it implies for network infrastructure. And, while I’ve written a lot about building resilience into applications, so far we’ve only taken baby steps. There’s a lot there that we still don’t know. Operations groups have been leaders in taking best practices from older disciplines (control systems theory, manufacturing, medicine) and integrating them into software development.

And what about NoOps? Ultimately, it’s a bad name, but the name doesn’t really matter. A group practicing “NoOps” successfully hasn’t banished operations. It’s just moved operations elsewhere and called it something else. Whether a poorly chosen name helps or hinders progress remains to be seen, but operations won’t go away; it will evolve to meet the challenges of delivering effective, reliable software to customers. Old-style system administrators may indeed be disappearing. But if so, they are being replaced by more sophisticated operations experts who work closely with development teams to get continuous deployment right; to build highly distributed systems that are resilient; and yes, to answer the pagers in the middle of the night when EBS goes down. DevOps.



Supplement

PSYCHOLOGY OF SPIRITUAL SECTS. The psychological dynamics underlying the creation and growth of spiritual movements.

The main features

What are the features of psychological influence most common to spiritual movements?


1. Type of members.

There are many types of members, each with its own motivation.

The weaker the individual's independence, the more he will be tied to the group. Members who understand group mechanisms, and are prepared to cope with them in order to direct their attention to the spirit, will benefit most, as they are selective in picking up the cream of what is given and taking the rest with a grain of salt.

2. Leader/founder/guru

New religious movements usually arise around a father/mother figure who has gained authority after receiving a special revelation, communication, truth or insight. His charisma will vouchsafe loyal followers, even if his lifestyle may give rise to severe doubts in some. He may boost his prestige by claiming to follow in the footsteps of an esteemed spiritual teacher, represent an esoteric tradition, be of noble descent, or channel the wisdom of a great mind. (Eckankar's Paul Twitchell is the last in the lineage of 970 "Eckmasters")
He/she represents an archetype in members' subconscious minds: that of a wise father or mother. As such he/she will have a compelling influence on followers who project their father/mother complex on him/her.

Alternatively, women may fall in love with the leader, worship him, and exert themselves to cater to his wishes and whims. They will try to stay in his vicinity, make themselves indispensable and slowly take control of the movement. Jealousy amongst them will make things even worse and split the ranks.

The psychological make-up of a guru may be generalized as follows:

Jeffrey Masson (see below) has this to say about gurus:

Every guru claims to know something you cannot know by yourself or through ordinary channels. All gurus promise access to a hidden reality if only you will follow their teaching, accept their authority, hand your life over to them. Certain questions are off limits. There are things you cannot know about the guru and the guru's personal life. Every doubt about the guru is a reflection of your own unworthiness, or the influence of an external evil force. The more obscure the action of the guru, the more likely it is to be right, to be cherished. Ultimately you cannot admire the guru, you must worship him. You must obey him, you must humble yourself, for the greater he is, the less you are - until you reach the inner circle and can start abusing other people the way your guru abused you. All this is in the very nature of being a guru.

Sub-conscious drive

Nature seems to instill in a person faced with a mission, great task, or challenge a feeling of superiority, insurmountable optimism, and enormous self-esteem, bordering on an inflated ego, to accomplish what is needed. This drive is reminiscent of the reckless impetus of the adolescent. Having reached maturity, a person may feel "chosen": impelled to forge ahead with vigor and inspire others. Undaunted in the face of obstacles and criticism, it is as if a cloak of invulnerability is laid on his/her shoulders.
Similarly, an artist may be driven by a compulsion to express an inner content. He will be prepared to sacrifice everything to give way to his creative impulse. Fortunately, his sacrifice does not involve more than the people immediately around him.

Not so with the leader. The number of his followers may grow to considerable proportions. Nature is not concerned whether his sense of superiority has any real foundation. The inflated ego is more or less instinctively driven towards a goal.
Although attaining heights no one would have thought conceivable of that person, when the hour of truth comes, events may prove that he has overreached himself, disregarded good advice, or completely lost his sense of reality. The result may be either catastrophe, or the uncritical followers may be saddled with a heritage built on quicksand: on a flight of fancy without actual foundation.
This applies to many fields of human endeavour (Hitler), but especially in the treacherous domain of the spirit.

Discipline - nausea

The teacher may come to the conclusion that unless his followers change fundamentally, undergoing a catharsis or transformation, they will never be able to move forward. He/she regards them as being "asleep" (Jesus, Gurdjieff). Unless drastic measures are employed, they will not wake up. To jolt them out of their complacency, great sacrifices are demanded. Jesus asked a rich young man to give up all his worldly possessions (St. Matthew 19:21) before following him. Masters in Zen Buddhism, or Gurdjieff, made novices undergo a harsh regime in order to crack them open and attain a different state of mind.

This I can have no quarrel with, if it is done against a background of compassion. If the unselfish motive disappears, or commercial considerations become dominant, the harsh discipline may become morbid and degrading. Having lost his dedication, the teacher may become nauseated by the mentality and sheepishness of his followers, and in some cases derive a sadistic delight in tormenting them.

In recent years reports have come out about sexual violation of members by gurus, leaders and... bishops! Another example of authority being abused.

The path of a guru is like walking a razor's edge. He may easily succumb to the temptation of exploiting the power he has attained over his followers. Financial irresponsibility, abuse of followers, reprehensible sexual behaviour... mass suicide: it is all within his reach once he has overstepped boundaries.

Legacy

During his lifetime the leader will act as a moderator and steer the movement. He will re-interpret his teachings as he sees fit from the responses he receives. The death of the founder marks a turning point. His teachings will become inflexible, as no one dares to tamper with them as he did himself. The élan disappears and rigidity takes over, unless another figure arises who leads the movement in a different direction, for better or for worse (St. Paul).

3. Doctrine/teaching

The more secret(ive) the leader's sayings, the better. Pronouncements are characterized by great certainty and authority, as if they were the word of God. In some cases they are presented as such. Through his special way of delivery and presentation it may escape the audience that similar wisdom may be found in any book on spirituality nowadays available in the bookshop around the corner.

Whether the guru bases his wise words on actual experience or on hearsay is difficult to ascertain. In general it may be said that the more mystifying his teachings, the stronger their appeal. After all, they are beyond reason and should appeal only to the heart.
An exception should be made for true mystical literature based on inner experience, which can hardly be expected to appeal to the intellect, but should be appreciated intuitively, especially by those who have had similar experiences.

Group-speak

Members may adopt fresh meanings for words, talking to each other in a jargon that the outsider can hardly follow (group-speak). The result is an inability to relate in speech, or explain new concepts, to the outsider (Fourth Way).
(This may be best understood in other fields: help programs of software, pop-up windows, warning messages, not to speak of manuals for installing hardware, drawn up by boffins, are a nightmare to most users!)

Another characteristic is to lift one aspect of religious truth out of context and make it absolute. Such a key truth will overshadow all other aspects of faith.
It may be:

etc. When this occurs other significant facets of faith are pushed to the background.

Such partial truths are often heralded as the result of a search for knowledge. The motto "Knowledge is power" is used to suggest that the statements are objective, scientific, or historical facts. Actually they cannot withstand even the merest critical scrutiny.
Authorities may be paraded to back up such claims. They have either never been heard of, cannot be considered impartial, or their pronouncements have been lifted out of context. The discussion about the veracity of evolution is full of such red herrings.

4. Uniqueness of the movement

Movements will usually extol their superiority over others. After all, there should be a strong reason to select that particular group. Some present themselves as the sole way towards salvation, as God's chosen people. Others promise a benefit that is reserved only for members of that sect. To avert attention, some pride themselves on not having a teaching, or on their openness and democratic rules.

In short new movements will advance a variety of reasons for their uniqueness. Herewith a few:

Noteworthy is the vehemence with which groups stress the differences between each other. The closer movements are in outlook, the more virulent the attacks on their rivals become, seemingly more than on groups which follow a completely different belief.

Eric Hoffer writes in his 'The True Believer': "true believers of various hues ....view each other with mortal hatred and are ready to fly at each other's throat..."

This manifests itself especially when groups split. In Christianity one could not stoop low enough to attack other followers of Christ who held a slightly different opinion. It resulted in the persecution of heretics, the burning of early Christian literature, and disastrous wars.

Despite their peaceful appearance, relatively new spiritual movements like Theosophy, Rosicrucianism, etc., following splits, exert themselves in accusations against former comrades.

Attacks against belief in paranormal phenomena, for instance by CSICOP, are reminiscent of the zeal of a Christian crusade, except that they have their roots in humanism and its desperate clinging to the rationalistic/materialistic outlook on life current at the beginning of this century. Consequently the groups of these 'evangelists of rational enlightenment' show behavioural patterns and a vehemence similar to sects.

5. Probation and conversion

Certain sects are only too eager to accept individuals. They may have high entrance fees, or their members may be swayed by zeal to convert.

Many movements will put up a barrier by means of an initiation to test the applicant's fitness to become part of the group. Henceforth members will play an important pioneer part in the foretold future. Having reached such a coveted stage, they will not fail to follow what they are told, for fear of expulsion.

The new member may undergo a conversion, gaining a completely new insight into the meaning of life, seeing it the way the sect does. His previous life with all its relationships has become meaningless. He may have turned himself inside out by a confession of his previous "sins". His conversion is marked by a feeling of peace, happiness and transcendence.

6. Failure of predictions

Common belief in a prophecy will be a strong binding force. One of the principal attractions of the first Christian sects was that they offered salvation from a threatening disaster: the end of the world. Only the baptized could await a glorious future. Sects like the Jehovah's Witnesses have taken over this successful formula.

Christians have had to come up with all sorts of arguments to explain away the unfulfilled prediction of their founder regarding the end of the world: "This generation shall not pass away, till all these things be accomplished." (St. Matthew 24:34). One of the lame excuses is that this prediction concerns the fall of Jerusalem only. However, all prophecies in the New Testament in this respect suggest that the impending doom was to be expected in their lifetime.

Jehovah's Witnesses have taken the risk of being more specific in their predictions. Older members, who built their faith on them, have had the humiliating experience of having to explain away, several times in their lives, the failure of their forewarnings.

But predictions are not limited to the religious faiths. The New Age movements use this shared belief in portents as well. For more than sixty years an imminent landing of UFOs has been predicted. Various cults claimed in vain to be their first contactees.
In other movements the second coming of Christ was a main feature (Benjamin Creme). In Theosophy a Messenger was expected from 1975 onward.

The uncritical believers in Edgar Cayce's trance sayings put weight on his predictions of cataclysms.

Nostradamus' obscure astrological foretellings have captured the minds of people for centuries. Each time his verses were interpreted anew to suit the circumstances. In hindsight some of his quatrains seem to have relevance to the catastrophe of the destroyed World Trade Center. Quatrains I, 87 - IX, 92 and X, 59 may refer to skyscrapers in New York involved in a terrible explosion.

Sociologists have observed that failure of a prediction has quite the opposite effect on believers. Contrary to what one would expect, it may cause a rally amongst members. Failure is blamed on a misunderstanding, or a faux pas by members. To counteract ridicule they tend to stick together more than ever.

Of course there is a limit. According to a social survey, when predictions fail to materialize three times in a row, members are bound to stop, reflect and draw conclusions.
The shattering of such false hopes comes as a severe blow and may mark the beginning of the end of a movement.
One wonders in this respect how many members of the People Forever International sect, promoting physical immortality for its followers, would have to die before their groups would break up in disappointment. (Since I wrote this ten years ago I have been informed that indeed members have died and the movement broke up in 1998!)
Yet we see from the Jehovah's Witnesses that skilful manoeuvring may offset unfulfilled prophecies.

To what extremes such beliefs can lead is shown by the mass suicide of the Heaven's Gate sect and, later, events in Uganda. Such tragic endings are the result of various contributing factors, which are beyond the scope of this article.

7. Belief versus intellect/Secrets

Often the disciplines followed in spiritual movements have the effect of lowering the threshold to the unconscious mind. Suggestion will begin to play an important part. Precepts are experienced as the truth, sacrosanct and sure. There is no element of doubt anymore about assumptions and speculation, although they actually lack any factual foundation.

Absolute belief that the Bible is God's word is the cornerstone of most orthodox Christian sects. In Islam the Koran is supposed to contain the word of Allah.
Intellectual analysis of faith is tantamount to heresy.

The ideal breeding ground for convictions is mass gatherings. During mass gatherings, such as congresses, members are stirred up to a euphoria, the effect of which may linger on for weeks. This is the precise moment for leaders, or committees, to announce fresh sectarian measures, postulate incredible notions/prophecies, call for further sacrifices, etc. It will all be accepted unquestioningly. Only at a later date, when the euphoria has worn off, will one start to wonder about what was decided.

Secrets

Spiritual movements often hide a corpse in their closet. It may be a part of the history of the movement, or details about the hidden life of the leader or of a once revered figure. Things may have been written by them that one does not like to be reminded of. A fight or quarrel, full of vehemence and hatred, may have led to a split.

There are so many examples that a long list could be drawn up of the many concealed secrets of spiritual groups.
Whereas in most movements the works of the leader are known almost by heart, Jehovah's Witnesses hardly know of the existence of the seven volumes of writings, Studies in the Scriptures, by their founder Charles Taze Russell (1852-1916). Some of his opinions are such a cause of embarrassment that they are not deemed worth reading nowadays.

Eventually a renegade member will reveal such secrets in writing. Frantic denials and counter-accusations by those presently in charge will follow almost automatically. These are usually accepted in gratitude by devotees, who cannot get over the shock of such revelations.

8. Common practice, work and ritual

Communal singing, ritual and (incomprehensible) practices (Freemasonry) are strong binding factors. The more irrational they are, the better. Others are a special food regime, a change of name or clothing, or a common aversion.
Joint work for the benefit of the group gives the feeling of a common endeavour and unites the participants. So does proselytization in the streets, and menial work of construction and renovation of premises. There is a thin line between true participation and exploitation, however.

A dubious practice, common in the seventies, was to incite members to criticize one of their number to the point where he or she would break down under the weight of often absurd allegations and insults, resulting in a brainwashing effect.

9. Sacrifices, financial secrecy, favours to the rich

Finances are always a ticklish matter. Human groups always wish to grow, and growth requires money. Accountability is often not considered appropriate, so the danger arises that members of the inner circle become lax in spending members' contributions. Ambitious schemes create a constant need for funding. This is the ideal breeding ground for favours to wealthy members. Those who contribute generously stand a better chance of being taken into confidence and admitted to the inner circles. Often, as a proof of loyalty, extraordinary sums of money are demanded.

Degrees of initiation may depend on one's years of loyalty to the group. In Eckankar up to 8 degrees are given. However, if one fails to pay membership fees for some time, degrees of initiation may be stripped away again.

Besides financial contributions, members will often be expected to offer services to the group. It becomes dubious, however, when they also have to work for practically nothing in commercial enterprises. Movements that gather wealth at the expense of their members are questionable. Requests for the return of contributions or investments are seldom if ever honoured.

10. Unquestioning leadership, reprehensible behaviour amongst members

Man in a herd may not show the best side of his nature. Unconscious drives may rule his or her behaviour, especially in circumstances where man strives for the spiritual. He or she may tend to show split-personality behaviour. On one hand there is the spiritual personality, which is supposed to have come to terms with its animal nature: wise, friendly and compassionate on the outside. In the shadows lurks the personality that has been forced into the background, still ridden with all the expelled human frailties. In moments of weakness it will see its chance to play hideous tricks, and it will do so without being noticed by the person involved. The result: uncharitable behaviour, envy, malicious gossip, hypocrisy, harsh words, insensitivity, unfounded criticism and even worse, not expected from such a charismatic figure. It is one of the main reasons for people leaving a particular group in great disappointment.

It is not often realized that, like other human groups, spiritual movements behave like organisms. Group-psychological processes manifest themselves which are sometimes not unlike those in primitive societies. There is the pecking order, the alpha members, and also the group instinct directed against similar groups. Aggression goes unnoticed and is tolerated when an acceptable common goal is provided - for instance, hostility against an individual outside the group, or against a critical member inside it. This has the effect of strengthening ties within the group, as in the animal world.

If leadership loses contact with its members it will have to exert greater discipline. Deviating opinions cannot be tolerated anymore. Persons who hold them are seen as traitors. Acting against them, preferably in secret, is the only way out for the leadership to avert this danger. Members may disappear suddenly without the reasons becoming known, much to the surprise of those left behind. For such machinations in Theosophy read Emily Lutyens: "Candles in the Sun".

Spiritual newsgroups on the Internet provide an illustration of (un)conscious nastiness being ventilated under the veil of anonymity. Messages are often rife with diatribes, personal attacks and misunderstanding. Many such contributors have no interest at all in the matters discussed. Yet even in closed newsgroups, open only to subscribers, complaints about the tone of communications are aired.

11. Fear of exclusion

The stronger members are tied to a group, the more the fear of exclusion lurks. They may have invested their life's savings in the work (Scientology), paid a percentage of their income, failed to complete their studies or build a career, or sacrificed a successful one.
In many cases a member will have alienated himself from family and friends. He has been told to cut ties with the past. (In the Attleboro cult followers are advised to burn photographs that remind them of bygone days.) No wonder his or her sudden conversion, accompanied by fanaticism and an urge to proselytize, has driven away former friends and relatives.
There is no way left but to seek comfort and understanding with members of the spiritual group.

Isolation is sometimes intentionally sought. Formerly, in the Bhagavan Shri Rajneesh movement, members went about in red/orange dress and wore malas with a photo of their master, thus setting themselves apart from the mundane world.

The Hare Krishna movement goes even further. Groups of members go out into the streets in their oriental dresses for song and dance routines. However, in most movements the alienation is far more subtle and the natural outcome of an adverse attitude towards the materialism of society.

The true nature of the so-called friendships within the group will only be revealed after a devotee has left the fold. Members have seen this happen, but did not give it a thought at the time, because it happened to someone else. But when they undergo the same fate themselves they will feel the humiliation of being ignored, of no longer being greeted, of a marriage gone - even of no longer being recognized by their own children.

The outcast feels thrown into an abyss. He is cut off from social contacts, his life in pieces. The magnitude of this desperate experience should not be underestimated. The renegade will feel deep shame. He may have confessed intimate secrets in the group, which are now being ridiculed by his former so-called friends. The expellee, deeply hurt, may become embittered and even enter a suicidal mental state.

Those readers who have been a member of a movement may recognize some of the above psychological mechanisms. The first reaction of non-members may be to vow never to enter a group. Let us bear in mind, however, that it should be considered a challenge to face these obstacles for the benefit that may result from association with kindred spirits.

A prerequisite is that these conditions are noticed, looked in the eye, and not denied. The closer people live together, the more group tensions will build up. Even in such reputable circles as Freudian psychoanalytical associations they occur. Few communes are granted a long life, as a result of one or more of the pitfalls summarized above. Headquarters, contrary to expectations, are known to be hotbeds of gossip, mutual repulsion and cynicism.

So, do not be disheartened, and join a group of your liking. After all, people who marry also see wrecked marriages all around them, yet go ahead intent on a happy union in mutual trust, without regard to the outcome.
Involvement with other people will lead to personal growth if the consequences are anticipated. The more one stands on one's own feet, the more benefit will arise from cooperating with others. It should be borne in mind that the saying "It is better to give than to receive" is not merely a moral precept. (Read my precepts for living)

Please remember that there are hundreds of movements and that it has not been my intention to summarize them all, or to level any form of criticism at one of them. Indicating the psychological mechanisms operative in some, or all of them, has been my main theme.

On a separate page I have gone into the mysterious presence-phenomenon arising between people who meet in harmony.

In conclusion one may take heed of Krishnamurti's words in 1929 when he refused to become a 'World Teacher' of an organisation set up for him:
I maintain that Truth is a pathless land, and you cannot approach it by any path whatsoever, by any religion, by any sect. Truth being limitless, unconditioned, cannot be organised, nor should any organisation be formed to lead or coerce people along any particular path. If you first understand that, then you will see how impossible it is to organize a belief. A belief is purely an individual matter, and you cannot and must not organize it. If you do, it becomes dead, crystallised; it becomes a creed, a sect, a religion, to be imposed on others.

© Michael Rogge, 2011




NEWS CONTENTS

Old News ;-)

[May 19, 2017] IT ops doesn't matter. Really by Dale Vile

Notable quotes:
"... All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'. ..."
"... This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do. ..."
"... And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective. ..."
"... There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term. ..."
Dec 19, 2016 | theregister.co.uk

Get real – it's not all about developers and DevOps

Listen to some DevOps evangelists talk, and you would get the impression that IT operations teams exist only to serve the needs of developers. Don't get me wrong, software development is a good competence to have in-house if your organisation depends on custom applications and services to differentiate its business.

As an ex-developer, I appreciate the value of being able to deliver something tailored to a specific need, even if it does pain me to see the shortcuts too often taken nowadays due to ignorance of some of the old disciplines, or an obsession with time-to-market above all else.

But before this degenerates into an 'old guy' rant about 'youngsters today', let's get back to the point that I really want to make.

All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'.

This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do.

This becomes obvious when you recognize how much stuff runs in an Enterprise IT landscape - software packages enabling core business processes, messaging, collaboration and workflow platforms keeping information flowing, analytics environments generating critical business insights, and desktop and mobile estates serving end user access needs - to name but a few.

Vital operations

There's then everything required to deal with security, data protection, compliance and other aspects of risk. Apart from the odd bit of integration and tailoring work - the need for which is diminishing with modern 'soft-coded', connector-driven solutions - very little of all this has anything to do with development and developers.

A big part of the rationale for modernising your application landscape and migrating to the latest flexible and open software packages and platforms is to eradicate the need for coding wherever you can. Code is expensive to build and maintain, and the same can often be achieved today through software switches, policy-driven workflow, drag-and-drop interface design, and so on. Sensible IT teams only code when they absolutely have to.

And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective.

There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term.

Against this background, an 'appropriate' level of custom development and the selective use of cloud services will be the way forward for most organisations, all underpinned by a well-run data centre environment acting as the hub for hybrid delivery. This is the approach that tends to be taken by the most successful enterprise IT teams, and the element that makes particularly high achievers stand out is agile and effective IT operations.

This isn't just to support any DevOps agenda you might have; it is demonstrably a key enabler across the board. Of course if you work in operations, you will already intuitively know all this. But if you want some ammunition to spell it out to others who need enlightenment, take a look at our research report entitled IT Ops and a Digital Business Enabler; more than just keeping the lights on. This is based on input from 400 senior European IT professionals. ®

Paul Smith
I think this is one fad that has run its course. If nothing else, the one thing that cloud has brought to the software world is the separation of software from the environment it runs in, and since the Ops side of DevOps is all about the integration of the platform and software, what you end up with in a cloudy world is a lot of people looking for a new job.
Anonymous Coward

For decades developers have been ignored by infrastructure vendors because the decision makers buying infrastructure sit in the infrastructure teams. Now, with the cloud etc., vendors realize they will lose supporters within these teams.

So instead - infrastructure vendors target developers to become their next fanboys.

E.g.: Dear developer, you won't need to speak to your infrastructure admins anymore to set up a development environment. Now you can automate and orchestrate the provisioning of your containerized development environment at the push of a button. Blah blah blah, but you have to buy our storage.

I remember the days when every DBA wanted RAID10 just because that's what the whitepaper recommended. By that time storage technology had long moved on, but the DBA still talked about Full Stripe Writes.

Now with DevOps you'll have Developers influencing infrastructure decisions, because they just learned about snapshots. And yes - it has to be all flash - and designed from the ground up by millennials that eat avocado.

John 104
Re: DevOps was never supposed to replace Operations

Yes, DevOps isn't about replacing Ops. But try telling that to the powers that be. It is sold and seen as a cost-cutting measure.

As for devs learning Ops and vice versa, there are very few on either side who really understand what it takes to do the other's job. I have a very high regard for Devs, but when it comes to infra, they are, as a whole, very incompetent. Just like I'm incompetent in Dev. Can't have one without the other. I feel that in time the pendulum will swing away from cloud as execs and accountants realize how it isn't really saving any money.

The real question is: Will there be any qualified operations engineers available, or will they all have retired or found work elsewhere? It isn't easy to be an ops engineer; it takes a lot of experience to get there, and qualified candidates are hard to come by. Let's face it, in today's world, it's a dying breed.

John 104
Very Nice

Nice of you to point out what those of us in Ops have known all along. I'm afraid it will fall on deaf ears, though, until the executives who constantly fall for the new shiny are made to actually examine business needs and processes and make business decisions based on them.

Our laughable move to cloud here involved migrating off of on-prem Exchange to O365. The idea was to free up our operations team to allow us to do more in-house projects. Funny thing is, it takes more management of the service than we ever did on premises. True, we aren't maintaining the Exchange infra, but now we have SQL servers, DCs, ADFS, etc., to maintain in the MS cloud just to allow authentication to use the product. And because mail and messaging is business critical, we have to have geographically disparate instances of both. And the cost isn't pretty. Yay cloud.

[May 17, 2017] Talk of tech innovation is bullshit. Shut up and get the work done – says Linus Torvalds

May 17, 2017 | theregister.co.uk

Linus Torvalds believes the technology industry's celebration of innovation is smug, self-congratulatory, and self-serving. The term of art he used was more blunt: "The innovation the industry talks about so much is bullshit," he said. "Anybody can innovate. Don't do this big 'think different'... screw that. It's meaningless."

In a deferential interview at the Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux kernel and his attitude toward work.

"All that hype is not where the real work is," said Torvalds. "The real work is in the details."

Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration, and one per cent innovation.

As the creator and benevolent dictator of the open-source Linux kernel, not to mention the inventor of the Git distributed version control system, Torvalds has demonstrated that his approach produces results. It's difficult to overstate the impact that Linux has had on the technology industry. Linux is the dominant operating system for servers. Almost all high-performance computing runs on Linux. And the majority of mobile devices and embedded devices rely on Linux under the hood.

The Linux kernel is perhaps the most successful collaborative technology project of the PC era. Kernel contributors, totaling more than 13,500 since 2005, are adding about 10,000 lines of code, removing 8,000, and modifying between 1,500 and 1,800 daily, according to Zemlin. And this has been going on – though not at the current pace – for more than two and a half decades.

"We've been doing this for 25 years and one of the constant issues we've had is people stepping on each other's toes," said Torvalds. "So for all of that history what we've done is organize the code, organize the flow of code, [and] organize our maintainership so the pain point – which is people disagreeing about a piece of code – basically goes away."

The project is structured so people can work independently, Torvalds explained. "We've been able to really modularize the code and development model so we can do a lot in parallel," he said.

Technology plays an obvious role but process is at least as important, according to Torvalds.

"It's a social project," said Torvalds. "It's about technology and the technology is what makes people able to agree on issues, because ... there's usually a fairly clear right and wrong."

But now that Torvalds isn't personally reviewing every change as he did 20 years ago, he relies on a social network of contributors. "It's the social network and the trust," he said. "...and we have a very strong network. That's why we can have a thousand people involved in every release."

The emphasis on trust explains the difficulty of becoming involved in kernel development, because people can't sign on, submit code, and disappear. "You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust," said Torvalds.

Ten years ago, Torvalds said he told other kernel contributors that he wanted to have an eight-week release schedule, instead of a release cycle that could drag on for years. The kernel developers managed to reduce their release cycle to around two and a half months. And since then, development has continued without much fuss.

"It's almost boring how well our process works," Torvalds said. "All the really stressful times for me have been about process. They haven't been about code. When code doesn't work, that can actually be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems ... That's when people start getting really angry at each other." ®

[May 17, 2017] So your client's under-spent on IT for decades and lives in fear of an audit

Notable quotes:
"... Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. ..."
May 17, 2017 | theregister.co.uk
12 May 2017 at 14:56, Trevor Pott

Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous "agility" to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to "cover your ass", and have discovered it's not quite so difficult as we might think.

... ... ...

None of this is particularly surprising. When you have an environment where each workload is a pet, change is slow, difficult, and requires a lot of testing. Reverting changes is equally tedious, and so a lot of planning goes into making sure that any given change won't cascade and cause knock-on effects elsewhere.

In the real world this is really the result of two unfortunate aspects of human nature. First: everyone hates doing documentation, so it's highly unlikely that in an unstructured environment every change from the last refresh was documented. The second driver of chaos and problems is that there are few things more permanent than a temporary fix.

When you don't have the budget for the right hardware, software or services you make do. When something doesn't work you "innovate" a solution. When that breaks something, you patch it. You move from one problem to the next, and if you're not careful, you end up with something so fragile that if you breathe on it, it falls over. At this point, you burn it all down and restart from scratch.

This approach to IT is fine - if you have 5, 10 or even 50 workloads. A single techie can reasonably be expected to keep that all in their head, know their network and solve any problems they encounter. Unfortunately, 50 workloads is today restricted to only the smallest of shops. Everyone else is juggling too many workloads to be playing the pets game any more.

Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. Microsoft's Group Policy can be considered a really primitive version of this, with System Center being a more powerful but miserable-to-use example. The modern, friendly tools are Puppet, Chef, Saltstack, Ansible and the like.

Once you have desired state configs in place you're no longer beating individual workloads into shape, or checking them manually for deviation from design. If it all does what it says on the tin, configurations are applied and errors are thrown if they can't be. Usually there is some form of analysis software to determine how much of what is out of compliance. This is a big step forward.
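
To make the desired-state pattern concrete, here is a minimal sketch in Python of the agent loop described above: declared state comes from somewhere central, the agent compares it with actual state, converges where it can, and reports what it cannot fix. This is an illustration of the general pattern only, not any particular vendor's agent; the config dictionary is invented for this sketch, and it assumes a Debian-style system with systemd.

    #!/usr/bin/env python3
    # Toy desired-state agent. Illustrates the Puppet/Chef/Ansible-style
    # pattern only; the DESIRED_STATE config below is invented.
    import subprocess

    # Declared ("desired") state; a real agent would fetch this from a central server.
    DESIRED_STATE = {
        "packages": ["openssh-server", "chrony"],  # must be installed
        "services": ["ssh", "chrony"],             # must be running
    }

    def package_installed(name):
        # Debian/Ubuntu check; other platforms need their own provider.
        return subprocess.run(["dpkg", "-s", name],
                              capture_output=True).returncode == 0

    def service_running(name):
        return subprocess.run(
            ["systemctl", "is-active", "--quiet", name]).returncode == 0

    def converge():
        """Apply desired state; return a list of compliance errors."""
        errors = []
        for pkg in DESIRED_STATE["packages"]:
            if not package_installed(pkg):
                if subprocess.run(["apt-get", "install", "-y", pkg]).returncode != 0:
                    errors.append("package %s: install failed" % pkg)
        for svc in DESIRED_STATE["services"]:
            if not service_running(svc):
                if subprocess.run(["systemctl", "start", svc]).returncode != 0:
                    errors.append("service %s: could not start" % svc)
        return errors

    if __name__ == "__main__":
        for problem in converge():
            print("OUT OF COMPLIANCE:", problem)

The point of the pattern is the loop, not the individual providers: run it on a schedule and every managed host keeps pulling itself back toward the declared state instead of being hand-beaten into shape.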

... ... ...

This article is sponsored by HPE.

[May 16, 2017] The Technocult - Soleil Wiki (Fandom powered by Wikia)

May 16, 2017 | soleil.wikia.com
The Technocult, also known as the Machine Cult, is the semi-official name given by The Church of the Crossed Heart to followers of the Mechanicum faith, who supply and maintain virtually all of the church's technology, engineering and industry.

Although they serve with the Church of the Crossed Heart, they have their own form of worship that differs substantially in theology and ritual from that of The Twelve Angels. Instead the Technocult worships a deity they call the Machine God or Omnissiah. The Technocult believes that knowledge is divine and comes only from the Omnissiah, thus making any object that demonstrates the application of knowledge (i.e. machinery) or contains it (books) holy in the eyes/optical implants of the Technocult. The Technocult regards organic flesh as weak and imperfect, with the Rot being viewed as a divine message from the Omnissiah demonstrating its weakness, thus making its removal and replacement with mechanical, bionic parts a sacred process that brings them closer to their god; many of its older members have very little of their original bodies remaining.

The date of the cult's formation is unknown, or a closely guarded secret...

[May 16, 2017] 10 Things I Hate About Agile Development!

May 16, 2017 | www.allaboutagile.com

1. Saying you're doing Agile just cos you're doing daily stand-ups. You're not doing agile. There is so much more to agile practices than this! Yet I'm surprised how often I've heard that story. It really is remarkable.

... ... ....

3. Thinking that agile is a silver bullet and will solve all your problems. That's so naive; of course it won't! Humans and software are a complex mix with any methodology, let alone with an added dose of organisational complexity. Agile development will probably help with many things, but it still requires a great deal of skill and there is no magic button.

... ... ...

8. People who use agile as an excuse for having no process or producing no documentation. If documents are required or useful, there's no reason why an agile development team shouldn't produce them. Just not all up-front; do it as required to support each feature or iteration. JFDI (Just F'ing Do It) is not agile!

David, 23 February 2010 at 1:21 am

So agree on number 1. Following "Certified" Scrum Master training (prior to the exam requirement), a manager I know now calls every regular status meeting a "scrum", regardless of project or methodology. Somehow the team is more agile as a result.

Ironically he pulled up another staff member for "incorrectly" using the term retrospective.

Andy Till, 23 February 2010 at 9:28 am

I can think of far worse, how about pairing with the guy in the office who is incapable of compromise?

Steve Watson, 13 May 2010 at 10:06 am

Kelly

Good list!

I like number 9, as I find with testing that people think they no longer need to write proper test cases and scripts - a list of confirmations on a user story will do. Well, if it's a simple change I guess you can dispense with test scripts, but if it's something more complex then there is no reason NOT to write scripts. If you have a reasonably large team of people who could execute the tests, they can follow the test steps and validate against the expected results. It also means that you can sensibly lump together test cases and cover them with one test.

If you don't think about how you will execute them and just tackle them one by one off the confirmations list, you miss the opportunity to run one test and cover many separate cases, saving time.

I always find test scripts useful if someone different re-runs a test, as they then follow the same process as before. This is why we automate regression so the tests are executed the same each time.
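
To illustrate the commenter's point about repeatable scripts (my sketch, not theirs): an automated regression test records the steps and expected results once, so whoever re-runs it executes exactly the same process every time. The discount() function and its expected values below are hypothetical stand-ins for whatever behaviour a team needs to keep stable between releases.

    # Minimal automated regression test using pytest.
    # discount() is a hypothetical function under test; the parametrized cases
    # play the role of a written test script with recorded expected results.
    import pytest

    def discount(price, customer_type):
        """Toy tiered-discount function standing in for real application code."""
        rates = {"standard": 0.00, "loyal": 0.05, "vip": 0.10}
        return round(price * (1 - rates[customer_type]), 2)

    @pytest.mark.parametrize("price,ctype,expected", [
        (100.0, "standard", 100.0),
        (100.0, "loyal", 95.0),
        (100.0, "vip", 90.0),
    ])
    def test_discount_regression(price, ctype, expected):
        # Each case is one "step + expected result" from the test script,
        # executed identically on every run.
        assert discount(price, ctype) == expected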

John Quincy, 24 October 2011 at 12:02 am

I am not a fan of agile. Unless you have a small group of developers who are in perfect sync with each other at all times, this "one size fits all" methodology is destructive and downright dangerous. I have personally witnessed a very good company go out of business this year because they transformed their development shop from a home-grown iterative methodology to SCRUM. The team was required to abide by the SCRUM rules 100%. They could not keep up with customer requirements and produced bug-filled releases that were always late. These developers went from fun, friendly, happy people (pre-SCRUM) [who NEVER missed a date] to bitter, sarcastic, hard-to-be-around 'employees'. When the writing was on the wall a couple of months back, the good ones got the hell out of there, and the company could not recover.

Some day, I'm convinced that Beck through Thomas will proclaim that the Agile Manifesto was all a big practical joke that got out of control.

This video pretty much lays out the one and only reason why management wants to implement Agile:

http://www.youtube.com/watch?v=nvks70PD0Rs

grumpasaurus, 9 February 2014 at 4:30 pm

It's a cycle of violence when a project claims to be Agile just because of standups and iterations, and doesn't think about resolving the core challenges it had to begin with. People are left still battling those challenges and then say that Agile sucks.

[May 15, 2017] Wall Street Journal: Enterprises Are Not Ready for DevOps, but May Not Survive Without It by Abel Avram

Notable quotes:
"... while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise. ..."
"... The tools needed to implement a DevOps culture are lacking. While some of the tools can be provided by vendors and others can be created within the enterprise, a process which takes a long period of time, "there is a marathon of organizational change and restructuring that must occur before such tools could ever be bought or built." ..."
Jun 06, 2014 | www.infoq.com
Rachel Shannon-Solomon suggests that most enterprises are not ready for DevOps, while Gene Kim says that they must make themselves ready if they want to survive.

Rachel Shannon-Solomon, a venture associate at Work-Bench, has recently written a blog post for The Wall Street Journal entitled DevOps Is Great for Startups, but for Enterprises It Won't Work - Yet, arguing that while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise.

While acknowledging that large companies such as Google and Facebook benefit from implementing DevOps, and that "there is no lack of appetite to experiment with DevOps practices" within "Fortune 500s and specifically financial services firms", Shannon-Solomon remarks that "there are few true change agents within enterprise IT willing to affect DevOps implementations."

She has come to this conclusion based on "conversations with startup founders, technology incumbents offering DevOps solutions, and technologists within large enterprises."

Shannon-Solomon brings four arguments to support her position:

... ... ...

Shannon-Solomon ends her post wondering "how long will it be until enterprises are forced to accept that they must accelerate their experiments with DevOps" and hoping that "more individual change agents within large organizations may emerge" in the future.

[May 15, 2017] Why Your Users Hate Agile

No methodology can substitute for good engineers who actually talk to and work with each other. Good engineers can benefit from a better software development methodology, but even the best software development methodology is powerless to turn mediocre developers into stars.
Notable quotes:
"... disorganized and never-ending ..."
"... Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering.... ..."
"... As TFA points out, that always works fine when your requirements are *all* known an are completely static. That rarely happens in most fields. ..."
"... The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again. ..."
"... If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. ..."
"... It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible. ..."
"... The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done. ..."
"... On a sufficiently large project, some kind of upfront design is necessary. ..."
"... If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. ..."
"... there is no substitute for good engineers who actually talk to and work with each other. ..."
"... If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish ..."
"... The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, ..."
"... In defense everything has to meet spec, but it doesn't have to work. ..."
"... There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. ..."
"... I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them). ..."
"... Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards. ..."
Jun 05, 2013 | Slashdot

"What developers see as iterative and flexible, users see as disorganized and never-ending.

This article discusses how some experienced developers have changed that perception. '... She's been frustrated by her Agile experiences - and so have her clients.

"There is no process. Things fly all directions, and despite SVN [version control] developers overwrite each other and then have to have meetings to discuss why things were changed. Too many people are involved, and, again, I repeat, there is no process.' The premise here is not that Agile sucks - quite to the contrary - but that developers have to understand how Agile processes can make users anxious, and learn to respond to those fears. Not all those answers are foolproof.

For example: 'Detailed designs and planning done prior to a project seems to provide a "safety net" to business sponsors, says Semeniuk. "By providing a Big Design Up Front you are pacifying this request by giving them a best guess based on what you know at that time - which is at best partial or incorrect in the first place." The danger, he cautions, is when Big Design becomes Big Commitment - as sometimes business sponsors see this plan as something that needs to be tracked against.

"The big concern with doing a Big Design up front is when it sets a rigid expectation that must be met, regardless of the changes and knowledge discovered along the way," says Semeniuk.' How do you respond to user anxiety from Agile processes?"

Shinobi

Agile summed up (Score:5, Funny)

Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering....

Nerdfest

Re: doesn't work

As TFA points out, that always works fine when your requirements are *all* known and are completely static. That rarely happens in most fields.

Even in the ones where it does it's usually just management having the balls to say "No, you can give us the next bunch of additions and changes when this is delivered, we agreed on that". It frequently ends up delivering something less than useful.

MichaelSmith

Re: doesn't work (Score:5, Insightful)

The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again.

ArsonSmith

Re: doesn't work (Score:4, Insightful)

...but they can be trusted to say what is most important to them at the time.

No they can't. If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. As with the (I guess made-up) quote attributed to Henry Ford, "If I listened to my customers I'd have been trying to make faster horses." Whether he said it or not, the statement is true. Customers know what they have and just want it to be faster/better/etc.; you need to find out what they really need.

AuMatar

Re: doesn't work (Score:5, Insightful)

It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible.

ebno-10db

Re: doesn't work (Score:5, Interesting)

"Proper software engineering" doesn't work.

You're right, but you're going to the other extreme. The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done.

On a sufficiently large project, some kind of upfront design is necessary. Spending too much time on it or going into too much detail is a waste though. Once you start to implement things, you'll see what was overlooked or why some things won't work as planned. If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. Don't get bent out of shape though when someone has a good reason for occasionally breaking that pattern or, as you say, you'll wind up with 500 SLOC's to add 2+2 in the approved manner.

Lastly, I agree that there is no substitute for good engineers who actually talk to and work with each other. Also don't require that every 2 bit decision they make amongst themselves has to be cleared, or even communicated, to the highest levels. If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish. Without good people you'll never get anything decent done, but with good people you still need some kind of organization.

The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, without regard to how important it is. Admittedly bad vendors will try and screw their customers with "that doesn't matter" to excuse every screw-up and bit of laziness. For that reason I much prefer working on in-house projects, where "sure we could do exactly what we planned" gets balanced with the cost and other tradeoffs.

The worst example of those problems is defense projects. As someone I used to work with said: In defense everything has to meet spec, but it doesn't have to work. In the commercial world specs are flexible, but it has to work.

If you've ever worked in that atmosphere you'll understand why every defense project costs a trillion dollars. There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. I'm not talking about meeting difficult requirements if they serve a purpose (that's what you're paid for) but being unwilling to compromise on any spec that somebody at the beginning of the project pulled out of their posterior and obviously doesn't need to be so stringent. An elephant is a mouse built to government specifications.

Ok, you can get such things changed, but it requires 10 hours from program managers for every hour of engineering. Conversely, don't even think about offering a feature or capability that will be useful and easy to implement, but is not in the spec. They'll just start writing additional specs to define it and screw you by insisting you meet those.

As you might imagine, I'm very happy to be back in the commercial world.

Anonymous Coward

Re: doesn't work (Score:2, Interesting)

You've fallen into the trap of using their terminology. As soon as 'the problem' is defined in terms of 'upfront design', you've already lost half the ideological battle.

'The problem' (with methodology) is that people want to avoid the difficult work of thinking hard about the business/customer's problem and coming up with solutions that meet all their needs. But there isn't a substitute for thinking hard about the problem and almost certainly never will be.

The earlier you do that hard thinking about the customer's problems you are trying to solve, the cheaper, faster and higher quality the result will be. Cheaper? Yes, because bug-fixing done later in the project is a lot more expensive (as numerous software engineering studies have shown). Faster? Yes, because there's less rework. (Also, since there is usually a time = money equivalency, you can't have it done cheap unless it is also done fast.) Higher quality? Yes, because you don't just randomly stumble across quality. Good design trumps bad design every single time.

... ... ...

ebno-10db

Re: doesn't work (Score:4, Interesting)

Until the thing is built or the software is shipped there are many options and care should be taken that artificial administrative constraints don't remove too many of them.

Exactly, and as someone who does both hardware and software I can tell you that that's better understood by Whoever Controls The Great Spec in hardware than in software. Hardware is understood to have physical constraints, so not every change is seen as the result of a screw-up. It's a mentality.

I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them).

ebno-10db

Re: doesn't work (Score:2)

http://www.fastcompany.com/28121/they-write-right-stuff

This is my evidence that "proper software engineering" *can* work. The fact that most businesses (and their customers) are willing to save money by accepting less from their software is not the fault of software engineering. We could and did build buildings much faster than we do today, if you are willing to make more mistakes and pay more in human lives. If established industries and their customers began demanding software at that higher standard and were willing to pay for it like it was real engineering, then maybe it would happen more often.

Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards.

donscarletti

Re: doesn't work (Score:3)

260 people maintaining 420,000 lines of code, written to precise externally provided specifications that change once every few years.

This is fine for NASA, but if you want something that does roughly what you need before your competitors come up with something better, you'd better find some better programmers.

[May 15, 2017] DevOps Fact or Fiction

May 15, 2017 | blog.appdynamics.com

In light of all the hype, we have created a DevOps parody series - DevOps: Fact or Fiction. For those of you who did not see it, in October we created an entirely separate blog (inspired by this) - however, we decided that it is relevant enough to transform into a series on the AppDynamics Blog. The series will point out the good, the bad, and the funny about IT and DevOps. Don't take anything too seriously - it's nearly 100% stereotypes :). Stay tuned for more DevOps: Fact or Fiction to come. Here we go.

[May 15, 2017] How DevOps is Killing the Developer by Jeff Knupp

Notable quotes:
"... Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction. ..."
"... An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job. ..."
"... Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist. ..."
"... you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceeded it). The problem is purists and ideological zealotry not the particular brand of religion in question. Insistence on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not. ..."
"... There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles require to do it well at an Enterprise/World Class level. ..."
"... I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual levels is not sustainable, as you point out. ..."
"... DevOps roles are strictly automation focused, at least according to all job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, know how to set up monitoring, logging and all that stuff usual sysadmin used to do but in the modern world. In fact I used to apply for DevOps roles but quickly changed my mind as it turned out no companies need a person wearing many hats, it has absolutely nothing to do with creating software. Am I wrong? ..."
Apr 15, 2014 | jeffknupp.com
How 'DevOps' is Killing the Developer

There are two recent trends I really hate: DevOps and the notion of the "full-stack" developer. The DevOps movement is so popular that I may as well say I hate the x86 architecture or monolithic kernels. But it's true: I can't stand it. The underlying cause of my pain? This fact: not every company is a start-up, though it appears that every company must act as though they were.

DevOps

"DevOps" is meant to denote a close collaboration and cross-pollination between what were previously purely development roles, purely operations roles, and purely QA roles. Because software needs to be released at an ever-increasing rate, the old "waterfall" develop-test-release cycle is seen as broken. Developers must also take responsibility for the quality of the testing and release environments.

The increasing scope of responsibility of the "developer" (whether or not that term is even appropriate anymore is debatable) has given rise to a chimera-like job candidate: the "full-stack" developer. Such a developer is capable of doing the job of developer, QA team member, operations analyst, sysadmin, and DBA. Before you accuse me of hyperbole, go back and read that list again. Is there any role in the list whose duties you wouldn't expect a "full-stack" developer to be well versed in?

Where did these concepts come from? Start-ups, of course (and the Agile methodology). Start-ups are a peculiar beast and need to function in a very lean way to survive their first few years. I don't deny this. Unfortunately, we've turned the multiple technical roles that engineers at start-ups were forced to play due to lack of resources into a set of minimum qualifications for the role of "developer".

Many Hats

Imagine you're at a start-up with a development team of seven. You're one year into development of a web application that X's all the Y's, and things are going well, though it's always a frantic scramble to keep everything going. If there's a particularly nasty issue that seems to require deep database knowledge, you don't have the liberty of saying "that's not my specialty" and handing it off to a DBA team to investigate. Due to constrained resources, you're forced to take on the role of DBA and fix the issue yourself.

Now expand that scenario across all the roles listed earlier. At any one time, a developer at a start-up may be acting as a developer, QA tester, deployment/operations analyst, sysadmin, or DBA. That's just the nature of the business, and some people thrive in that type of environment. Somewhere along the way, however, we tricked ourselves into thinking that because, at any one time, a start-up developer had to take on different roles he or she should actually be all those things at once.

If such people even existed, "full-stack" developers still wouldn't be used as they should be. Rather than temporarily taking on a single role for a short period of time, then transitioning into the next role, they are meant to be performing all the roles, all the time. And here's what really sucks: most good developers can almost pull this off.

The Totem Pole

Good developers are smart people. I know I'm going to get a ton of hate mail, but there is a hierarchy of usefulness of technology roles in an organization. Developer is at the top, followed by sysadmin and DBA. QA teams, "operations" people, release coordinators and the like are at the bottom of the totem pole. Why is it arranged like this?

Because each role can do the job of all roles below it if necessary.

Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction.

A QA person can't just do the job of a developer in a pinch, nor can a build-engineer do the job of a DBA. They never acquired the specialized knowledge required to perform the role. And that's fine. Like it or not, there are hierarchies in every organization, and people have different skill sets and levels of ability. However, when you make developers take on other roles, you don't have anyone to take on the role of development!

An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job.

Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist.

And this is the crux of the issue. All of the positions previously held by people of various levels of ability are made redundant by the "full-stack" engineer. Large companies love this, as it means they can hire far fewer people to do the same amount of work. In the process, though, actual development becomes a vanishingly small part of a developer's job. This is why we see so many developers who can't pass FizzBuzz: they never really had to write any code. All too common a question now: can you imagine interviewing a chef and asking him what portion of the day he actually devotes to cooking?
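
For readers who haven't met it, FizzBuzz is the canonical screening exercise the author refers to: print the numbers 1 to 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python solution:

    # FizzBuzz: the classic interview screening exercise mentioned above.
    for n in range(1, 101):
        if n % 15 == 0:       # multiple of both 3 and 5
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)

The point of the exercise is not the code itself; it is that a working developer should be able to produce it in a couple of minutes without help.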

Jack of All Trades, Master of None

If you are a developer of moderately sized software, you need a deployment system in place. Quick, what are the benefits and drawbacks of the following such systems: Puppet, Chef, Salt, Ansible, Vagrant, Docker. Now implement your deployment solution! Did you even realize which systems had no business being in that list?

We specialize for a reason: human beings are only capable of retaining so much knowledge. Task-switching is cognitively expensive. Forcing developers to take on additional roles traditionally performed by specialists means that they:

... ... ...

What's more, companies that force developers to take on "full-stack" responsibilities end up paying far more than the market average for most of those tasks. If a developer makes 100K a year, you can pay four developers 100K per year each to do 50% development and 50% release management on a single two-person task. Or simply hire a release manager at, say, 75K and two developers who develop full-time. And notice the time wasted by developers who are part-time release managers but don't always have releases to manage. The arithmetic is spelled out below.
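
A worked version of that comparison, using the article's example salaries (a rough sketch; the numbers are the author's illustrative figures, not market data):

    # Staffing cost comparison from the paragraph above, using the article's numbers.
    dev_salary, rm_salary = 100_000, 75_000

    # Option A: four "full-stack" developers, each 50% development / 50% release mgmt.
    option_a_cost = 4 * dev_salary              # 400,000 -> 2.0 FTE dev + 2.0 FTE release mgmt

    # Option B: two full-time developers plus one dedicated release manager.
    option_b_cost = 2 * dev_salary + rm_salary  # 275,000 -> 2.0 FTE dev + 1.0 FTE release mgmt

    # Same development capacity either way; Option A pays a 125,000 premium,
    # largely for release-management capacity that sits idle between releases.
    print(option_a_cost - option_b_cost)  # 125000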

Don't Kill the Developer

The effect of all of this is to destroy the role of "developer" and replace it with a sort of "technology utility-player". Every developer I know got into programming because they actually enjoyed doing it (at one point). You do a disservice to everyone involved when you force your brightest people to take on additional roles.

Not every company is a start-up. Start-ups don't make developers wear multiple hats by choice; they do so out of necessity. Your company likely has enough resource constraints without you inventing some. Please don't confuse "being lean" with "running with the fewest possible employees". And for God's sake, let developers write code!


Enno 2 years ago
Some background... I started life as a dev (30 years ago) and have mostly been doing sysadmin and project tech lead sorts of work for the last 15. I've always assumed the DevOps movement was resulting in sub-par development and sub-par sysadmin/ops precisely because people were timesharing their concerns.

But what it does bring to the party is a greater level of awareness of the other guy's problems. There's nothing quite like being rung out of bed at 3am to motivate a developer to improve his product's logging to make supporting it easier. Similarly, the admin exposed to the vagaries of promoting things into production in a supportable, repeatable, deterministic manner quickly learns to appreciate the issues there. So DevOps has served a purpose and has offered benefits to the organisations that signed on for it.

But, you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceded it). The problem is purists and ideological zealotry, not the particular brand of religion in question. Insistence on adherence to dogma is the problem, as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not.

Zakaria ANBARI -> Enno 2 years ago
Totally agree with you!
DevOps Reaper 2 years ago
I'm very disappointed to see this kind of rubbish. It's this type of egocentric thinking, the generalization that the developer is an omniscient deity requiring worship and pampering, that prevents DevOps from being successful. Based on the tone and your perspective, it sounds like you've been doing DevOps wrong.

A developer role alone is not the linchpin that keeps DevOps humming - instead it's the respect that each team member holds for each discipline and each team member's area of expertise, the willingness of the entire team to own the product, feature delivery and operational stability end to end, to leverage each others skills and abilities, to not blame Dev or Ops or QA for failure, and to share knowledge.

There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles requires to do it well at an Enterprise/World Class level.

If you're a developer doing QA and operations, you're doing it because you have to, but there should be no illusion that you're as good in alternate roles as someone trained and experienced in those disciplines. To pretend otherwise is a disservice to yourself and the organization that signs your paycheck. If you're in this situation and you'd prefer making a difference rather than spewing complaints, I would recommend talking to your manager and above about changing their skewed vision of DevOps. If they aren't open to communication, collaboration, experimentation and continual improvement, then their DevOps vision is dysfunctional and they're not supporting DevOps from the top down. Saying you're DevOps and not doing it is *almost* more egregious than saying the developer is at the top of a Totem Pole of existence.

spunky brewster -> DevOps Reaper a month ago
He prefaced it with 'crybabies please ignore'. It's his opinion, one that everyone but the lower-totem-pole people agrees with, so... agree to disagree. I also don't think being at the bottom of the totem pole is a big f'in deal. If you're getting paid, embrace it! So many other ways to enjoy life! The top-dog people have all the pressure and die young! 99% of the people on earth don't know the difference between one nerd and another. And other nerds are always going to be egomaniacs who will find some way to justify their own superiority no matter what your achievements. So this kind of posturing is a waste of time.
Pramod 2 years ago
Amen to that!!
carlivar 2 years ago
I think there's a problem with your definition of DevOps. It doesn't mean developers have to be "full-stack" or do ops stuff. And it doesn't mean "act like a startup." It simply means, at its core, that developers and operations work well together and do not have any communication barriers. This is why I hate DevOps as a title or department: DevOps is a culture.

Let's take your DentOps example. The dentist has 3 support staff. What if they rarely spoke to the dentist? What if they were on different floors of the building? What if the dentist wrote an email about how teeth should be cleaned and wasn't available to answer questions or willing to consider feedback? What if once in a while the dentist needed to understand enough about the basics of appointment scheduling to point out problems with the system? Maybe appointments are being scheduled too close together. Would the patients get backed up throughout the day because that's the secretary's problem? Of course not. Now we'd be getting into a more accurate analogy to DevOps. If anything a dentist's office is ALREADY "DentOps" and the whole point of DevOps is to make the dev/ops interaction work in a logical culture that other industries (like dentists) already use!

StillMan -> carlivar 2 years ago
I would tend to agree with some of that. Being able to troubleshoot network issues using monitoring tools like Fiddler is a good thing to be aware of. I can also see a lot of companies using it as a way to make one person do everything. Moreover, there are probably folks out there who perpetuate that behavior by taking up the machismo argument.

The machismo argument says: if I can do it, you should be able to do it too, or else you're not as good a developer as I am. I have never heard anyone outright claim this, but I've seen the attitude time and time again from ambitious analysts looking to get a leg up, a pay raise, and a way to impose their values on the rest of the team. One of the first things you're taught as a dev is that you can't hope to know it all.

Your responsibility first and foremost as a developer is the stability and reliability of your code and the services that you provide. In some industries this is literally a matter of life and death (computers in your car, mission-critical medical systems). It doesn't work that way everyplace.

spunky brewster -> carlivar a month ago

I wouldn't want to pay a receptionist 200k a year like a dentist, though. Learn to hire better receptionists. Even a moderately charming woman can create more customer loyalty, more cheaply, than the best dentist in the world. I want my dentist to keep quiet and have a steady hand. I want my receptionist to engage me and acknowledge my existence.

I want my secretary to be a multitasking master. I want my dentist not to multitask at all - OUCH!

Ole Hauris Sørensen 2 years ago
Good points, I tend to agree. I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual level is not sustainable, as you point out.

The full-stack DevOps team will have team members with primary skills in one of the traditional specialties, and they will, over time, develop decent secondary skills. But the value is not in people constantly context-switching; that actually kills efficiency. The value is in developers understanding and developing an open relationship with testing and operations, and vice versa. And this cooperation is inhibited by putting people in separate teams with conflicting goals. DevOps in practice is not a despecialization. It's bringing the specialists together.

ceposta Ole Hauris Sørensen 2 years ago +1.

The more isolated or silo'd developers become, the less they realize what constitutes delivering software, and the more problems accumulate in the IT process of test/build/release/scale/monitor, etc. Writing code is a small fraction of that delivery process. I've written about the success of devops and microservices, which touches on this stuff, because they're highly related. The future success of devops/microservices/cloud/etc isn't related to technology so much as it is to culture: http://blog.christianposta....

Thanks for the post!

Julio 2 years ago

Interesting points were raised. Solid arguments. I identified with this text and my current difficulties as a developer.

Cody 2 years ago
Great article, and you're definitely describing one form of dysfunctional organisation where DevOps, Agile, Full Stack, and every other $2 word has been corrupted into a cost-cutting justification: cramming more work onto people who aren't skilled for it, and who end up not having any time to do what they were hired as experts for!

But I'd also agree with other posters that it's a little developer-centric. I'm a terrible programmer and a great DBA. I can tell you most programmers who try to be DBAs are equally terrible. It's definitely not "doing the job of the receptionist" 😄

And we shouldn't forget what DevOps is meant to be about; teams making sure nobody gets called at night to fix each other's messes. That means neither developers with shitty deployments straight to production nor operations letting the disks silently fill because "if it ain't C: it ain't our problem."

Zac Smith 6 months ago
I know of 0 developers that can manage a network of any appreciable scale.

In cloud and large enterprise networks, if there were a totem (which there isn't), using your methodology would place the dev under the network engineer. Their software implements the protocol and configuration intent of the NE. Good thing the whole concept is a pile of rubbish. I think you fell into the trap you called out, which is thinking at limited scale.

spunky brewster Zac Smith a month ago
It's true. We can all create LANs at home, but I wouldn't dare f with a corporate network and risk shutting down Amazon for a day. Which seems to happen quite a bit... maybe they're DEVOPPING a bit too much.
David Rawk Zac Smith 6 months ago
I tend to agree... they are not under, but beside. Both require a heap of skill, and that includes coding... but vastly different code.
Wilmer 9 months ago
Jeff Knupp is at one end of the spectrum. DevOps Reaper is at the other.

Enno is more attuned to what is really going on, so I won't repeat any of those arguments.

However, I will ask you to put me in a box. What am I?

I graduated as a Computer Engineer (a hybrid between Electrical Engineering and Computer Science). I don't say that anymore, as companies have no idea what that means. So I called myself a Digital Electronics and Software Engineer for a while. The all-too-frequent question was: "So what are you, software or hardware?"
I spent my first few years working down from board design, writing VHDL and Verilog, to embedded software in C and C++, then algorithms in optimization with the CUDA framework in C, with C++ wrappers and C# for the logic tier. Then I worked another few years in particle physics with C++ compute engines with x86 assembly declarations for speed and C# for WPF UIs.

After that I went to work for a wind turbine company as a system architect, where it was mostly embedded work: programming ARM Cortex microprocessors, high-power electronics controls, and custom service and diagnostics tools in C#, plus real-time web-based dashboards with Angular, Bootstrap, and the like for a good-looking web app.
Nowadays I'm working with mobile-first web applications that have a massive backend to power them. It is mostly a .NET stack, from Entity Framework to .NET WebAPI to Angular-powered front ends. This company is not a start-up, but it is a small company, so I wear many hats. I introduced the new software life cycle, which includes continuous integration and continuous deployment. Yes, I manage build servers and build tools, I develop, I'm QA, I'm a tester, I'm a DBA, I'm the deployment and configuration manager.

If you are wondering, I have resorted to calling myself a full-stack developer. It has that edgy sound that companies like to hear. I'm still a young developer. I've only been developing for 10 years.

In my team we are all "Jacks of all Trades" and "Masters of Many". We switch tasks and hats because it is fun and keeps everyone from getting bored/stuck. Our process is called "Best practices that work for this team".

So, I think of myself as a software engineer. I think I'm a developer. I think I'm DevOps, I think I'm QA.

ישראל פרוכטר Wilmer 8 months ago
I join you in the lack of a title; we don't need those titles (only when HR people are involved, and then we need to kind of fake our persona anyhow...)
Matt King a year ago
Let's start with this: DevOps didn't come from startups. It came mainly from Boeing and a few other major blue-chip IT shops investing heavily in systems management technology around the turn of the century. The goal at the time was simply to change the ratio of servers to IT support personnel, and to re-think and re-organize development and operations into one organization with one set of common goals. The 'wearing many hats' thing you discuss is a feature of startups, but that feature is independent of siloed or integrated organizations.

I prefer the 'sportzing' analogy of basketball and football. Football has specialist teams that are largely functionally independent because they focus on distinct goals. Basketball has specialist positions, but the whole team is focused on the same goals. I'm not saying one sport is better than the other. I am saying the basketball mentality works better in the IT environment. Delivering the product or service to the customer is the common goal that everyone should be thinking about, along with how the details of their job fit into that overall picture. It sounds to me like you are really saying, "Hey, it's my job and only my job to think about how it all fits together and works."

Secondly, while it is pretty clear that the phrase 'full stack engineer' is about as useful as "Cloud Computing", your perspective that somehow developers are the 'top' of the tree, able to do any job, is very mistaken. There are key contributors from every specialty who have that ability, and more useful names for them are things like "10x" or "T-shaped". Again, you are describing a real situation, but correlating it with unrelated associations. It is just as likely, and just as valuable, to find an information architect who can also code, or a systems admin who can also diagnose database performance, or an electrician who can also hang sheetrock. Those people do fit your analogy of 'being on top', because they are not siloed and stovepiped into just their speciality.

The DevOps mindset fosters this way of thinking, instead of the old and outdated specialist way of thinking you are defending. Is it possible your emotional reaction is a fear-based response to the possibility that your relative value will decrease if others start thinking outside their boxes?

Interesting to note that Agile also started at Boeing, but 10 years earlier. I live in the startup world of Seattle, but I know my history and realize that much of what appears new is actually just 'new to you' (or me), and that most cutting-edge technology and thinking is just combining ideas from other industries in new ways.

BosnianDolphin 2 years ago

Agree on most points, but nobody needs a DBA - unless it is some massive project. DBA people should pick up new skills fast.

Safespace Scooter 2 years ago
The problem is that developers are trained to crank out code and hope that QA teams will find the problems, often without even being sure how to catch the holes themselves. DevOps trains people to think critically and do both. It isn't killing developers; it is making them look like noobs while phasing them out.
strangedays Safespace Scooter a year ago
Yeah, good luck with that attitude. Your company's gonna have a good ol' time looking for and keeping new developer talent. Because, as we all know, smart people love working with dummies. I'd love to see 'your QA' team work on our 'spatial collision algorithm' and make our devs "look like noobs". You sound like most middle-management schmucks.
Manachi 2 years ago
Fantastic article! I was going to start homing in on the points I particularly agree with, but all of it is just spot on. Great post.
Ralli Soph 9 days ago
Funniest article so far on full stack. It's a harsh reality for devs, because we're asked to do everything and know everything, so how can you really believe QA or a DBA can do the job of someone like that? There is a crazy amount of hours a full-stack dev invests in acquiring that kind of knowledge, not to mention some people are also talented at their job. Imagine trying to tell the QA to do that? Maybe for a few hours someone can be a backup just in case something happens, but really it's like replacing the head surgeon.
spunky brewster a month ago
The best skill you can learn in your coding career is your next career. No one wants a 45-year-old coder.

I see so much time wasted learning every new thing when you should just be plugging away to get the job done, bank the $$, and move on. All your accumulated skills will be worthless in a decade or so, and your entire knowledge useless in two decades. My ability to turn a wrench is what's keeping me from the poor house. And I have an engineering degree from UIUC! I also don't mind. Think about a 100-hour week as a plumber with OT in a reasonably priced neighborhood, vs. a coder. Who do you think is making more? Now, I'm not saying you can't survive into your 50's programming, but typically they get retired forcefully, and permanently... by a heart attack!

But rambling aside, the author makes a good point, and I think this is the future of big companies in tech. The current model is driven by temporary factors. Ideally you'd have a specialized workforce. But I think that as a programmer you are in constant fear of becoming obsolete, so you don't want to be pigeon-holed. It's just not mathematically possible to have that 10,000-hour mastery in 50 different areas... unless you are Bill Murray in Groundhog Day.

hrmilo 3 months ago
A developer who sees himself at the top of a pyramid. Not surprising, your myopic and egotistical view. I laugh at people who code a few SELECT statements and think they can fill the DBA role. HA HA HA. God, the arrogance. "Well, it worked on my machine." - How many sysadmins have heard this out of a developer's mouth? Unfortunately, projects get stuck supporting such issues because that very ego has led the developer too far down the road to turn back. They had no common sense or modesty to call on the knowledge of their Sys Ops team to help design the application. I interview job candidates all the time who call themselves full stack simply because they complement their programming language of choice with a mere smattering of knowledge in client-side technologies and can write a few SQL queries. Most developers have NO PERCEPTION of the myriad intricacies it takes to get an application from their unabated desktop, with its FULL ADMIN perms and "unlimited resources", through a staging/QA environment, and eventually to the securely locked-down production system with limited, and perhaps shared or hosted, resources. Respect for your support teams, communication and coordination, and the knowledge that you do not know it all - THAT'S being Full Stack and DevOps, sir.
spunky brewster hrmilo a month ago
There's always that one query that no one can do in a way that takes less than 2 hours, until you pass it off to a real DBA... it's the 80/20 rule, basically. I truly don't believe 'full stack' exists. It's an illusion. There's always something that suffers.

The real problem is that smart people are in such demand that we're forced to adapt to this tribal, pre-civilization hodgepodge. Once the industry matures, it'll disappear. Until then they will think they re-invented the wheel.

Ivan Gavrilyuk 6 months ago

I'm confused here. DevOps roles are strictly automation-focused, at least according to all the job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. A DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, and how to set up monitoring, logging and all the other stuff the usual sysadmin used to do, but in the modern world. In fact, I used to apply for DevOps roles but quickly changed my mind, as it turned out no company needs a person wearing many hats; it has absolutely nothing to do with creating software. Am I wrong?

Mario Bisignani Ivan Gavrilyuk 4 months ago

It depends on what you mean by development skills. Have you ever tried to automate the deployment of a large web application? The scripts that automate the deployment of large, scalable web applications are pretty complex software, which requires in-depth thinking and should follow all the important principles a good developer knows: component isolation, scalability, maintainability, extensibility, etc.
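To make that concrete, here is a minimal, hypothetical sketch of a rolling deploy in Python; the app-install command and the /health endpoint are invented for the example, but the shape of the code, with isolated steps, an explicit health check, and a rollback path, is what makes such scripts real software:

```python
# A rolling deploy with a health check and a rollback path. Hypothetical
# sketch: the app-install command and the /health endpoint are invented.
import subprocess
import urllib.request

def healthy(host):
    """Return True if the app on `host` answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{host}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy_one(host, version):
    """Install `version` on a single host via a (hypothetical) remote installer."""
    subprocess.run(["ssh", host, f"app-install {version}"], check=True)

def rolling_deploy(hosts, version, previous):
    """Deploy host by host; roll back and stop on the first failed health check."""
    for host in hosts:
        deploy_one(host, version)
        if not healthy(host):
            deploy_one(host, previous)  # restore the last known-good version
            raise RuntimeError(f"deploy halted: {host} failed its health check")
```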
Valentin Georgiev 10 months ago
1k% agree with this article!
TechZilla a year ago
Successful DevOps doesn't mean a full-stack developer does it all; that's only true for a broken company that succeeded despite bad organization. For example, Twitter's Dev-only culture is downright sick, and ONLY works because they are in the tech field. Mind you, I still believe personally that it works for them DESPITE its unbalanced structure. In other words, bad DevOps means the Dev has no extra resources and just more requirements; yeah, that sucks!...

BUT, on the flip,

Infrastructure works with QA/Build to define supportable deployment standards; they gotta learn all the automatic bits and practice using them. Now Devs have to package all their applications properly, in the formats supported by QA/Build's CI and repositories (that 'working just fine' install script definitely doesn't count). BUT the Devs get pre-made CI-ready examples and, if needed, code-migration assistance from the QA/Build team. Pretty soon they learn how to package that type of app, like a J2EE Maven EAR or a Webdeploy to IIS... and the rest should be handled for them, as automatically as possible, by the proactive operations teams.

Make sense? This is how it's supposed to work. It sounds like you're left alone in a terrible Dev-only/Dev-heavy world. The key to DevOps that is great, and that everybody likes, versus DevOps that is just more work, is having a very balanced workflow between the teams and making sure the hand-off points are VERY well defined. Essentially it requires management that cuts up the responsibility properly, so the teams have a shared interest in collaborating. In a Dev-heavy organization, the Devs can just throw garbage over the wall, and operations has to react to constant problems... they start to hate each other, and... Dev managers get the idea that they can cut out ops if they do "DevOps", so then they throw it all at you, like right now.

Adi Chiru a year ago

I see in this post so much rubbish and narrow-mindedness, so much of the exact stuff that is killing companies of every type. In the last 10 years I have had many roles that required me, as a systems engineer, to come in and straighten out all kinds of really bad compromises developers made just to get stuff working.

The role never shows the level of intelligence or capability. I've seen many situations in the last 10 years where smart people with the wrong attitude and awareness were too smart for anyone's good, and where limited people still provided more value than a very smart one acting as if he were too smart to even have a conversation about anything.

This post is embarrassing for you Jeff, I am sorry for you man.... you just don't get it!

Max Börebäck a year ago
A developer does not have to do full stack; the developer can continue with development, but has to adopt some practices around packaging, testing, and how the software is operated.
Operations can continue with operations, but has to know how things are built and packaged.
Developers and operations need to share things, like using the same application server, for example. The developer needs to understand how the software is operated, to make sure the code is written in a proper way. Operations needs to adapt to the need for fast delivery and be able to support a controlled way of deploying daily into production.
Here is a complementary post I wrote on the topic:
http://bit.ly/1r3iVff

Peperud a year ago
Very much agree with you, Jeff. I've been thinking along these lines for a while now...
Lenny Joseph a year ago
I will share my experience. I started off my career teaching programming, which included database programming (Oracle PL/SQL, SQL Server Transact-SQL); that gave me good insights into database internals and landed me in the DBA world for the last 10 years. During these 10 years, working in technology companies regarded as top-notch, I have seen very smart developers writing excellent application code but missing out on writing optimized pieces to interact with the database. Hence, I think each job has a scale, and professionals of one group cannot do what the top professionals of another group can do. I have seen developers with fairly good database-internals knowledge, and I have seen DBAs writing automation code that compares well with features of some commercial database products like TOAD. So, generalizations like this do not hold.
ceposta a year ago
BTW... "efficiency" is not the goal:

DevOps and the Myth of Efficiency, Part I

http://blog.christianposta....

DG • a year ago
The idea that there is a hierarchy of usefulness is bunk. Most developers are horrible at operations because they dislike it. Most sysadmins and DBAs are horrible at coding because they dislike it. People gravitate to what interests them, and a disinterested person does a much poorer job than an interested one. DevOps aims to combine roles by removing barriers, but there are costs to quality that no one likes to talk about. To use your hierarchy example: most doctors could obtain their RN, but they would not make good nurses.
Lana Boltneva 2 years ago
So true! I suggest you check these 6 best practices in DevOps too: http://intersog.com/blog/ag...
Jonathan McAllister 2 years ago
This is an excellent article on the general concepts of DevOps and the DevOps movement. It helps to identify the cultural shifts required to facilitate proper DevOps implementations. I also write about DevOps. I authored a book on implementing CI, CD and DevOps-related functions within an organization, and it was recently published. The book is aptly titled Mastering Jenkins ( http://www.masteringjenkins... ) and aims to codify not only the architectural implementations and requirements of DevOps but also the cultural shift needed to properly advocate for the adoption of DevOps practices. Let me know what you think.
Chris Kavanagh 2 years ago
I agree. Although I'm not in the business (yet), I will be soon. What I've noticed just playing around with Vagrant, Chef, Puppet, and Ansible is the great amount of time it takes to master just one of these provisioners. I can't imagine being responsible for all the roles you spoke of in the article. How can one possibly master all of them, and be good at any of them?
Sarika Mehta 2 years ago
Hmmm... users and the business see one application... to them, how it was developed and deployed does not matter... IT is an enabler by definition... so DevOps is mostly about that: giving one view to the customer; quick changes, stable changes, a stable application...

Frankly, DevOps is not about developers or testers... it is about the right architecture, the right framework... developers/testers do what is in the script anyway... DevOps is just a new script to them.

For right DevOps, you need the right framework and architecture for the whole of the program; you need architecture which is built end to end and not in silos...

Hanut Singh 2 years ago
Quite the interesting read. Having worked as a "Full Stack" Developer, I totally agree with you. Well done, sir. My hat is tipped.
Masood 2 years ago
Software Developers write code that the business/customers use.
Test Developers write test code to test the SUT.
Release Developers write code to automate the release process.
Infrastructure Developers write code to create infrastructure automatically.
Performance Developers write code to performance-test the SUT.
Security Developers write code to scan the SUT for security.
Database Developers write code for the DB.

So which developer do you think DevOps is going to kill?

In today's TDD world, a developer (it could be any of the above) needs to get out of their comfort zone to make sure they write testable, releasable, deployable, performant, security-compliant and maintainable code.

DevOps brings all these roles together to collaborate and deliver.

Daniel 2 years ago
You deserve an award.....
Mash -> Dick 2 years ago
Why wouldn't they be? What are the basic responsibilities that make for a passable DBA and which of those responsibilities cannot be done by a good developer? Say a good developer has just average experience writing stored procs, analyzing query performance, creating (or choosing not to create, for performance reasons) indexes, constraints and triggers, configuring database access rights, setting up regular backups, regular maintenance (ex. rebuilding indexes to avoid fragmentation)... just to name a few.

I'm sure there are several responsibilities that DBAs have that developers would have very little to no experience in, but we're talking about making for a passable DBA. Developers may not be as good at the job as someone who specializes in it for a living, but the author's wording seems to have been chosen very carefully.
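For a feel of what even the smallest piece of that list ("analyzing query performance") involves, here is a tiny self-contained sketch using Python's built-in sqlite3 module; the schema is invented for the example:

```python
# Invented schema; checks whether a query actually uses an index.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = ?", (42,)
).fetchall()
print(plan)  # the plan mentions idx_orders_customer when the index is used
```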

SuperQ Mash a year ago

Yup, I see lots of people trying to defend the DBA as a thing, just like people keep trying to defend the traditional sysadmin as a thing. I started my career as a sysadmin in the 90s, but times have changed and I don't call myself a sysadmin anymore, because that's not what I do.

Now I'm a Systems Engineer/SRE. My mode of working isn't slamming software together, but engineering automation to do it for me.

But I also do QA, Data storage performance analysis, networking, and [have a] deep knowledge of the applications I support.

[May 15, 2017] 10 Things I Hate About DevOps

Notable quotes:
"... The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate ..."
"... "The Copenhagen interpretation certainly applies to DevOps" ..."
"... "I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?" ..."
"... Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". ..."
May 15, 2017 | www.upguard.com
DevOps and I sort of have a love/hate relationship. DevOps is near and dear to our heart here at UpGuard, and there are plenty of things that I love about it. Love it or hate it, there is little doubt that it is here to stay. I've enjoyed a great deal of success thanks to agile software development and DevOps methods, but here are 10 things I hate about DevOps!

#1 Everyone thinks it's about Automation.

#2 "True" DevOps apparently have no processes - because DevOps takes care of that.

#3 The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate:

"The Copenhagen interpretation certainly applies to DevOps"

"I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?"

#4 Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". How about talking about the business guys?

#5 Heroes abound with tragic statements like "It took 3 days to automate everything.. it's great now!" - Clearly these people have never worked in a serious enterprise.

#6 No one talks about automation failure... and it's everywhere. Listen for the words "Pockets of Automation". Adoption of technology, education and adaptation of process are rarely mentioned (or measured).

#7 People constantly pointing to Etsy, Facebook & Netflix as DevOps. Let's promote the stories of companies that better represent the market at large.

#8 Tech hipsters discounting, or underestimating, Windows sysadmins. There are a lot of them and they better represent the Enterprise than many of the higher profile blowhards.

#9 The same hipsters saying their threads have filled up with DevOps tweets where there were none before.

#10 I've never heard of a Project Manager taking on DevOps. I intend to find one.

What do you think - did I miss anything? Rants encouraged ;-) Please add your comments.

[May 15, 2017] Why I hate DevOps

Notable quotes:
"... DevOps. The latest software development fad. ..."
"... Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. ..."
"... The problem is we now have teams saying they're doing DevOps. By that they mean is they make small, frequent, releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running. ..."
"... Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test, release approach. The problem is when you call something DevOps anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process. ..."
May 15, 2017 | testingthemind.wordpress.com
DevOps. The latest software development fad. Now you can be Agile, use Continuous Delivery, and believe in DevOps.

Continuous Delivery (CD), the practice of small, frequent releases, was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. For example, frequent releases more or less have to be small. Small releases are easier to understand, which in turn increases our chances of building good features, but also our chances of testing for the right risks. If you do run into problems during testing, then it's pretty easy to work out the change that caused them, reducing the time to debug and fix issues.

Unfortunately, along with all the good parts of CD we have a slight problem. The book focused on the areas which were considered to be the most broken, and unfortunately that led to the original CD description implying "Done" meant the code was shipped to production. As anyone who has ever worked on software will know, running code in production also requires a fair bit of work.

So, teams started adopting CD, but no one was talking about how the Ops team fitted into the release cycle. Everything from knowing when production systems were in trouble to having reliable release systems was just assumed to be fully functional and in no need of explanation.

To try to plug that gap, DevOps rose up.

Now, just to make things even more confusing: Dave Farley later said that not talking about Ops was an omission, and that CD does include the entire development and release cycle, including running in production. So DevOps and CD have some overlap there.

DevOps does take a slightly different angle on the approach than CD. The emphasis for DevOps is on the collaboration rather than the process. Silos should be actively broken down to help developers understand systems well enough to be able to write good, robust and scalable code.

So far so good.

The problem is we now have teams saying they're doing DevOps. By that they mean they make small, frequent releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running.

Sounds good. So what's the problem?

Well, the problem is the name. We now have a term, "DevOps", to describe the entire build, test, release approach. The problem is that when you call something DevOps, anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process.

Seriously, go and ask your designers what they think of DevOps. Or how about your testers. Or Product Managers. Or Customer Support.

And that's a problem.

We've managed to take something that is completely dependent on collaboration, and trust, and name it in a way that excludes a significant number of people. All of the name suggestions that arise when you mention this are just ridiculous. DevTestOps? BusinessDevTestOps? DesignDevOps? Aside from just being stupid names, these continue to exclude anyone who doesn't have these words in their title.

So do I hate DevOps? Well no, not the practice. I think we should always be thinking about how things will actually work in production. We need an Ops team to help us do that so it makes total sense to have them involved in the process. Just take care with that name.

Is there a solution? Well, in my mind we're still talking about collaboration above all else. Thinking about CD as "delivery on demand" also makes more sense to me. We, the whole team, should be ready to deliver working software to the customer when they want it. By being aware of the confusion and exclusion that some of these names create, we can hopefully bring everyone into the project before it's too late.

[May 15, 2017] Hype Cycle for DevOps, 2016

May 15, 2017 | www.gartner.com
Hype Cycle for DevOps, 2016

DevOps initiatives include a range of technologies and methodologies spanning the software delivery process. IT leaders and DevOps practitioners should proactively understand the readiness and capabilities of technology to identify the most appropriate choices for their specific DevOps initiative.


[May 15, 2017] The Phoenix Project (novel)

May 15, 2017 | en.wikipedia.org

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win (2013) is the third book by Gene Kim. The business novel tells the story of an IT manager who has ninety days to rescue an over-budget and late IT initiative, code-named The Phoenix Project. The book was co-authored by Kevin Behr and George Spafford and published by IT Revolution Press in January 2013.[1][2]

Background

The novel is thought of as the modern-day version of The Goal by Eliyahu M. Goldratt.[3] The novel describes the problems that almost every IT organization faces, and then shows the practices for solving those problems, improving the lives of those who work in IT, and being recognized for helping the business win.[1] The goal of the book is to show that a truly collaborative approach between IT and business is possible.[4]

Synopsis

The novel tells the story of Bill, the IT manager at Parts Unlimited.[4][5][6] The company's new IT initiative, code named Phoenix Project, is critical to the future of Parts Unlimited, but the project is massively over budget and very late. The CEO wants Bill to report directly to him and fix the mess in ninety days or else Bill's entire department will be outsourced. With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.[7][8]

Reception

The book has been called a "must read" for IT professionals and quickly reached #1 in its Amazon.com categories.[9][10] The Phoenix Project was featured on 800 CEO Reads Top 25: What Corporate America Is Reading for June, 2013.[11] InfoQ stated, "This book will resonate at one point or another with anyone who's ever worked in IT."[4] Jeremiah Shirk, Integration & Infrastructure Manager at Kansas State University, said of the book: "Some books you give to friends, for the joy of sharing a great novel. Some books you recommend to your colleagues and employees, to create common ground. Some books you share with your boss, to plant the seeds of a big idea. The Phoenix Project is all three."[4] Other reviewers were more skeptical, including the IT Skeptic: "Fictionalising allows you to paint an idealised picture, and yet make it seem real, plausible... Sorry but it is all too good to be true... none of the answers are about people or culture or behaviour. They're about tools and techniques and processes."[12] Jez Humble (author of Continuous Delivery) said, "unlike real life, there aren't many experiments in the book that end up making things worse..."

[May 15, 2017] 8 DevOps Myths Debunked - DZone DevOps

May 15, 2017 | dzone.com

In a recent webinar, XebiaLabs VP of DevOps Strategy Andrew Phillips sat down with Atos Global Thought Leader in DevOps Dick van der Sar to separate the facts from the fiction. Their finding: most myths come attached to a small piece of fact, and vice versa.

1. DevOps Is Developers Doing Operations: Myth

An integral part of DevOps' automation component involves a significant amount of code, which causes people to believe that developers do most of the heavy lifting in the equation. In reality, the opposite happens: because so much of the work is Infrastructure as Code, Ops begins to look a lot like Dev.
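As a minimal illustration of why Infrastructure as Code reads like ordinary development, here is a hypothetical idempotent "ensure" function in Python, the basic building block that tools like Puppet and Chef provide as resources; the file path and line are placeholders:

```python
# An idempotent "ensure" step, the building block of configuration management.
# The path and line below are placeholders.
from pathlib import Path

def ensure_line(path, line):
    """Guarantee `line` is present in the file at `path`; report whether anything changed."""
    p = Path(path)
    text = p.read_text() if p.exists() else ""
    if line in text.splitlines():
        return False                      # already converged; change nothing
    p.write_text(text + line + "\n")
    return True                           # a change was applied

if __name__ == "__main__":
    # Running this twice prints True, then False: the second run is a no-op.
    print(ensure_line("hosts.allow.test", "sshd: 10.0.0.0/8"))
```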

2. Projects Are Dead: Myth

Projects are an ongoing process of evolving systems and failures. To think they can just be handed off to maintenance forever after completion is simply incorrect. That is only true for tightly scoped software needs, such as systems built for specific events. When you adopt DevOps and Agile, you replace traditional project-based approaches with a focus on product lifecycles.

3. DevOps Doesn't Work in Complex Environments: Myth

DevOps is actually made to thrive in complex environments. The only instance in which it doesn't work is when unrealistic and/or inappropriate goals are set for the enterprise. Complex environments typically suffer due to lack of communication about the state of, and changes to, the interconnected systems. DevOps, on the other hand, encourages communication and collaboration that prevent these issues from arising.

4. It's Hard to Sell DevOps to the Business: Myth

The benefits of DevOps are closely tied to benefits for the business. However, that's hard to believe when you pitch adopting DevOps as a plan to "stop working on features and sink a lot of your money into playing with shiny new IT tech". The truth is, DevOps is going to impact the entire enterprise. That may be the source of resistance, but as long as you find the balance between adoption and disruption, you will experience a successful transition.

5. Agile Is for Lazy Engineers: Myth

DevOps prides itself on eliminating unnecessary overhead. Through automation, your enterprise can see a reduction in documentation, meetings, and even manual tasks, giving team members more time to focus on more important priorities. You know your team is running successfully if their productivity increases.

Nonetheless, DevOps does not come without its own share of "boring" processes, including test plans and code audits. Agile may eliminate waste, but "waste" doesn't include the tedious yet necessary aspects.

6. If You Can't Code, You Have No Chance in DevOps: Fact

This one is only a fact because the automation side of DevOps is all Infrastructure as Code (IaC). IaC typically requires software development skills such as modularization, automated testing, and Continuous Integration (CI). Regardless of scale, automating anything will require, at the very least, software development skills.
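A small sketch of what those software development skills look like when applied to IaC: a pure function that renders firewall rules from a declarative spec, plus pytest-style tests that a CI job can run on every change. All the names here are illustrative, not taken from any real tool:

```python
# A declarative spec rendered to firewall rules, with tests CI can run on
# every change. All names are illustrative, not from any real tool.
import pytest

def build_rules(spec):
    """Render ALLOW rules from a list of {'port': int, 'cidr': str} entries."""
    rules = []
    for entry in spec:
        if not 0 < entry["port"] < 65536:
            raise ValueError(f"invalid port: {entry['port']}")
        rules.append(f"ALLOW tcp from {entry['cidr']} to any port {entry['port']}")
    return rules

def test_renders_rules_in_order():
    spec = [{"port": 443, "cidr": "0.0.0.0/0"}, {"port": 22, "cidr": "10.0.0.0/8"}]
    assert build_rules(spec) == [
        "ALLOW tcp from 0.0.0.0/0 to any port 443",
        "ALLOW tcp from 10.0.0.0/8 to any port 22",
    ]

def test_rejects_impossible_port():
    with pytest.raises(ValueError):
        build_rules([{"port": 70000, "cidr": "0.0.0.0/0"}])
```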

7. Managers Disappear: Myth

Rather than disappearing, managers take on a different role with DevOps. In fact, they are still a necessity to the team. Managers are tasked with the responsibility of keeping the entire DevOps team on track. Classic management tasks may seem to disappear, but only because the role is changing to be more focused on empowerment.

8. DevOps or Die: Fact!

Many of today's market leaders already have some sort of advanced DevOps structure in place. As industries incorporate IT further into their businesses, we will begin to see DevOps as a basic necessity of the modern business, and those that can't adapt will simply fall behind.

That being said, you shouldn't think of DevOps as a magic invincibility potion that will keep your enterprise failure-free. Rather, DevOps can prevent many types of failure, but there will always be environment-specific threats, unique to every organization, that DevOps can't rescue you from.

[May 15, 2017] DevOps Fact vs Fiction

May 15, 2017 | vassit.co.uk
Out of these misunderstandings, several common myths have been created. Acceptance of these myths misleads businesses further.

Here are some of the most common myths and the facts that debunk them.

Myth 1: DevOps needs agile.

Although DevOps and agile are terms frequently used together, they are a long way away from being synonymous with one another. Agile development refers to a method of software delivery that builds software incrementally, whereas DevOps refers not only to a method of delivery but to a culture, which when adopted, results in many business benefits , including faster software delivery.

DevOps processes can help to complement agile development, but DevOps is not reliant on agile and can support a range of operating models.

For optimum results, full adoption of the DevOps philosophy is necessary.

Myth 2: DevOps can't work with legacy.

DevOps is often regarded as a modern concept that helps forward-thinking businesses innovate. Although this is true, it can also help those organisations with long-established, standard IT practices. In fact, with legacy applications there are usually big advantages to DevOps adoption.

Managing legacy systems while bringing new software to market quickly, blending stability and agility, is a frequently encountered problem in this new era of digital transformation. Bi-modal IT is an approach where Mode 1 refers to legacy systems focussed on stability, and Mode 2 refers to agile IT focussed on rapid application delivery. DevOps principles are often included exclusively within Mode 2, but automation and collaboration can also be used with success within Mode 1, to increase delivery speed whilst ensuring stability.

Myth 3: DevOps is only for continuous delivery.

DevOps doesn't (necessarily) imply continuous delivery. The aim of a DevOps culture is to increase an organisation's delivery frequency, often from quarterly/monthly releases to daily releases or more, and to improve its ability to respond to changes in the market.

While continuous delivery relies heavily on automation and is aimed at agile and lean-thinking organisations, unlike DevOps it is not reliant on a shared culture which enhances collaboration. Gartner summed up the distinction in a report that stated: "DevOps is not a market, but a tool-centric philosophy that supports a continuous delivery value chain."

Myth 4: DevOps requires new tools.

As with the implementation of any new concept or idea, a common misconception about DevOps adoption is that new toolsets and skills are required. Though the provision of appropriate and relevant tools can aid adoption, organisations are by no means required to replace the tools and processes they already use to produce software.

DevOps enables organisations to deliver new capabilities more easily, and bring new software into production more rapidly in order to respond to market changes. It is not strictly reliant on new tools to get this job done.

Myth 5: DevOps is a skill.

The rapid growth of the DevOps movement has resulted in huge demand for professionals who are skilled within the methodology. However, this fact is often misconstrued to suggest that DevOps is itself a skill – this is not the case.

DevOps is a culture – one that needs to be fully adopted throughout an entire organisation for optimum results, and one that is best supported with appropriate and relevant tools.

Myth 6: DevOps is software.

Understanding that DevOps adoption can be better facilitated with software is important; however, maybe more important is understanding that they are not one and the same. Although it is true that there is a significant amount of DevOps software available on the market today, purchasing a specific ad-hoc DevOps product, or even a suite of products, will not make your business 'DevOps'.

The DevOps methodology is the communication, collaboration and automation of your development and operations functions, and, as described above, it needs to be adopted by an entire organisation to achieve optimum results. The software and tools available will undoubtedly reduce the strain of adoption on your business, but conscious adoption is required for your business to fully reach the potential that DevOps offers.

Conclusion

As with any new and popular term, people have somewhat confused, and sometimes contradictory or partial, impressions of what DevOps is and how it works.

DevOps is a philosophy which enables businesses to automate their processes and work more collaboratively to achieve a common goal and deliver software more rapidly.


[Mar 20, 2017] It sucks to be right - blog dot lusis

These are just a few things that jumped out at me (and annoyed me):

However, there are teams at Netflix that do traditional Operations, and teams that do DevOps as well.

Ops is ops is ops. No matter what you call it, Operations is operations.

Notice that we didn’t use the typical DevOps tools Puppet or Chef to create builds at runtime

There’s no such thing as a “DevOps tool”. People were using CFengine, Puppet and Chef long before DevOps was even a term. These are configuration management tools. In fact Adrian has even said they use Puppet in their legacy datacenter:

yet he seems to make the distinction between the ops guys there and the “devops” guys (whatever those are).

There is no ops organization involved in running our cloud…

Just because you outsourced it, doesn’t mean it doesn’t exist. Oh and it’s not your cloud. It’s Amazon’s.

Reading between the lines

Actually this doesn’t take much reading between the lines. It’s out there in plain sight:

In reality we had the usual complaints about how long it took to get new capacity, the lack of consistency across supposedly identical systems, and failures in Oracle, in the SAN and the networks, that took the site down too often for too long.

We tried bringing in new ops managers, and new engineers, but they were always overwhelmed by the fire fighting needed to keep the current systems running.

This is largely because the people making decisions are development managers, who have been burned repeatedly by configuration bugs in systems that were supposed to be identical.

The developers used to spend hours a week in meetings with Ops discussing what they needed, figuring out capacity forecasts and writing tickets to request changes for the datacenter.

There is no ops organization involved in running our cloud, no need for the developers to interact with ops people to get things done, and less time spent actually doing ops tasks than developers would spend explaining what needed to be done to someone else.

I’m glad to see this spelled out in such detail. This is what I’ve been telling people semi-privately for a while now. Because Netflix had such a terrible experience with its operations team, they went to the opposite extreme and disintermediated them.

Imagine you were scared as a kid by a clown. Now imagine you have kids of your own. You hate clowns. You had a bad experience with clowns. But it's your kid's birthday party, so here you are making balloon animals, telling jokes and doing silly things to entertain the kids. Just because you aren't wearing makeup doesn't make you any less of a clown. You're doing clown shit. Through the eyes of the kids, you're a clown. Deal with it. Netflix is still doing operations. What should be telling and frightening to operations teams everywhere is this:

The Netflix response to poorly run operations that can’t service the business is going to become the norm and not the exception. Evolve or die.

Please note that I don’t lay all the blame on the Netflix operations team. I would love to hear the flipside of this story from someone who was there originally when the streaming initiative started. It would probably be full of stories we’ve heard before - no resources, misalignment of incentives and a whole host of others.

Adrian, thank you for writing the blog post. I hope it serves as a warning to those who come after. Hopefully someday you'll be able to see a clown again and not get scared ;)

[Mar 20, 2017] What we mean by "operations," and how it's changed over the years by Mike Loukides

June 7, 2012

Adrian Cockcroft’s article about NoOps at Netflix ignited a controversy that has been smouldering for some months. John Allspaw’s detailed response to Adrian’s article makes a key point: What Adrian described as “NoOps” isn’t really. Operations doesn’t go away. Responsibilities can, and do, shift over time, and as they shift, so do job descriptions. But no matter how you slice it, the same jobs need to be done, and one of those jobs is operations. What Adrian is calling NoOps at Netflix isn’t all that different from Operations at Etsy. But that just raises the question: What do we mean by “operations” in the 21st century? If NoOps is a movement for replacing operations with something that looks suspiciously like operations, there’s clearly confusion. Now that some of the passion has died down, it’s time to get to a better understanding of what we mean by operations and how it’s changed over the years.

At a recent lunch, John noted that back in the dawn of the computer age, there was no distinction between dev and ops. If you developed, you operated. You mounted the tapes, you flipped the switches on the front panel, you rebooted when things crashed, and possibly even replaced the burned out vacuum tubes. And you got to wear a geeky white lab coat. Dev and ops started to separate in the ’60s, when programmer/analysts dumped boxes of punch cards into readers, and “computer operators” behind a glass wall scurried around mounting tapes in response to IBM JCL. The operators also pulled printouts from line printers and shoved them in labeled cubbyholes, where you got your output filed under your last name.

The arrival of minicomputers in the 1970s and PCs in the ’80s broke down the wall between mainframe operators and users, leading to the system and network administrators of the 1980s and ’90s. That was the birth of modern “IT operations” culture. Minicomputer users tended to be computing professionals with just enough knowledge to be dangerous. (I remember when a new director was given the root password and told to “create an account for yourself” … and promptly crashed the VAX, which was shared by about 30 users). PC users required networks; they required support; they required shared resources, such as file servers and mail servers. And yes, BOFH (“Bastard Operator from Hell”) serves as a reminder of those days. I remember being told that “no one” else is having the problem you’re having — and not getting beyond it until at a company meeting we found that everyone was having the exact same problem, in slightly different ways. No wonder we want ops to disappear. No wonder we wanted a wall between the developers and the sysadmins, particularly since, in theory, the advent of the personal computer and desktop workstation meant that we could all be responsible for our own machines.

But somebody has to keep the infrastructure running, including the increasingly important websites. As companies and computing facilities grew larger, the fire-fighting mentality of many system administrators didn’t scale. When the whole company runs on one 386 box (like O’Reilly in 1990), mumbling obscure command-line incantations is an appropriate way to fix problems. But that doesn’t work when you’re talking hundreds or thousands of nodes at Rackspace or Amazon. From an operations standpoint, the big story of the web isn’t the evolution toward full-fledged applications that run in the browser; it’s the growth from single servers to tens of servers to hundreds, to thousands, to (in the case of Google or Facebook) millions. When you’re running at that scale, fixing problems on the command line just isn’t an option. You can’t afford to let machines get out of sync through ad-hoc fixes and patches. Being told “We need 125 servers online ASAP, and there’s no time to automate it” (as Sascha Bates encountered) is a recipe for disaster.

The response of the operations community to the problem of scale isn’t surprising. One of the themes of O’Reilly’s Velocity Conference is “Infrastructure as Code.” If you’re going to do operations reliably, you need to make it reproducible and programmatic. Hence virtual machines to shield software from configuration issues. Hence Puppet and Chef to automate configuration, so you know every machine has an identical software configuration and is running the right services. Hence Vagrant to ensure that all your virtual machines are constructed identically from the start. Hence automated monitoring tools to ensure that your clusters are running properly. It doesn’t matter whether the nodes are in your own data center, in a hosting facility, or in a public cloud. If you’re not writing software to manage them, you’re not surviving.
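To make the idea concrete, here is a minimal, illustrative sketch (in Python, rather than Puppet's or Chef's own languages) of what "infrastructure as code" means at its core: the desired state is declared as data, and an idempotent convergence step makes the machine match it. The config path and service name are invented for the example.

import hashlib
import pathlib
import subprocess

# Desired state, declared as data. Path and service name are placeholders.
DESIRED = {
    "path": pathlib.Path("/etc/myapp/app.conf"),
    "content": "listen_port = 8080\nlog_level = info\n",
    "service": "myapp",
}

def converge(state):
    """Idempotently bring the machine to the declared state."""
    path, content = state["path"], state["content"]
    want = hashlib.sha256(content.encode()).hexdigest()
    have = hashlib.sha256(path.read_bytes()).hexdigest() if path.exists() else None
    if have != want:
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
        # Restart the service only when the config actually changed.
        subprocess.run(["systemctl", "restart", state["service"]], check=True)

if __name__ == "__main__":
    converge(DESIRED)

Running this twice changes nothing the second time — that idempotence is what lets the same description be applied safely to one machine or to thousands.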

Furthermore, as we move further and further away from traditional hardware servers and networks, and into a world that’s virtualized on every level, old-style system administration ceases to work. Physical machines in a physical machine room won’t disappear, but they’re no longer the only thing a system administrator has to worry about. Where’s the root disk drive on a virtual instance running at some colocation facility? Where’s a network port on a virtual switch? Sure, system administrators of the ’90s managed these resources with software; no sysadmin worth his salt came without a portfolio of Perl scripts. The difference is that now the resources themselves may be physical, or they may just be software; a network port, a disk drive, or a CPU has nothing to do with a physical entity you can point at or unplug. The only effective way to manage this layered reality is through software.

So infrastructure had to become code. All those Perl scripts show that it was already becoming code as early as the late ’80s; indeed, Perl was designed as a programming language for automating system administration. It didn’t take long for leading-edge sysadmins to realize that handcrafted configurations and non-reproducible incantations were a bad way to run their shops. It’s possible that this trend means the end of traditional system administrators, whose jobs are reduced to racking up systems for Amazon or Rackspace. But that’s only likely to be the fate of those sysadmins who refuse to grow and adapt as the computing industry evolves. (And I suspect that sysadmins who refuse to adapt swell the ranks of the BOFH fraternity, and most of us would be happy to see them leave.) Good sysadmins have always realized that automation was a significant component of their job and will adapt as automation becomes even more important. The new sysadmin won’t power down a machine, replace a failing disk drive, reboot, and restore from backup; he’ll write software to detect a misbehaving EC2 instance automatically, destroy the bad instance, spin up a new one, and configure it, all without interrupting service. With automation at this level, the new “ops guy” won’t care if he’s responsible for a dozen systems or 10,000. And the modern BOFH is, more often than not, an old-school sysadmin who has chosen not to adapt.
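As a rough illustration of that pattern (a sketch only — this is not Netflix's or anyone's actual tooling), the replace-rather-than-repair loop can be expressed in a few lines against the AWS API via boto3; the AMI id and instance type below are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
REPLACEMENT_AMI = "ami-0123456789abcdef0"  # placeholder base image

def replace_if_unhealthy(instance_id):
    """Terminate a failing instance and boot an identical replacement."""
    resp = ec2.describe_instance_status(InstanceIds=[instance_id])
    statuses = resp["InstanceStatuses"]
    healthy = bool(statuses) and statuses[0]["InstanceStatus"]["Status"] == "ok"
    if not healthy:
        # Destroy, don't repair: the replacement is built from a known image.
        ec2.terminate_instances(InstanceIds=[instance_id])
        ec2.run_instances(ImageId=REPLACEMENT_AMI, InstanceType="m5.large",
                          MinCount=1, MaxCount=1)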

James Urquhart nails it when he describes how modern applications, running in the cloud, still need to be resilient and fault tolerant, still need monitoring, still need to adapt to huge swings in load, etc. But he notes that those features, formerly provided by the IT/operations infrastructures, now need to be part of the application, particularly in “platform as a service” environments. Operations doesn’t go away, it becomes part of the development. And rather than envision some sort of uber developer, who understands big data, web performance optimization, application middleware, and fault tolerance in a massively distributed environment, we need operations specialists on the development teams. The infrastructure doesn’t go away — it moves into the code; and the people responsible for the infrastructure, the system administrators and corporate IT groups, evolve so that they can write the code that maintains the infrastructure. Rather than being isolated, they need to cooperate and collaborate with the developers who create the applications. This is the movement informally known as “DevOps.”

Amazon’s EBS outage last year demonstrates how the nature of “operations” has changed. There was a marked distinction between companies that suffered and lost money, and companies that rode through the outage just fine. What was the difference? The companies that didn’t suffer, including Netflix, knew how to design for reliability; they understood resilience, spreading data across zones, and a whole lot of reliability engineering. Furthermore, they understood that resilience was a property of the application, and they worked with the development teams to ensure that the applications could survive when parts of the network went down. More important than the flames about Amazon’s services are the testimonials of how intelligent and careful design kept applications running while EBS was down. Netflix’s ChaosMonkey is an excellent, if extreme, example of a tool to ensure that a complex distributed application can survive outages; ChaosMonkey randomly kills instances and services within the application. The development and operations teams collaborate to ensure that the application is sufficiently robust to withstand constant random (and self-inflicted!) outages without degrading.
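The real Chaos Monkey is open source (github.com/Netflix); the toy sketch below shows only the core gesture — terminate a random instance in an Auto Scaling group and trust the application to survive. The group name is hypothetical, and the real tool adds opt-outs, scheduling windows, and audit trails that this omits:

import random
import boto3

asg = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

def unleash(group_name):
    """Terminate one random instance in the group; the app must survive."""
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[group_name])
    instances = groups["AutoScalingGroups"][0]["Instances"]
    victim = random.choice(instances)["InstanceId"]
    ec2.terminate_instances(InstanceIds=[victim])  # the self-inflicted outage
    return victim

print("terminated", unleash("my-service-asg"))  # hypothetical group name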

On the other hand, during the EBS outage, nobody who wasn’t an Amazon employee touched a single piece of hardware. At the time, JD Long tweeted that the best thing about the EBS outage was that his guys weren’t running around like crazy trying to fix things. That’s how it should be. It’s important, though, to notice how this differs from operations practices 20, even 10 years ago. It was all over before the outage even occurred: The sites that dealt with it successfully had written software that was robust, and carefully managed their data so that it wasn’t reliant on a single zone. And similarly, the sites that scrambled to recover from the outage were those that hadn’t built resilience into their applications and hadn’t replicated their data across different zones.

In addition to this redistribution of responsibility, from the lower layers of the stack to the application itself, we’re also seeing a redistribution of costs. It’s a mistake to think that the cost of operations goes away. Capital expense for new servers may be replaced by monthly bills from Amazon, but it’s still cost. There may be fewer traditional IT staff, and there will certainly be a higher ratio of servers to staff, but that’s because some IT functions have disappeared into the development groups. The boundary is fluid, but that’s precisely the point. The task — providing a solid, stable application for customers — is the same. The locations of the servers on which that application runs, and how they’re managed, are all that changes.

One important task of operations is understanding the cost trade-offs between public clouds like Amazon’s, private clouds, traditional colocation, and building their own infrastructure. It’s hard to beat Amazon if you’re a startup trying to conserve cash and need to allocate or deallocate hardware to respond to fluctuations in load. You don’t want to own a huge cluster to handle your peak capacity but leave it idle most of the time. But Amazon isn’t inexpensive, and a larger company can probably get a better deal taking its infrastructure to a colocation facility. A few of the largest companies will build their own datacenters. Cost versus flexibility is an important trade-off; scaling is inherently slow when you own physical hardware, and when you build your data centers to handle peak loads, your facility is underutilized most of the time. Smaller companies will develop hybrid strategies, with parts of the infrastructure hosted on public clouds like AWS or Rackspace, part running on private hosting services, and part running in-house. Optimizing how tasks are distributed between these facilities isn’t simple; that is the province of operations groups. Developing applications that can run effectively in a hybrid environment: that’s the responsibility of developers, with healthy cooperation with an operations team.
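The trade-off reduces to simple arithmetic once you plug in your own numbers. The sketch below uses invented prices purely to show the shape of the calculation — cloud cost tracks average load, while owned capacity must be sized for peak:

# All prices are invented placeholders; substitute your own quotes.
CLOUD_PER_SERVER_HOUR = 0.20   # assumed on-demand rate, $/server-hour
COLO_PER_SERVER_MONTH = 90.0   # assumed amortized colo cost, $/server-month
HOURS_PER_MONTH = 730

def monthly_cloud_cost(avg_servers):
    # Cloud bills for what you actually run, so cost tracks average load.
    return avg_servers * CLOUD_PER_SERVER_HOUR * HOURS_PER_MONTH

def monthly_colo_cost(peak_servers):
    # Owned capacity must be sized for peak and sits idle the rest of the time.
    return peak_servers * COLO_PER_SERVER_MONTH

peak, average = 100, 30        # a spiky load: peak far above average
print("cloud:", monthly_cloud_cost(average))  # 4380.0
print("colo :", monthly_colo_cost(peak))      # 9000.0

With these (made-up) numbers, a spiky load favors the cloud; a flat load, where average approaches peak, flips the answer — which is exactly the analysis the operations group has to keep redoing as prices and traffic change.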

The use of metrics to monitor system performance is another respect in which system administration has evolved. In the early ’80s or early ’90s, you knew when a machine crashed because you started getting phone calls. Early system monitoring tools like HP’s OpenView provided limited visibility into system and network behavior but didn’t give much more information than simple heartbeats or reachability tests. Modern tools like DTrace provide insight into almost every aspect of system behavior; one of the biggest challenges facing modern operations groups is developing analytic tools and metrics that can take advantage of the data that’s available to predict problems before they become outages. We now have access to the data we need, we just don’t know how to use it. And the more we rely on distributed systems, the more important monitoring becomes. As with so much else, monitoring needs to become part of the application itself. Operations is crucial to success, but operations can only succeed to the extent that it collaborates with developers and participates in the development of applications that can monitor and heal themselves.
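A minimal example of turning raw metrics into an early warning, assuming nothing more than a rolling baseline and an arbitrary three-sigma threshold (real operations analytics are far more sophisticated):

from collections import deque
from statistics import mean, stdev

class RollingDetector:
    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)   # rolling baseline
        self.threshold = threshold            # arbitrary sigma cutoff

    def observe(self, value):
        """Return True when `value` looks anomalous against recent history."""
        anomalous = False
        if len(self.samples) >= 10:           # need a baseline first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

detector = RollingDetector()
for latency_ms in [12, 11, 13, 12, 14, 12, 13, 11, 12, 13, 95]:
    if detector.observe(latency_ms):
        print("possible trouble:", latency_ms)   # fires on the 95ms spike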

Success isn’t based entirely on integrating operations into development. It’s naive to think that even the best development groups, aware of the challenges of high-performance, distributed applications, can write software that won’t fail. On this two-way street, do developers wear the beepers, or IT staff? As Allspaw points out, it’s important not to divorce developers from the consequences of their work since the fires are frequently set by their code. So, both developers and operations carry the beepers. Sharing responsibilities has another benefit. Rather than finger-pointing post-mortems that try to figure out whether an outage was caused by bad code or operational errors, when operations and development teams work together to solve outages, a post-mortem can focus less on assigning blame than on making systems more resilient in the future. Although we used to practice “root cause analysis” after failures, we’re recognizing that finding out the single cause is unhelpful. Almost every outage is the result of a “perfect storm” of normal, everyday mishaps. Instead of figuring out what went wrong and building procedures to ensure that something bad can never happen again (a process that almost always introduces inefficiencies and unanticipated vulnerabilities), modern operations designs systems that are resilient in the face of everyday errors, even when they occur in unpredictable combinations.

In the past decade, we’ve seen major changes in software development practice. We’ve moved from various versions of the “waterfall” method, with interminable up-front planning, to “minimum viable product,” continuous integration, and continuous deployment. It’s important to understand that the waterfall methodologies of the ’80s aren’t “bad ideas” or mistakes. They were perfectly adapted to an age of shrink-wrapped software. When you produce a “gold disk” and manufacture thousands (or millions) of copies, the penalties for getting something wrong are huge. If there’s a bug, you can’t fix it until the next release. In this environment, a software release is a huge event. But in this age of web and mobile applications, deployment isn’t such a big thing. We can release early, and release often; we’ve moved from continuous integration to continuous deployment. We’ve developed techniques for quick resolution in case a new release has serious problems; we’ve mastered A/B testing to test releases on a small subset of the user base.

All of these changes require cooperation and collaboration between developers and operations staff. Operations groups are adopting, and in many cases, leading in the effort to implement these changes. They’re the specialists in resilience, in monitoring, in deploying changes and rolling them back. And the many attendees, hallway discussions, talks, and keynotes at O’Reilly’s Velocity conference show us that they are adapting. They’re learning about adopting approaches to resilience that are completely new to software engineering; they’re learning about monitoring and diagnosing distributed systems, doing large-scale automation, and debugging under pressure. At a recent meeting, Jesse Robbins described scheduling EMT training sessions for operations staff so that they understood how to handle themselves and communicate with each other in an emergency. It’s an interesting and provocative idea, and one of many things that modern operations staff bring to the mix when they work with developers.

What does the future hold for operations? System and network monitoring used to be exotic and bleeding-edge; now, it’s expected. But we haven’t taken it far enough. We’re still learning how to monitor systems, how to analyze the data generated by modern monitoring tools, and how to build dashboards that let us see and use the results effectively. I’ve joked about “using a Hadoop cluster to monitor the Hadoop cluster,” but that may not be far from reality. The amount of information we can capture is tremendous, and far beyond what humans can analyze without techniques like machine learning.

Likewise, operations groups are playing a huge role in the deployment of new, more efficient protocols for the web, like SPDY. Operations is involved, more than ever, in tuning the performance of operating systems and servers (even ones that aren’t under our physical control); a lot of our “best practices” for TCP tuning were developed in the days of ISDN and 56 Kbps analog modems, and haven’t been adapted to the reality of Gigabit Ethernet, OC-48 fiber, and their descendants. Operations groups are responsible for figuring out how to use these technologies (and their successors) effectively. We’re only beginning to digest IPv6 and the changes it implies for network infrastructure. And, while I’ve written a lot about building resilience into applications, so far we’ve only taken baby steps. There’s a lot there that we still don’t know. Operations groups have been leaders in taking best practices from older disciplines (control systems theory, manufacturing, medicine) and integrating them into software development.

And what about NoOps? Ultimately, it’s a bad name, but the name doesn’t really matter. A group practicing “NoOps” successfully hasn’t banished operations. It’s just moved operations elsewhere and called it something else. Whether a poorly chosen name helps or hinders progress remains to be seen, but operations won’t go away; it will evolve to meet the challenges of delivering effective, reliable software to customers. Old-style system administrators may indeed be disappearing. But if so, they are being replaced by more sophisticated operations experts who work closely with development teams to get continuous deployment right; to build highly distributed systems that are resilient; and yes, to answer the pagers in the middle of the night when EBS goes down. DevOps.

Related:

Adrian Cockcroft's Blog: Ops, DevOps and PaaS (NoOps) at Netflix

March 19, 2012 There has been a sometimes heated discussion on twitter about the term NoOps recently, and I've been quoted extensively as saying that NoOps is the way developers work at Netflix. However, there are teams at Netflix that do traditional Operations, and teams that do DevOps as well. To try and clarify things I need to explain the history and current practices at Netflix in chunks of more than 140 characters at a time.

When I joined Netflix about five years ago, I managed a development team, building parts of the web site. We also had an operations team who ran the systems in the single datacenter that we deployed our code to. The systems were high end IBM P-series virtualized machines with storage on a virtualized Storage Area Network. The idea was that this was reliable hardware with great operational flexibility so that developers could assume low failure rates and concentrate on building features. In reality we had the usual complaints about how long it took to get new capacity, the lack of consistency across supposedly identical systems, and failures in Oracle, in the SAN and the networks, that took the site down too often for too long.

At that time we had just launched the streaming service, and it was still an experiment, with little content and no TV device support. As we grew streaming over the next few years, we saw that we needed higher availability and more capacity, so we added a second datacenter. This project took far longer than initial estimates, and it was clear that deploying capacity at the scale and rates we were going to need as streaming took off was a skill set that we didn't have in-house. We tried bringing in new ops managers, and new engineers, but they were always overwhelmed by the fire fighting needed to keep the current systems running.

Netflix is a developer-oriented culture, from the top down. I sometimes have to remind people that our CEO Reed Hastings was the founder and initial developer of Purify, which anyone developing serious C++ code in the 1990s would have used to find memory leaks and optimize their code. Pure Software merged with Atria and Rational before being swallowed up by IBM. Reed left IBM and formed Netflix. Reed hired a team of very strong software engineers who are now the VPs who run developer engineering for our products. When we were deciding what to do next, Reed was directly involved in deciding that we should move to cloud, and even pushing us to build an aggressively cloud-optimized architecture based on NoSQL. Part of that decision was to outsource the problems of running large-scale infrastructure and building new datacenters to AWS. AWS has far more resources to commit to getting cloud to work and scale, and to building huge datacenters. We could leverage this rather than try to duplicate it at a far smaller scale, with greater certainty of success. So the budget and responsibility for managing AWS and figuring out cloud was given directly to the developer organization, and the ITops organization was left to run its datacenters. In addition, the goal was to keep datacenter capacity flat, while growing the business rapidly by leveraging additional capacity on AWS.

Over the next three years, most of the ITops staff have left and been replaced by a smaller team. Netflix has never had a CIO, but we now have an excellent VP of ITops, Mike Kail (@mdkail), who now runs the datacenters. These still support the DVD shipping functions of Netflix USA, and he also runs corporate IT, which is increasingly moving to SaaS applications like Workday. Mike runs a fairly conventional ops team and is usually hiring, so there are sysadmin, database, storage and network admin positions. The datacenter footprint hasn't increased since 2009, although there have been technology updates, and the overall size is on the order of a thousand systems.

As the developer organization started to figure out cloud technologies and build a platform to support running Netflix on AWS, we transferred a few ITops staff into a developer team that formed the core of our DevOps function. They build the Linux based base AMI (Amazon Machine Image) and after a long discussion we decided to leverage developer oriented tools such as Perforce for version control, Ivy for dependencies, Jenkins to automate the build process, Artifactory as the binary repository and to construct a "bakery" that produces complete AMIs that contain all the code for a service. Along with AWS Autoscale Groups this ensured that every instance of a service would be totally identical. Notice that we didn't use the typical DevOps tools Puppet or Chef to create builds at runtime. This is largely because the people making decisions are development managers, who have been burned repeatedly by configuration bugs in systems that were supposed to be identical.
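The bakery pattern itself is straightforward to sketch, though Netflix's actual bakery (later open sourced as Aminator) is far more elaborate. In outline, against the AWS API via boto3, with placeholder ids and the install step elided:

import boto3

ec2 = boto3.client("ec2")
BASE_AMI = "ami-0123456789abcdef0"  # placeholder: the common base image

def bake(service, version):
    """Install once, snapshot as an AMI, and let the autoscaler clone it."""
    inst = ec2.run_instances(ImageId=BASE_AMI, InstanceType="m5.large",
                             MinCount=1, MaxCount=1)
    builder = inst["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[builder])

    # ... install step elided: push the service's rpm or tarball via SSH ...

    # Snapshot the configured builder as a complete, immutable image.
    image = ec2.create_image(InstanceId=builder, Name=f"{service}-{version}")
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
    ec2.terminate_instances(InstanceIds=[builder])  # builder is disposable
    return image["ImageId"]

Because every instance boots from the finished image, there is nothing left to configure at boot time — which is exactly the property the development managers were after.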

By 2012 the cloud capacity had grown to on the order of 10,000 instances, ten times the capacity of the datacenter, running in nine AWS Availability Zones (effectively separate datacenters) on the US East and West coasts, and in Europe. A handful of DevOps engineers working for Carl Quinn (@cquinn - well known from the Java Posse podcast) are coding and running the build tools and bakery, and updating the base AMI from time to time. Several hundred development engineers use these tools to build code, run it in a test account in AWS, then deploy it to production themselves. They never have to have a meeting with ITops, or file a ticket asking someone from ITops to make a change to a production system, or request extra capacity in advance. They use a web based portal to deploy hundreds of new instances running their new code alongside the old code, put one "canary" instance into traffic, and if it looks good the developer flips all the traffic to the new code. If there are any problems they flip the traffic back to the previous version (in seconds), and if it's all running fine, some time later the old instances are automatically removed. This is part of what we call NoOps. The developers used to spend hours a week in meetings with Ops discussing what they needed, figuring out capacity forecasts and writing tickets to request changes for the datacenter. Now they spend seconds doing it themselves in the cloud. Code pushes to the datacenter are rigidly scheduled every two weeks, with emergency pushes in between to fix bugs. Pushes to the cloud are as frequent as each team of developers needs them to be; incremental agile updates several times a week are common, and some teams are working towards several updates a day. Other teams and more mature services update every few weeks or months. There is no central control; the teams are responsible for figuring out their own dependencies and managing the AWS security groups that restrict who can talk to whom.
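A hedged sketch of that canary flow, written against a modern AWS load balancer rather than Netflix's own deployment portal; the target group ARN, instance ids, thresholds, and the error_rate() probe are all placeholders for whatever you actually run:

import time
import boto3

elb = boto3.client("elbv2")
TARGET_GROUP = "arn:aws:elasticloadbalancing:...:targetgroup/web/abc"  # placeholder

def error_rate():
    return 0.0  # stand-in for a real metrics query (CloudWatch, APM, ...)

def canary_push(old_instances, new_instances):
    """Put one new instance in traffic; flip everything or roll back."""
    canary = new_instances[0]
    elb.register_targets(TargetGroupArn=TARGET_GROUP, Targets=[{"Id": canary}])
    time.sleep(300)                     # let the canary take real traffic
    if error_rate() > 0.01:             # canary misbehaves: back out in seconds
        elb.deregister_targets(TargetGroupArn=TARGET_GROUP,
                               Targets=[{"Id": canary}])
        return
    # Canary looks good: flip all traffic to the new code, retire the old.
    elb.register_targets(TargetGroupArn=TARGET_GROUP,
                         Targets=[{"Id": i} for i in new_instances[1:]])
    elb.deregister_targets(TargetGroupArn=TARGET_GROUP,
                           Targets=[{"Id": i} for i in old_instances])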

Automated deployment is part of the normal process of running in the cloud. The other big issue is what happens if something breaks. Netflix ITops always ran a Network Operations Center (NOC) which was staffed 24x7 with system administrators. They were familiar with the datacenter systems, but had no experience with cloud. If there was a problem, they would start and run a conference call, and get the right people on the call to diagnose and fix the issue. As the Netflix web site and streaming functionality moved to the cloud it became clear that we needed a cloud operations reliability engineering (CORE) team, and that it would be part of the development organization. The CORE team was lucky enough to get Jeremy Edberg (@jedberg - well known from running Reddit) as its initial lead engineer, and also picked up some of the 24x7 shift sysadmins from the original NOC. The CORE team is still staffing up, looking for the Site Reliability Engineer skill set, and is the second group of DevOps engineers within Netflix. There is a strong emphasis on building tools to make as much of their process as possible go away; for example, they have no run-books - they develop code instead.

To get themselves out of the loop, the CORE team has built an alert processing gateway. It collects alerts from several different systems, does filtering, has quenching and routing controls (that developers can configure), and automatically routes alerts either to the PagerDuty system (a SaaS application service that manages on-call calendars, escalation and alert life cycles) or to a developer team email address. Every developer is responsible for running what they wrote, and the team members take turns to be on call in the PagerDuty rota. Some teams never seem to get calls, and others are more often on the critical path. During a major production outage conference call, the CORE team never makes changes to production applications; they always call a developer to make the change. The alerts mostly refer to business transaction flows (rather than typical operations oriented Linux level issues) and contain deep links to dashboards and developer oriented Application Performance Management tools like AppDynamics, which let developers quickly see where the problem is at the Java method level and what to fix.
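In outline, such a gateway is a routing table plus a suppression list. The sketch below is illustrative, not Netflix's code: the teams and quench rules are invented, and the PagerDuty call uses the public Events API v2 endpoint (verify the payload shape against current PagerDuty docs before relying on it):

import smtplib
from email.message import EmailMessage

import requests

ROUTES = {  # hypothetical per-team routing rules, editable by the teams
    "playback": {"via": "pagerduty", "key": "PAGERDUTY-ROUTING-KEY"},
    "billing":  {"via": "email", "to": "billing-dev@example.com"},
}
QUENCHED = {"playback:deploy-in-progress"}  # alerts a team asked to suppress

def route(team, name, summary):
    """Drop quenched alerts; page or mail the owning team for the rest."""
    if f"{team}:{name}" in QUENCHED:
        return
    rule = ROUTES[team]
    if rule["via"] == "pagerduty":
        requests.post("https://events.pagerduty.com/v2/enqueue", json={
            "routing_key": rule["key"],
            "event_action": "trigger",
            "payload": {"summary": summary, "source": name, "severity": "error"},
        }, timeout=10)
    else:
        msg = EmailMessage()
        msg["From"] = "alerts@example.com"   # placeholder sender
        msg["To"], msg["Subject"] = rule["to"], f"[alert] {summary}"
        msg.set_content(summary)
        smtplib.SMTP("localhost").send_message(msg)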

The transition from datacenter to cloud also involved a transition from Oracle, initially to SimpleDB (which AWS runs) and now to Apache Cassandra, which has its own dedicated team. We moved a few Oracle DBAs over from the ITops team and they have become experts in helping developers figure out how to translate their previous experience with relational schemas into Cassandra key spaces and column families. We have a few key development engineers who are working on the Cassandra code itself (an open source Java distributed systems toolkit), adding features that we need, tuning performance and testing new versions. We have three key open source projects from this team available on github.com/Netflix. Astyanax is a client library for Java applications to talk to Cassandra, CassJmeter is a Jmeter plugin for automated benchmarking and regression testing of Cassandra, and Priam provides automated operation of Cassandra including creating, growing and shrinking Cassandra clusters, and performing full and incremental backups and restores. Priam is also written in Java. Finally, we have three DevOps engineers maintaining about 55 Cassandra clusters (including many that span the US and Europe), a total of 600 or so instances. They have developed automation for rolling upgrades to new versions, and for sequencing compaction and repair operations. We are still developing our Cassandra tools and skill sets, and are looking for a manager to lead this critical technology, as well as additional engineers. Individual Cassandra clusters are automatically created by Priam, and it's trivial for a developer to create their own cluster of any size without assistance (NoOps again). We have found that the first attempts to produce schemas for Cassandra use cases tend to cause problems for engineers who are new to the technology, but with some familiarity and assistance from the Cloud Database Engineering team we are starting to develop better common patterns to work to, and are extending the Astyanax client to avoid common problems.
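For readers who want to see what that destination looks like, here is a tiny example using the Python cassandra-driver (Astyanax itself is a Java client): a denormalized, query-shaped table with a partition key and clustering column, in place of normalized relational tables and joins. The schema, keyspace and contact point are invented for illustration:

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])  # placeholder contact point
session = cluster.connect()

# One keyspace, one query-shaped table: rows cluster under the partition
# key, replacing what would have been a join in the relational schema.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo WITH replication =
        {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.viewing_history (
        customer_id uuid,
        watched_at  timestamp,
        title       text,
        PRIMARY KEY (customer_id, watched_at)
    )
""")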

In summary, Netflix still does Ops to run its datacenter DVD business. We have a small number of DevOps engineers embedded in the development organization who are building and extending automation for our PaaS, and we have hundreds of developers using NoOps to get their code and datastores deployed in our PaaS and to get notified directly when something goes wrong. We have built tooling that removes many of the operations tasks completely from the developer, and which makes the remaining tasks quick and self service. There is no ops organization involved in running our cloud, no need for the developers to interact with ops people to get things done, and less time spent actually doing ops tasks than developers would spend explaining what needed to be done to someone else. I think that's different from the way most DevOps places run, but it's similar to other PaaS environments, so it needs its own name, NoOps. [Update: the DevOps community argues that although it's different, it's really just a more advanced end state for DevOps, so let's just call it PaaS for now, and work on a better definition of DevOps].

John Allspaw 12:20 PM

Looks like your comments can't be more than 4,096 chars on this blog. :)

So here's my comment:
https://gist.github.com/2140086



Adrian Cockcroft 1:51 PM

    Thanks John. I agree with some of what you point out. Netflix in effect over-reacted to a dysfunctional ops organization. I think there are several other organizations who would recognize our situation, and would also find a need to over-react to make the solution stick.

    Your definition of DevOps seems far broader than the descriptions and definitions I can find by googling or looking on Wikipedia. I don't recognize what we do in those definitions - since they are so focused on the relationship between a Dev org and an Ops org, so someone should post an updated definition to Wikipedia or devops.com. Until then maybe I'll just call it NetflOps or botchagalops.

Edward Capriolo 12:28 PM

I have a loaded NoOps question for you :) I am very interested in understanding how he-said-she-said issues get solved in a decentralized environment. For example, I know Netflix uses horizontally scalable REST layers as integration points.
Suppose one team/application is having an intermittent problem/bug with another team/application. Team 1 opens an issue. Team 2 reads it, investigates, and closes the issue as not a problem. Team 1 double-checks and reasserts that the issue is on Team 2's side.

In a decentralized environment, how is this roadblock cleared?

As an ops person I spend a good deal of time chasing down problems very external to me. I accept this as an ops person. Since developers are pressed into ops, how much time will a developer spend on another team's reported problems? Will Team 1 forgo its own scrum deadlines this week because Team 2 sucks up all their time reporting bogus problems?

Adrian Cockcroft 1:44 PM

We aren't decentralized. So in the scenario you mention everyone gets in a room and figures it out, or we just end up with a long email thread if it's less serious. APM tools help pinpoint what is going on at the request level down to Java code. Once we have root cause, someone files a Jira to fix the issue. There is a manager rotation for centrally prioritizing and coordinating response to major outages. (I'm on duty this week; it comes up every few months.) We have a few people who have "mad wireshark skills" to debug network layer problems, but that's infrequent and I'm hopeful that boundary.com will come up with better tools in this space.

    We don't follow a rigid methodology or fixed release deadlines, we ship code frequently enough that delays aren't usually a big issue, and we have a culture of responsible adults so we communicate and respect each others needs across teams. The infrequent large coordinating events like a new country launch are dealt with by picking one manager to own the overall big picture and lead the coordination.



@sixfootdad 2:44 PM

      Adrian - Great article! I'm always fascinated to read & hear how folks have solved problems that plague a lot of IT organizations.

      I've got a question about something in your reply above: "we have a culture of responsible adults so we communicate and respect each others needs across teams".
I've found that, time and time again, the most difficult thing about organizational change is the people. How does one go about hiring "responsible adults"? I know that it might sound like a silly or flippant question, but seriously -- I've lost count of how many times grown-up folks act like childish, selfish, spoiled brats.

Adrian Cockcroft 4:50 PM

      My views on culture may not be much help - read http://perfcap.blogspot.com/2011/12/how-netflix-gets-out-of-way-of.html to see my explanation of how Netflix does things differently.

      Culture is very hard to create or modify but easy to destroy. This is because everyone has to buy into it for it to be effective, and then every manager has to hire only people who are compatible with the culture, and also get rid of people who turn out not to fit in, even if they are doing good work.

      So the short answer is start a new company from scratch with the culture you want, and pay a lot of attention to who you hire. I don't think it is possible to do a culture shift if there are more than a roomful of people involved.

Paul Kelly 1:37 AM

Getting a "culture of responsible adults" together is partly down to "culture" itself - although it helps to have mature, sensible individuals, fostering that also means avoiding finger-pointing and blame.

    The more defensive people are made to feel, the more likely they are to start throwing tantrums when under pressure. A culture where you can put your hand up and say: "I got that wrong, how can we put it right?" gets better results in the long term than one where you might be fired or disciplined for a genuine mistake.

    I always wondered how evil geniuses like Ernst Blofeld recruit when getting it wrong means you might end up in the shark tank...



Adrian Cockcroft 4:53 PM

      Yes, incident reviews that don't involve blame and finger-pointing are also key. Making the same mistake several times, trying to hide your mistakes, or clear lapses of judgement can't be tolerated though.

Jason 6:02 PM

Great article, Adrian. I have a question: is a consecutive IP space important? Since AWS EIP doesn't guarantee consecutive addresses, I've wondered if this matters to app developers.

Anything that could have been done by subnet is out the window - for example, if you wanted to do port sweeps of your network blocks for an audit, perform penetration testing, or parse logs by IPs. I suppose this could be done programmatically, but I was curious about your experience. Does it matter?



Adrian Cockcroft 11:16 AM

      If the network topology matters you can use VPC to manage it. Also if you are a big enough customer to have an enterprise level support contract with AWS and use a lot of EIPs it is possible to get them allocated in contiguous blocks.

manul 9:10 AM

Some thoughts from me on the topic of NoOps and DevOps, the future of operations, and the need for operations: http://imansson.wordpress.com/2012/03/21/35/



Adrian Cockcroft 11:25 AM

      I added a comment to your blog. Thanks!

Sudsy 11:44 AM

    I'm curious about your statement "Notice that we didn't use the typical DevOps tools Puppet or Chef to create builds at runtime. This is largely because the people making decisions are development managers, who have been burned repeatedly by configuration bugs in systems that were supposed to be identical."

If systems are built from the same Puppet manifests, what kind of configuration bugs can occur? Also, how is the alternative method you chose any less likely to cause the same problems?



Adrian Cockcroft 12:09 PM

Puppet is overkill for what we end up doing. We have a base AMI that is always identical; we install an rpm on it or untar a file once, rebake the AMI, and we are done. That AMI is then turned into running systems using the AWS autoscaler. It's more reliable because there are fewer things to go wrong at boot time - no dependencies on repos or orchestration tools - and we know that the install completely succeeded, the same way on every instance, before the instance booted.

TimFraser 1:18 PM

Adrian, last time we talked you mentioned that you were not yet 100% transitioned to AWS - it looks like this has since been achieved, with the exception of the core DVD business.
Since many companies are talking about hybrid cloud as a target state, and Netflix went through the transition from private/managed Ops to public-AWS/NoOps, can you talk about the interim state you were in, and which development and operations use cases you optimized around during the transition - that is, what hybrid models and principles did you follow in moving from Ops to NoOps? Did you keep the teams and tools separate, or did you create a transition strategy that let you hedge your bets on the AWS "all-in" strategy and come back if needed?

What Ops governance and dev tooling approach did you take in that state, specifically around cloud abstraction layers to ease access management, support, tooling, and elasticity needs? Can you shed some light on the thinking and approach you took while you were in the middle of the transition?

Also, can you comment on how much you govern and drive the development and deployment approach, so as to unify the continuous integration and continuous deployment tools and reduce the chaos in this space?
Tim Fraser

Adrian Cockcroft 12:15 PM

      If you look at the presentations on slideshare.net/adrianco I have discussed in some detail what the transition strategy and tools looked like. We continued to run the old code in the datacenter as we gradually moved functionality away from it. The old DC practices were left intact as we moved developers to the cloud one group at a time.

JonColes 10:17 PM

      Hi Adrian,

It seems to me that rather than NoOps, this is an outsourcing of ops. I assume that the issues you had with Ops came from the need to control cost, scale, etc.? If not, please clarify.

If it was, how has moving to AWS solved your problems? Is it that AWS can scale more readily, based on experience to date? Was cost control an issue before, and is it now, in terms of Ops costs?

Given that NoOps = outsourcing, I can see that managing the relationship with the service provider becomes vital for you?

      Thanks in advance, interesting stuff!
      Jonathan Coles

DevOps Stackify 8:43 PM

Dev and admin teams struggle these days to keep up with agile development. DevOps helps by breaking down some of the walls, but one of the biggest challenges is getting the entire development team involved, not just the one or two people who help do deployments. The entire team needs visibility into the production server environments to help support and troubleshoot applications.

