
A Slightly Skeptical View on Unix Configuration Management Tools


I think that the real "higher ground" in security will be won (if it ever is) in two strongly-related areas: software quality (process) and (automated) configuration management.

Tom Perrine
San Diego Supercomputer Center

The system administration area is far from being a paradise. There is a fair amount of backstabbing, along with variants of the classic tale of too many cooks in the kitchen. The “cascade of interventions” that can happen when something goes wrong often makes the situation worse. When one administrator makes a disastrous change and then denies having made it, it is easy to get too emotional. But it is better to get technical ;-).

Another side of the same problem is how to keep track of all the small changes you make to the configuration of a machine, so that you know a) what you did, b) why you did it, and c) how to do it again. If a group of people administer the same server, this becomes a variant of the first problem, but the stress here is not on resolving some incident but on understanding how the system got to its current configuration state and why.

Configuration management tools are one answer to this "many cooks in one kitchen" problem of system administration. Tracking changes in server configuration files can be critical to understanding problems, including security problems, and often substantially simplifies finding the root cause and repairing it. There are many open source and commercial tools. For a list of open source tools see Comparison of open source configuration management software - Wikipedia

Unix configuration management is far from being a new topic. On a basic level you just need to understand who makes changes to a particular system, and when. The simplest tools for this are so-called baseliners (a minimal sketch is given right after the quote below). Tasks that go above and beyond this include synchronizing configuration files across similar systems. This is a more difficult but still attainable goal. Going beyond that is more problematic. Few of the proposed, more powerful configuration management systems succeed in lessening the load on the sysadmin and providing a positive return on investment. In other words, for many popular configuration management systems the return on investment is either negative or close to zero. For example, I do not believe that cfengine is the right solution to the problem. And I am not alone (see Introduction – etch):

In either cfengine or puppet you have a maze of classes, controls, modules, resources, etc. Where you store your configuration within your cfengine or puppet tree has no obvious correlation to where it ends up on your clients. You can and will spend hours, quite possibly days, studying manuals and searching the web just to get the simplest initial setup.

... cfengine doesn't actually support doing much that is useful. So you end up using it as a framework for a bunch of little shell scripts you hack together. Puppet is somewhat better, but still lacking.

I would say more: cfengine lacks any significant ideas that could lessen the admin burden. It is just a "feel good" solution in search of a useful application domain. This "poverty of ideas" is the real architectural problem, and no amount of enhancements can change that.
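In contrast to such heavyweight systems, a baseliner (mentioned above) can be as trivial as a cron-driven shell script that snapshots the files you care about and mails the diff against the previous snapshot. Below is a minimal sketch; the snapshot root, file list and mail command are arbitrary assumptions for illustration, not a recommendation.

#!/bin/sh
# Minimal "baseliner" sketch: snapshot selected config files once a day
# and mail the differences against the previous snapshot to root.
# The snapshot root and the file list are examples only.
BASEDIR=/var/baseline
FILES="/etc/passwd /etc/group /etc/fstab /etc/ssh/sshd_config"
TODAY=$BASEDIR/snapshot.`date +%Y%m%d`
LAST=$BASEDIR/snapshot.last

mkdir -p "$TODAY"
for f in $FILES; do
  mkdir -p "$TODAY`dirname $f`"
  cp -p "$f" "$TODAY$f"
done

# Mail the differences only if a previous snapshot exists and differs
if [ -d "$LAST" ] && ! diff -ru "$LAST" "$TODAY" > /tmp/baseline.$$; then
  mail -s "Config changes on `hostname`" root < /tmp/baseline.$$
fi
rm -f /tmp/baseline.$$ "$LAST"
ln -s "$TODAY" "$LAST"

Run daily from cron, this gives you the "who changed what and when" picture with zero infrastructure; the Subversion-based recipe near the end of this page is a more complete variant of the same idea.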

Wikipedia defines configuration management in the following way:

In information technology and telecommunications, the term configuration management or configuration control has the following meanings:

  1. The management of security features and assurances through control of changes made to hardware, software, firmware, documentation, test, test fixtures and test documentation of an automated information system, throughout the development and operational life of a system. Source Code Management or revision control is part of this.
  2. The control of changes--including the recording thereof--that are made to the hardware, software, firmware, and documentation throughout the system lifecycle.
  3. The control and adaptation of the evolution of complex systems. It is the discipline of keeping evolving software products under control, and thus contributes to satisfying quality and delay constraints. Software configuration management (or SCM) can be divided into two areas. The first (and older) area of SCM concerns the storage of the entities produced during the software development project, sometimes referred to as component repository management. The second area concerns the activities performed for the production and/or change of these entities; the term engineering support is often used to refer to this second area.
  4. After establishing a configuration, such as that of a telecommunications or computer system, the evaluating and approving changes to the configuration and to the interrelationships among system components.
  5. In distributed-queue dual-bus (DQDB) networks, the function that ensures the resources of all nodes of a DQDB network are configured into a correct dual-bus topology. The functions that are managed include the head of bus, external timing source, and default slot generator functions.

Software for automating Unix and application configuration management reduces costs and increases the productivity of system administrators. According to Forrester Research, 44% of downtime is due to configuration errors. Much of this downtime is avoidable. When you manage a couple of dozen systems you can no longer treat each system as an individual box, and errors like making changes on the wrong box, as avoidable as they sound, are a real problem. An even nastier situation arises when you make changes on the right box but with a wrong set of assumptions about it, because between changes you forgot some important facts pertaining to the box. That's why keeping a journal is a very important and underappreciated sysadmin tool. And reading your journal entries pertaining to a particular system before making any important changes can save you from a lot of trouble. A bug tracking system can be used as a personal journal.
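The journal does not need any special tooling. A tiny shell helper that appends timestamped, per-host entries to a plain text file is enough; here is a hedged sketch where the function name and log location are arbitrary choices:

# Sketch of a one-line journal helper (function name and log path are examples).
# Usage: jrnl "raised somaxconn on web3 before the load test"
jrnl() {
  echo "`date '+%Y-%m-%d %H:%M'` `hostname` ${USER:-root}: $*" >> /var/adm/journal.log
}

Grepping that file by hostname or date before touching a box answers most of the "how did the system get into this state" questions.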

Using corporate bullshit we can state that:

Unmanaged configuration changes impact an organization’s ability to prevent outages, understand the impact of planned changes, and especially in today’s regulatory environment, adhere to corporate and government policies. Knowing who changed what and when is vital to complying with today’s security requirements.

Tom Perrine of the San Diego Supercomputer Center recently offered this guidance to an Internet newsgroup aimed at university security administrators. It offers sage advice for anyone managing and securing networks of heterogeneous UNIX systems. I actually do not share his excitement over cfengine -- IMHO a badly architected agent-based system. Also, in a way cfengine is a misguided attempt to reinvent TCL by a person who has no real talent for language design. As happens in such cases, such attempts lead to predictably bad results.

Let me take a small step back and philosophize from a wider perspective.

The local Cray folks have a saying: "Wanna-bees worry about GigaFLOPS, and nanoseconds; real computer companies worry about *cooling*..."

I think that the real "higher ground" in security will be won (if it ever is) in two strongly-related areas: software quality (process) and (automated) configuration management.

Let's face it, the quality of most commercial software is pretty pitiful at worst, and sub-standard at best. As an industry, we have pretty much ignored 40 years of software process research and lessons learned. The first paper on what we now call "buffer overflows" was published in 1965. This paper and those related to it were influential in the design of Multics, portions of the original UNIX system-call interface, and security kernels (they called this problem "insufficient argument validation" in those papers), and they also influenced language design and the move towards higher-level languages.

We have ignored all the "formal methods", strong specification, structured design and adequate testing strategies. We have forgotten (or never learned) all the lessons of Mythical Man-Month, Peopleware, The Psychology of Computer Programming, Software Tools, and many other books, methodologies and studies. As in the security arena, we have most of the technology and lessons figured out, we just don't apply them :-(

Configuration management is related (a part of any proper development process), but we often fail to use it in non-software-development areas, even if we do use it for software. There is no reason for a person to *ever* ask "What version of *anything*, is this?" and not get a good answer. There is *no* reason for computers to have "version drift" where patches or software are inconsistent. Again, we have the technology, whether it is cfengine, SMS, or vendor-supplied or home-grown scripts, it is just not being applied.

So why are these basic technologies not being applied? The answer is short-term thinking, similar to that which drives the quarterly earnings focus of most US companies.

Let's face it, it initially takes longer to establish a proper software development (or any other) process. You have a steeper, longer initial spending/development curve, and pay more of the costs "up front", and dramatically lower costs in the maintenance and update phase. (You also have fewer bugs to fix, pushing the support costs even lower, but I digress.)

... ... ...

So I guess I believe that "wanna-bees" worry about exploits and patches; real security people are more concerned with process and management..."

For more of my heretical views, see "Security as Infrastructure: Are you shooting rabbits, or building fences", a USENIX LISA Invited Talk.

http://www.sdsc.edu/~tep/Presentations/1998.LISA.Security.Infrastructure/index.htm

Sorry for the rant, but this has been a hot-button for several years, as you may have noticed.

Tom Perrine
San Diego Supercomputer Center

The key idea in configuration management is not to reinvent the wheel. There are three major components of any configuration system:

  1. Repository subsystem. A regular hierarchical filesystem structure with one node per server is adequate and can be enhanced by using Subversion or other similar systems.
  2. Distribution subsystem. A lot of widely-used protocols permit some kind of "poor man's configuration management" functionality. For example, ssh can securely retrieve and transmit a file to any server running the client (if configured properly). NFS serving a local repository can do the same. Unix administrators are not even slightly interested in using some half-baked new protocol for communication between the master server and the clients.
  3. Configuration description language. The latter should support such concepts as dependencies and inheritance.

So far there is some progress in the first two areas. For example, systems like Subversion can be adapted to keeping configuration files under version control. Also, protocols like SSH and rsync provide a simple way to move configuration changes from one box to another.
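To illustrate how far these standard tools already get you, here is a hedged sketch that keeps per-host configuration trees in a Subversion working copy and pushes each host's tree out with rsync over ssh. The working copy layout, host list and paths are assumptions made up for the example:

#!/bin/sh
# Sketch: distribute per-host config trees from a Subversion working copy.
# Layout assumption: $WC/<hostname>/etc/... mirrors the target's /etc.
WC=/srv/config/wc
HOSTS="web1 web2 db1"

svn update "$WC" || exit 1

for h in $HOSTS; do
  # add --dry-run to the rsync call first to review what would change
  rsync -av --exclude='.svn' "$WC/$h/etc/" "root@$h:/etc/"
done

This is essentially the "poor man's" distribution subsystem from the list above; what it does not give you is the third component, a language that describes dependencies between changes.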

Progress in creating a configuration language suitable for practical use is currently very limited. I would like to repeat the apt characterization of the current state of the art in this area from the introduction to etch:

In either cfengine or puppet you have a maze of classes, controls, modules, resources, etc. Where you store your configuration within your cfengine or puppet tree has no obvious correlation to where it ends up on your clients. You can and will spend hours, quite possibly days, studying manuals and searching the web just to get the simplest initial setup.

... cfengine doesn't actually support doing much that is useful. So you end up using it as a framework for a bunch of little shell scripts you hack together. Puppet is somewhat better, but still lacking.


Old News ;-)

[Oct 28, 2011] synctool

Written in Python
freshmeat.net

synctool is a cluster administration tool that keeps configuration files synchronized across all nodes in a cluster. Nodes may be part of a logical group or class, in which case they need a particular subset of configuration files. synctool can restart daemons when needed, if their relevant configuration files have been changed. synctool can also be used to do patch management or other system administrative tasks.

[Feb 03, 2011] SPAM

Implemented in Perl
freshmeat.net

SPAM is a tool that assists in the management of system configuration and compliance. SPAM tracks, reports on, and compares system configurations across AIX systems.

Enterprise configuration tools

[Aug 24, 2010] Etch freshmeat.net

Ruby based...

Etch is a tool for system configuration management. It manages the configuration files of the operating system and core applications. It is easy for a professional system administrator to start using, yet is scalable to large and complex environments.

pacha - Project Hosting on Google Code

Basically, any running program that uses a configuration file can use Pacha to safeguard the changes made. Easily revert from mistakes in configuration (since it is already versioned via Mercurial) and keep track of what changed at what time.

As long as you have Python, Mercurial and SSH installed, you are good to go!

[Aug 03, 2010] Puppet vs Chef BHUGA WOOGA!

I spent a while going over recipes, and comparing them to Puppet. For example, here's some code to manage sudo for Chef. The Chef code was written by Chef's authors; the Puppet code was written by myself. The Chef code is spread across 3 files.
# recipes/default.rb:
package "sudo" do
  action :upgrade
end
 
template "/etc/sudoers" do
  source "sudoers.erb"
  mode 0440
  owner "root"
  group "root"
  variables(
    :sudoers_groups => node[:authorization][:sudo][:groups], 
    :sudoers_users => node[:authorization][:sudo][:users]
  )
end
# attributes.rb:
authorization Mash.new unless attribute?("authorization")
 
authorization[:sudo] = Mash.new unless authorization.has_key?(:sudo)
 
unless authorization[:sudo].has_key?(:groups)
  authorization[:sudo][:groups] = Array.new 
end
 
unless authorization[:sudo].has_key?(:users)
  authorization[:sudo][:users] = Array.new
end
# metadata.rb:
maintainer        "Opscode, Inc."
maintainer_email  "cookbooks@opscode.com"
license           "Apache 2.0"
description       "Installs and configures sudo"
version           "0.7"
 
attribute "authorization",
  :display_name => "Authorization",
  :description => "Hash of Authorization attributes",
  :type => "hash"
 
attribute "authorization/sudoers",
  :display_name => "Authorization Sudoers",
  :description => "Hash of Authorization/Sudoers attributes",
  :type => "hash"
 
attribute "authorization/sudoers/users",
  :display_name => "Sudo Users",
  :description => "Users who are allowed sudo ALL",
  :type => "array",
  :default => ""
 
attribute "authorization/sudoers/groups",
  :display_name => "Sudo Groups",
  :description => "Groups who are allowed sudo ALL",
  :type => "array",
  :default => ""

Here's more or less the same thing for Puppet:

class sudo {

  package { ["sudo","audit-libs"]: ensure => latest }

  file { "/etc/sudoers":
    owner   => root,
    group   => root,
    mode    => 440,
    content => template("sudo/files/sudoers.erb"),
    require => Package["sudo"],
  }
}

Both Chef and Puppet then take this information and output it through an ERB template, which is an exercise for the reader, since it's basically the same for both.

There's a few things worth noting here. First of all, Puppet has zero metadata available. If you want to set sudo-able groups, you need to know those variable names ahead of time and set them to what you want. Both your template and whatever code sets your sudo-able groups must magically 'just know' this information. Since the Puppet DSL is not even Ruby, you have *zero* ability to perform any kind of metadata analysis on these attributes in order to make code more generic.

Chef gives you complete metadata about the variables it's using. This is powerful and indeed critical in my imagined use domains for Chef (keep reading). That metadata comes at a cost of a lot of boilerplate code, though. Chef comes with some rake tasks to generate some scaffolding. I'm always uncomfortable with scaffolding like this; I think this kind of code generation is a bad way to do metaprogramming.

Chef spreads this information across 3 files, named a particular way. Puppet has a similar scheme of magically named files, but it's basically just a folder structure, a file called init.pp, and templates/source files. For a fairly simple task, Chef requires you to know a folder structure and 3 file names, and which data goes in which files. This is congruent with the Ruby world's (perhaps specifically the rails/merb world's?) general practice of 'convention not configuration'. This is in addition to all of the 'you just have to know' parts of the Chef system which are taken from Merb, such as where models and controllers live, though you would not need to edit those save for pretty advanced cases.

Lastly, Chef provides you with an actual data structure that is fed to the sudoers template. Puppet simply uses available dynamically-scoped variables in its template files. This is *awful*, and a big loss for puppet. I administrate Zimbra servers, for example, which require extra content in sudoers. I cannot add this to the zimbra module unless the zimbra module were to be the one including the sudo module. There are solutions to this, of course, but this is a really, really simple use case and we're already shaving yaks. Chef's method is undeniably superior.

All 3 of these are part of the same core difference between the two: Puppet is an application, and Chef is a part of one.

Chef is a library to be used in a combined system of resource management in which the application itself is aware of the hardware it's using. This allows certain kinds of applications to exist on certain kinds of platforms (particularly EC2) that simply couldn't before--an application using this system can declare a database just as well as it can declare an integer. That's fundamentally powerful, awesome, amazing.

Puppet is an application which has an enormous built-in library of control methods for systems. The puppet package manager, for example, supports multiple kinds of *nix, Solaris, HPUX, and so forth. Chef cookbooks can certainly be written to do this, but I imagine by the time you supported everything puppet does I don't think Chef would get a smiley-face sticker for being tiny and pure with extra ruby sauce. Puppet's not a fundamental change, it's just a really nice workhorse.

I picked puppet for the project I'm working on now. It made sense for a lot of reasons. Probably first and foremost, there are 3 other sysadmins working with me, some split between this project and others. None of us are ruby programmers. We don't write rake tasks like we configure Apache, we don't want to explain to new hires the difference between a symbol or a variable, or where the default Merb configuration files live, or 100 other ruby-isms. Meanwhile, most puppet config, silly folder structure aside, is not any harder to configure than something like Nagios. I think it would be a mistake for an IT shop with a lot of existing systems running various old-fashioned stateful applications like databases or LDAP to suddenly declare that sysadmins need to be Merb programmers.

Puppet's much deeper out-of-the-box support for a lot of systems provides the kind of right-now real improvements that a lot of IT shops and random contractors desperately need. System administration is depressingly rarely about being elegant or 'the best' and much more frequently about being repeatable and reliable. It's just the nature of the business--if the systems ran themselves, there would be no administrators. Having a bunch of non-programmers become not just programmers but programmers specializing in a tiny subset of the ruby world is a lot of yaks to shave for an organization. This is not some abstract jab at my colleagues: I am most certainly not a Merb programmer, and even if I were, I have too many database copies to make, SQL queries to run, mysterious performance problems to diagnose and deployments to make to give this kind of development the attention it requires. How many system administrators do you know that use the kind of TDD that Merb can provide for their bash scripts? What would make one think that's going to happen with Chef?

The other big reason I picked Puppet is that it's got a sizable mailing list, a friendly and frequently used google group for help, and remains in active development after a couple of years. I don't think Reductive Labs is going away, and if it did, there have been a lot of contributors to the code base over those 2 years.

It's worth noting, though, that the Chef guys come with an impressive set of resumes. It seems to be somehow tied in with Engine Yard (several presentations about Chef include Ezra Zygmuntowicz as a speaker). I worry, though, that they are working the typical valley business model, namely to explode about a year after launch. Chef was released about 8 months before I write this. The organization I am installing Puppet for does not have the Ruby talent base required to ensure that they can fix bugs as required in the long term if Opscode goes away, or if they get hired on to Engine Yard and they make Chef into the kind of competitive differentiation secret it could be.

Chef currently manages the EC2 version of Engine Yard, and that's just the kind of thing I cannot imagine using puppet for: interact with a giant ruby application to manage itself. If you have a lot of systems joining and leaving the resource pool as required, Chef's ability to add nodes dynamically is going to save you. The ability to define resources programmatically is very powerful--one could easily imagine reducing the number of web server threads if a system's CPU use goes over a certain threshold, for example. I would not try that in puppet! But note that this is an application built from scratch to expect such a command and control system to exist. If you're just managing a bunch of LAMP stacks and samba servers, this is more power than you need. One of the Opscode founders has some slides that talk about this kind of model.

And Chef is powerful for that model, sure, but is that even the model you want for your applications? Applications should not have to worry about the hardware they use. Making an application's own hardware use visible to itself encourages programmers to spend time thinking about issues they should be trying their hardest to ignore. A better model is App Engine's, where the system just scales forever without developer intervention. Even Azure's service configuration schema model is better, in which different application roles (web, proxy, etc) are described as resources and given a dynamic instance count, and transparently scalable data stores are available. The number of 'nodes' in the system is never an issue for either model.

Chef is what you'd use to build that auto-scaling backend. Engine Yard uses it for, well, Engine Yard--scalable rails hosting, transparently sold as a service to folks who can then just blissfully program in rails and never think about Chef. Very few organizations are making that infrastructure, and most of them that are are shaving really big yaks and need to stop and use one of the available clouds.

Meanwhile, a very many organizations are running 6 kinds of *nix to maintain tens of older applications built on the POSIX or LAMP paradigms, or hosting virtual machines running applications made who knows when. For these organizations, Puppet is probably the easiest thing that could work, and thus probably the best option.

I'm sure there are sysadmins out there who think I'm completely wrong, and that you just can't beat the elegance Chef provides. There are a lot of people better than me out there, and I'm sure they have a point. But in my experience, bad system administration happens when sysadmins try and do everything for themselves. For a given situation in system administration, it's highly unlikely a sysadmin can do a better job than an available tool. Puppet's sizable default library is what most organizations need, not the ability to write their own.

And all of the above aside, one thing is clear: there is little excuse for an organization with 3 or more *nix servers not to be using Puppet, Chef, cfengine, or *something*. I would argue that about 80% of the virtualization push is dodging some of the core questions of system administration, making systems movable to new resources indefinitely rather than making their configuration repeatable, but that's a topic for another post. Especially since nobody got this far on this one anyway.

Adam Jacob

Hi John! Thanks for being passionate about my favorite space - configuration management. You do great work, and I know your intent wasn’t necessarily to sow discord - but I wanted to take a moment to comment on a few of your points that I think are either wrong or missing some important context.

1) Large installed base

Chef has somewhere in the neighborhood of ~1500 working installations. It’s true that our early adopters are primarily large web players like Wikia, Fotopedia, and 37signals. We also have a growing number of people integrating Chef directly into their service offering - it’s not just Engine Yard, it’s RightScale and others.

2) Large developer base

According to Ohloh, 39 developers have contributed to Puppet in the last 12 months, and 71 over the project's entire history.

Chef has been open source for a year. We just had our 100th CLA (contributor license agreement, meaning they can contribute code). Over the course of the year, 52 different people have contributed to Chef, including significant functionality (for the record, 5 of them work for Opscode.) We’re incredibly proud of the community of developers who have joined the project in the last year, and the huge amount of quality code they produce.

3) Dedicated Configuration Language

To each their own, man. :) My preference for writing configuration management in a 3GL was born out of frustration with doing the higher order systems integration tasks. By definition, internal DSLs aren’t meant to do that - when they start being broadly applicable, they lose the benefits they gained from domain specificity. For me, the benefit of being able to leverage the full power of a 3GL dramatically outweighs the learning curve, and I think a side-by-side comparison of the two languages shows just how close you can get to never having to leave the comfort of your DSL most of the time.

4) Robust Architecture

Chef is built to scale horizontally like a web application. It’s a service oriented architecture, built around REST and HTTP. Like cfengine, it pushes work to the edges, rather than centralizing it. There are large (multi-thousand node) chef deployments, and larger ones coming. Chef scales just fine.

5) Documentation

It’s true, we’ve been focused pretty intently on refining Chef in tandem with our earlier adopters, and that focus has had an impact on the clarity of our documentation. Rest assured, we’re working on it.

6) Language/Framework Neutral

I’m not sure where this comes from, other than we’ve had great adoption in the Ruby community. People deploy and manage every imaginable software stack with Chef - Java, Perl, Ruby, PHP - it’s all being managed with Chef.

7) Multi-Platform

It’s true that, at release a year ago, Chef didn’t support many platforms. Since then, we’ve been growing that support steadily - all the platforms you list run Chef just fine, with the exception of AIX. We have native packages for Red Hat (community maintained by the always awesome Matthew Kent!) and Ubuntu that ship regularly at every release. As for the Chef Server only running on Ubuntu - that’s just not true.

8) Doesn’t re-invent the wheel

Again, to each their own. I think Chef’s deterministic ordering, ease of integration, wider range of actions, directly re-usable cookbooks, and lots of other things make it quite innovative. I’m pleased to explain it to you over beer, on my dime. :)

9) Dependency Management

While I understand how you can think this would be true, it isn’t. Chef does have dependency management, and a more robust notification system than Puppet. Each resource is declarative and idempotent. Within a recipe, resources are executed in the order they are written - meaning the way you write it is the way it runs. This is frequently the way puppet manifests are written as well. The difference being, there is no need to declare resource-level dependency relationships.

With Chef, you focus on recipe-level dependencies. “Apache should be working before I install Tomcat”. You can ensure that another recipe has been applied at any point, giving you great flexibility, along with a high degree of encapsulation.

One added benefit of the way Chef works is that the system behaves the exact same way, every time, given the same set of inputs. This greatly eases debugging of ordering issues, and results in a system that is, in my opinion, significantly easier to reason about at scale (thousands of resources under management).

10. Big Mindshare

There is a bit of survivor bias happening here. I meet people every day who are starting with, or switching to, Chef. You don’t, because, well - you don’t use Chef.

* Conclusion

Thanks for taking the time to write about Puppet and Chef - I know your heart is in the right place. Next time, come talk to us - we’re pretty accessible guys, and I would be happy to provide feedback and education about how Chef works. I won’t even try and convince you to switch. :)

Best regards,
 Adam

[Aug 03, 2010] Puppet versus Chef 10 reasons why Puppet wins Bitfield Consulting

Puppet, Chef, cfengine, and Bcfg2 are all players in the configuration management space. If you’re looking for Linux automation solutions, or server configuration management tools, the two technologies you’re most likely to come across are Puppet and Opscode Chef. They are broadly similar in architecture and solve the same kinds of problems. Puppet, from Reductive Labs, has been around longer, and has a large user base. Chef, from Opscode, has learned some of the lessons from Puppet’s development, and has a high-profile client: EngineYard.

You have an important choice to make: which system should you invest in? When you build an automated infrastructure, you will likely be working with it for some years. Once your infrastructure is already built, it’s expensive to change technologies: Puppet and Chef deployments are often large-scale, sometimes covering thousands of servers.

Chef vs. Puppet is an ongoing debate, but here are 10 advantages I believe Puppet has over Chef today.

1. Larger installed base

Put simply, almost everyone is using Puppet rather than Chef. While Chef’s web site lists only a handful of companies using it, Puppet’s has over 80 organisations including Google, Red Hat, Siemens, lots of big businesses worldwide, and several major universities including Stanford and Harvard Law School.

This means Puppet is here to stay, and makes Puppet an easier sell. When people hear it’s the same technology Google use, they figure it works. Chef deployments don’t have that advantage (yet). Devops and sysadmins often look to their colleagues and counterparts in other companies for social proof.

2. Larger developer base

Puppet is so widely used that lots of people develop for it. Puppet has many contributors to its core source code, but it has also spawned a variety of support systems and third-party add-ons specifically for Puppet, including Foreman. Popular tools create their own ecosystems.

Chef’s developer base is growing fast, but has some way to go to catch up to Puppet - and its developers are necessarily less experienced at working on it, as it is a much younger project.

3. Choice of configuration languages

The language which Puppet uses to configure servers is designed specifically for the task: it is a domain language optimised for the task of describing and linking resources such as users and files.

Chef uses an extension of the Ruby language. Ruby is a good general-purpose programming language, but it is not designed for configuration management - and learning Ruby is a lot harder than learning Puppet’s language.

Some people think that Chef’s lack of a special-purpose language is an advantage. “You get the power of Ruby for free,” they argue. Unfortunately, there are many things about Ruby which aren’t so intuitive, especially for beginners, and there is a large and complex syntax that has to be mastered.

There is experimental support in Puppet for writing your manifests in a domain language embedded in Ruby just like Chef’s. So perhaps it would be better to say that Puppet gives you the choice of using either its special-purpose language, or the general-purpose power of Ruby. I tend to agree with Chris Siebenmann that the problem with using general-purpose languages for configuration is that they sacrifice clarity for power, and it’s not a good trade.

4. Longer commercial track record

Puppet has been in commercial use for many years, and has been continually refined and improved. It has been deployed into very large infrastructures (5,000+ machines) and the performance and scalability lessons learned from these projects have fed back into Puppet’s development.

Chef is still at an early stage of development. It’s not mature enough for enterprise deployment, in my view. It does not yet support as many operating systems as Puppet, so it may not even be an option in your environment. Chef deployments do exist on multiple platforms, though, so check availability for your OS.

5. Better documentation

Puppet has a large user-maintained wiki with hundreds of pages of documentation and comprehensive references for both the language and its resource types. In addition, it’s actively discussed on several mailing lists and has a very popular IRC channel, so whatever your Puppet problem, it’s easy to find the answer. (If you’re getting started with Puppet, you might like to check out my Puppet tutorial here.)

Chef’s developers have understandably concentrated on getting it working, rather than writing extensive documentation. While there are Chef tutorials, they’re a little sketchy. There are bits and pieces scattered around, but it’s hard to find the piece of information you need.

6. Wider range of use cases

You can use both Chef and Puppet as a deployment tool. The Chef documentation seems largely aimed at users deploying Ruby on Rails applications, particularly in cloud environments - EngineYard is its main user and that’s what they do, and most of the tutorials have a similar focus. Chef’s not limited to Rails, but it’s fair to say it’s a major use case.

In contrast, Puppet is not associated with any particular language or web framework. Its users manage Rails apps, but also PHP applications, Python and Django, Mac desktops, or AIX mainframes running Oracle.

To make it clear, this is not a technical advantage of Puppet, but rather that its community, documentation and usage have a broader base. Whatever you’re trying to manage with Puppet, you’re likely to find that someone else has done the same and can help you.

7. More platform support

Puppet supports multiple platforms. Whether it’s running on OS X or on Solaris, Puppet knows the right package manager to use and the right commands to create resources. The Puppet server can run on any platform which supports Ruby, and it can run on relatively old and out-of-date OS and Ruby versions (an important consideration in many enterprise environments, which tend to be conservative about upgrading software).

Chef supports fewer platforms than Puppet, largely because it depends on recent versions of both Ruby and CouchDB. As with Puppet, though, the list of supported platforms is growing all the time. Puppet and Chef can both deploy all domains of your infrastructure, provided it’s on the supported list.

8. Doesn’t reinvent the wheel

Chef was strongly inspired by Puppet. It largely duplicates functionality which already existed in Puppet - but it doesn’t yet have all the capabilities of Puppet. If you’re already using Puppet, Chef doesn’t really offer anything new which would make it worth switching.

Of course, Puppet itself reinvented a lot of functionality which was present in earlier generations of config management software, such as cfengine. What goes around comes around.

9. Explicit dependency management

Some resources depend on other resources - things need to be done in a certain order for them to work. Chef is like a shell script: things are done in the order they’re written, and that’s all. But since there’s no way to explicitly say that one resource depends on another, the ordering of your resources in the code may be critical or it may not - there’s no way for a reader to tell by looking at the recipe. Consequently, refactoring and moving code around can be dangerous - just changing the order of resources in a text file may stop things from working.

In Puppet, dependencies are always explicit, and you can reorder your resources freely in the code without affecting the order of application. A resource in Puppet can ‘listen’ for changes to things it depends on: if the Apache config changes, that can automatically trigger an Apache restart. Conversely, resources can ‘notify’ other resources that may be interested in them. (Chef can do this too, but you’re not required to make these relationships explicit - and in my mind that’s a bad thing, though some people disagree. Andrew Clay Shafer has written thoughtfully on this distinction: Puppet, Chef, Dependencies and Worldviews).

Chef fans counter that its behaviour is deterministic: the same changes will be applied in the same order, every time. Steve Traugott and Lance Brown argue for the importance of this property in a paper called Why Order Matters: Turing Equivalence in Automated Systems Administration.

10. Bigger mindshare

Though not a technical consideration, this is probably the most important. When you say ‘configuration management’ to most people (at least people who know what you’re talking about), the usual answer is ‘Puppet’. Puppet owns this space. I know there is a large and helpful community I can call on for help, and even books published on Puppet. Puppet is so widely adopted that virtually every problem you could encounter has already been found and solved by someone.

Conclusion

Currently ‘Chef vs. Puppet’ is a rather unfair comparison. Many of the perceived disadvantages of Chef that I’ve mentioned above are largely due to the fact that Chef is very new. Technically, Puppet and Chef have similar capabilities, but Puppet has first mover advantage and has colonised most corners of the configuration management world. One day Chef may catch up, but my recommendation today is to go with Puppet.

Selected Comments

Julian Simpson:

Culture is an important reason as to why people gravitate to one tool or another. Chef will draw in Ruby developers because it’s not declarative, and because it’s easy.

My experience is that most developers don’t do declarative systems. Everyday languages are imperative, and when you’re a developer looking to get something deployed quickly, you’re most likely to pick the tool that suits your world view.

Systems Administrators tend to use more declarative tools (make, etc.)

Developers and Systems Administrators also have a divergent set of incentives. Developers are generally rewarded for delivering systems quickly, and SAs are rewarded for stability. IMHO, Chef is a tool to roll out something quickly, and Puppet is the one to manage the full lifecycle. That’s why I think Chef makes a good fit for cloud deployment, because VM instances have a short lifespan.

I think it’s still anybody’s game. The opportunity for Chef is that the developer community could build out an ecosystem very quickly.

vvuksan:

It seems to me that both systems have quite a bit of support out there and it really comes down to what you as the sysadmin/developer prefer.

I would also agree with ripienaar’s tweet about disagreeing with point 6. Configuration management systems are not really intended for deploying software but for making sure that systems conform to a certain policy ie. webserver policy etc.

Nick Anderson:

I’m a SA and have worked closely with developers for years. It never ceased to amaze me how differently we think. It does boil down to priorities, culture, and incentives as Julian mentioned. I have not used Chef but I saw quite the stir the last time I mentioned puppet Puppet Works Hard To Make Sure Nodes Are In Compliance.

I have used puppet both as a deployment tool and a configuration management tool. It really can do both just fine as a deployment is essentially a configuration change. But I have found it easier to use a tool like fabric when I need to perform “actions” on a group of machines, especially when those actions are many and very possibly one time. I have found it a bit daunting if you put too much into your configuration management tool as over time it becomes a lot to sift through, and when it’s time to remove a configuration you have to leave that part of the configuration there (the part that removes whatever it was).

Maybe I haven’t looked around enough but I really want to see a puppet reporting tool. I know bcfg2 has a decent one. I want to be able to know the current status of my nodes, who is in compliance, who isn’t, when I last spoke with what node, and the last time node X changed and what changed.

John Arundel:

It is hard to be objective - probably impossible. I’m sure I haven’t been.

My background is that I’ve used Puppet for commercial sysadmin work for several years (basically since it came out), and it currently manages many infrastructures for many of my clients (I’m a freelancer). The biggest deployment I’ve worked on is probably 25-30 servers, and a comparable number of desktops. Maybe 6,000 lines of manifest code (not counting templates).

When Chef was first announced, I set aside time to build a Chef server and try it out, with a view to adopting it if it was superior to Puppet. I found it quite hard going (admittedly that was early days for Chef), and I didn’t find sufficient advantages for Chef to migrate any of my clients to it. If a client asked for Chef specifically, I’d be quite happy to use it, but so far no-one has.

So based on what I know, I use Puppet and that’s what I recommend to others. I’m very interested in hearing from anyone who knows different.

Anonymous

Readers, do your homework too and stop reading articles with the title ‘versus’, the hallmark of propaganda. If you must read on, some specific points, with disclosure that I’m a Chef early adopter with previous Puppet exposure.

#1, #2, #5, #7, #10: puppet is more mature than Chef

All software starts with a small install base, fewer adherents, etc. That doesn’t make it more suitable for your specific environment or taste in software development (configuration management is development too). The answer here is to try both systems yourself and compare them - something the author of this article seems to not have done yet. It’s not just about the code, it’s about the software used to deploy it, the way it authenticates, etc. These things should also influence your decision.

#9: Dependency management

“Chef has no support for specifying dependencies (ordering resources). Chef is like a shell script: things are done in the order they’re written, and that’s all.”

Chef’s default behavior is to process resources in the order you write them. It has other dependency features just like Puppet does - see below.

“A resource in Puppet can ‘listen’ for changes to things it depends on: if the Apache config changes, that can automatically trigger an Apache restart. Conversely, resources can ‘notify’ other resources that may be interested in them.”

This has been possible in Chef for a long time. See this real world example: http://gist.github.com/276246

http://wiki.opscode.com/display/chef/Resources - See the ‘notifies’ attribute in the Meta section.

#3 Dedicated configuration language

“Ruby is a good general-purpose programming language, but it is not designed for configuration management - and learning Ruby is a lot harder than learning Puppet’s language.”

Sysadmins who can code can learn Ruby quickly, and there are plenty of resources on how to write Ruby. While most of the time you can stick to the Chef style of Ruby, you have access to the power of a mature programming language for free. If you think this language is easier, show why that would be the case for someone who already knows at least one programming language.

I see nothing inherent in Puppet’s language that makes it better suited to configuration management. If you think there is, show some examples.

#6: Language/framework neutral

Straight up bullshit here. There is nothing in Chef specific to Ruby on Rails. All chef deployments I know of (including our own) are used for deploying entire stacks of software totally unrelated to Ruby or Rails, just like Puppet.

Conclusion: In the next installment, show more code examples and tell us why Chef didn’t work for you where Puppet did. Try both software packages the day before you write the article, not 6 months before. Assume your readers write code and already know that adopting less mature software is more risky.

R.I.Pienaar:

I’d agree with almost everything above; this strikes me as mostly self-promoting b/s written with the express intent of driving traffic to a blog. Especially given the spammy nature of its promotion.

As an aside, and I wouldn’t want to distract from the fantasy here with actual facts, but Puppet is getting a native Ruby-based DSL some time soon, which will please both sides of that particular fence.
 

[Aug 03, 2010]   Puppet Labs, Cfengine, and Chef by Opscode rPath

Configuration files contain complex information associated with a system's host environment, including settings for network, storage and other run-time resources. Application, OS and middleware configuration files typically need to be heavily modified to "contextualize" a system for its local host environment.

Today, rPath supports open source configuration tools such as Puppet, Cfengine and Opscode's Chef in two ways:

According to Sorofman: "rPath offers the most advanced capabilities available for provisioning and maintaining software systems across physical, virtual or cloud environments. Increasingly, advanced IT shops—including several rPath customers—are using tools like Puppet, Opscode's Chef and Cfengine to manage configuration settings. But they recognize that these tools are poorly suited to managing software systems, which is rPath's strength. It's a logical combination."

[Mar 6, 2010]  Server configuration management track changes with subversion and be notified - VACS Blog

This is an interesting idea but not a real solution as /etc/ is a dynamic directory into which files are often installed as new packages are added.  This is especially typical for Linux.
Tracking changes in a server configuration can be critical to understand problems, identify security breaches and repair a server. When several people are in charge of administering one or several servers, sharing the configuration changes is helpful to inform each other about these modifications. The article describes a simple organization that uses subversion and daily mail notifications in case of change.

The overall idea is to put the server configuration files stored in /etc directory under a version control system: subversion. The VCS is configured to send an email to the system administrators. The email contains the differences with a previous version. A cron script is executed every day to automatically commit the changes, thus triggering the email.

The best practice is of course that each system administrator commits their changes after they validated the new running configuration. If they do so, they are able to specify a comment which is helpful to understand what was done.

First, you should install subversion with its tools.

sudo apt-get install -y subversion subversion-tools

Mail notification

For the mail notification, you may use postfix, exim or sendmail. But to avoid setting up a complete mail system, you may just use a simple mail client. For this, you can use the combination of esmtp and procmail.

sudo apt-get install -y procmail esmtp

Create the subversion repository

The subversion repository will contain all the version and history of your /etc. It must be protected carefully because it contains sensitive information.

sudo mkdir /home/svn
sudo svnadmin create /home/svn/repos
sudo chmod 700 /home/svn
sudo chmod 700 /home/svn/repos

Now, setup the subversion repository to send an email for each commit. For this, copy or rename the post-commit.tmpl file and edit it to specify to whom you want the email to be sent:

sudo cp /home/svn/repos/hooks/post-commit.tmpl  \
          /home/svn/repos/hooks/post-commit

and change the last line to something like (with your email address)

/usr/share/subversion/hook-scripts/commit-email.pl \
 --from yoda+mercure@alliance.com \
 "$REPOS" "$REV" yoda@alliance.com

Initial import

To initialize the repository, we can use the svn import command:

sudo svn import -m 'Initial import of /etc' \
              /etc file:///home/svn/repos/etc

Subversion repository setup in /etc

Now the hard stuff is to turn /etc into a subversion environment without breaking the server. For this, we extract the subversion /etc repository somewhere and copy only the subversion files in /etc.

sudo mkdir /home/svn/last
sudo sh -c "cd /home/svn/last && svn co file:///home/svn/repos/etc"
sudo sh -c "cd /home/svn/last/etc && tar cf - `find . -name .svn` | (cd /etc && tar xvf -)"

At this step, everything is ready. You can go in /etc directory and use all the subversion commands. Example:

sudo svn log /etc/hosts

to see the changes in the hosts file.
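Once /etc is a working copy, the rest of the Subversion toolbox applies as well. For example, to inspect exactly what changed, either locally or between two committed revisions (the revision numbers below are just examples):

sudo svn diff /etc/hosts            # uncommitted local changes
sudo svn diff -r 14:15 /etc/hosts   # what changed between two revisions
sudo svn status /etc                # modified files and files not yet under version control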

Auto-commit and detection of changes

The goal now is to detect every day the changes that were made and send a mail with the changes to the supervisor. For this, you create a cron script that you put in /etc/cron.daily. The script will be executed every day at 6:25am. It will commit the changes that were made and send an email for the new files.

#!/bin/sh
SVN_ETC=/etc
HOST=`hostname`
# Commit those changes
cd $SVN_ETC && svn commit -m "Saving changes in /etc on $HOST"
# Email address to which changes are sent
EMAIL_TO="TO_EMAIL"
STATUS=`cd $SVN_ETC && svn status`
if test "T$STATUS" != "T"; then
  (echo "Subject: New files in /etc on $HOST";
   echo "To: $EMAIL_TO";
   echo "The following files are new and should be checked in:";
   echo "$STATUS") | sendmail -f'FROM_EMAIL' $EMAIL_TO
fi

In this script you will replace TO_EMAIL and FROM_EMAIL by real email addresses.

Complete setup script

To help set up and configure all this easily, I'm now using a script that configures everything. You can download it: mk-etc-repository. The usage of the script is really simple, you just need to specify the email address for the notification:

sudo sh mk-etc-repository 

[Sep 11, 2008] The LXF Guide: 10 tips for lazy sysadmins (Linux Format)

Roll out changes to multiple systems

The one-button install concept should extend to other aspects of your systems, for much the same reasons. Puppet enables you to manage your systems centrally - you change files or settings in the repository on the central Puppet server, and they're rolled out automatically to all your Puppet clients. You will still have to change things twice (once on a test machine to make sure what you're doing, then once in the central Puppet repository), but it'll save a lot of time and reduce mistakes. (Remember that it really is important to test - Puppet also makes it really fast to propagate an error across all your systems.)

... ... ...

Send commands to several PCs

Not everything that you want to do on all machines will work well with Puppet - you might, for example, want to temporarily mount a particular disk on all machines. ClusterSSH is great for this - it enables you to log onto a number of machines at once, and issue the same command on all of them simultaneously. Usefully, you can also click on a particular machine's screen and issue a command just on that machine, in case one machine is misbehaving.

You can set up groups of machines, as well, so that you can log in immediately to all your servers, or all your desktops. Combine this with a root ssh key and ssh-agent, and save yourself both typing and time.
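For reference, here is a hedged example of what such a group setup can look like, using the classic /etc/clusters file that cssh reads, plus a plain ssh loop as the one-off alternative (host names are made up):

# /etc/clusters -- one tag per line, followed by the member hosts (example hosts)
#   webservers web1 web2 web3
#   desktops   pc01 pc02 pc03
# Open simultaneous sessions to the whole group:
cssh webservers

# One-off alternative with a root ssh key loaded into ssh-agent:
for h in web1 web2 web3; do ssh root@$h 'mount /dev/sdb1 /mnt/scratch'; done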

[Aug 25, 2008] pssh 1.4.0  by Brent N. Chun -

About: pssh provides parallel versions of the OpenSSH tools that are useful for controlling large numbers of machines simultaneously. It includes parallel versions of ssh, scp, and rsync, as well as a parallel kill command.

Changes: A 64-bit bug was fixed: select now uses None when there is no timeout rather than sys.maxint. EINTR is caught on select, read, and write calls. Longopts were fixed for pnuke, prsync, pscp, pslurp, and pssh. Missing environment variables options support was added.
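A hedged usage sketch (the hosts file, user and commands are placeholders; check the man pages for the exact option spellings of your version):

# hosts.txt lists one host per line, e.g. web1, web2, db1
pssh -h hosts.txt -l root -o /tmp/out uptime             # run a command everywhere, per-host output in /tmp/out
pscp -h hosts.txt -l root /etc/ntp.conf /etc/ntp.conf    # push one file to every host in the list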

[May 6, 2008] Project details for Silk Tree by Aleksandr O. Levchuk

Ruby script

Silk Tree propagates /etc/passwd and /etc/group files from a master to a list of hosts via SSH. Neither the sending nor the receiving end connects to the other as root. Instead there is a read-only sudo sub-component on the receiver's side that makes the final modifications in /etc. Many checks are made to ensure reliable authorization updates. ACLs are used to enforce a simple security policy. Differences between old and new versions are shown. Two small scripts are included for exporting LDAP users and groups.

Project details for schily

About: The "Schily" Tool Box is a set of tools written or managed by Jörg Schilling. It includes programs like: cdrecord, cdda2wav, readcd, mkisofs, smake, bsh, btcflash, calc, calltree, change, compare, count, devdump, hdump, isodebug, isodump, isoinfo, isovfy, label, mt, p, sccs, scgcheck, scpio, sdd, sfind, sformat, smake, sh, star, star_sym, suntar, gnutar, tartest, termcap, and ved.

Changes: The source for "copy" (an accurate sparse file enabled copy program) has been added. The source for the "mountcd" program from SchilliX has been added. The source for "udiff", a diff program with human readable output, has been added. Star has been bumped to 1.5-final. bsh and sh now skip BASH time stamps from the .history file. smake adds MAKE_SHELL_FLAG/MAKE_SHELL_IFLAG macros.

[Apr 22, 2008] Project details for Multi Remote Tools

Apr 18, 2008  | freshmeat.net

MrTools is a suite of tools for managing large, distributed environments. It can be used to execute scripts on multiple remote hosts without prior installation, copy of a file or directory to multiple hosts as efficiently as possible in a relatively secure way, and collect a copy of a file or directory from multiple hosts.

Release focus:

Initial freshmeat announcement

Changes:

Hash tree cleanup in thread tracking code was improved in all tools in the suite. MrTools has now adopted version 3 of the GPL. A shell quoting issue in mrexec.pl was fixed. This fixed several known limitations, including the ability to use mrexec.pl with Perl scripts and awk if statements. This fix alone has redefined mrexec.pl's capabilities, making an already powerful tool even more powerful.

[Feb 8, 2008] Project details for Scmbug

Written in Perl
Feb 8, 2008 | freshmeat.net

Scmbug integrates software configuration management (SCM) with bug-tracking. It aims to solve the integration problem once and for all. It will glue any source code version control system (such as CVS/CVSNT, Subversion, and Git) with any bug tracking system (such as Bugzilla, Mantis, Request Tracker, Test Director).

[Feb 7, 2008] System Configuration Collector 1.8.7 (Stable) by siem

Feb 7, 2008  | freshmeat.net

About: System Configuration Collector (SCC) is yet another configuration collector. It consists of a client and a server part. The client collects configuration data in a structured snapshot, compares the new snapshot with the previous one, and adds differences to a logbook.

Then the snapshot and the logbook are converted to HTML for local inspection. Optionally, the data can be sent to a system running the server software. On the server, summaries of the data are generated, and search/compare operations on the snapshots and logbooks are available via a Web interface.

Changes: Some changes to support ServerOrientedLinux have been implemented. The determination of an active name has been corrected. This release avoids messages when the LVM directory is absent on a cluster node. Config files in /etc/rc.d have been added.

[Jan 24, 2008]  Project details for cgipaf

The package also contains a Solaris binary of a chpasswd clone, which is extremely useful for mass password changes in corporate environments that include Solaris and other Unixes lacking the chpasswd utility (HP-UX is another example in this category). Version 1.3.2 includes a Solaris binary of chpasswd that works on Solaris 9 and 10.
Jan 23, 2009 | freshmeat.net

cgipaf is a combination of three CGI programs.

All programs use PAM for user authentication. It is possible to run a script to update SAMBA passwords or NIS configuration when a password is changed. mailcfg.cgi creates a .procmailrc in the user's home directory. A user with too many invalid logins can be locked. The minimum and maximum UID can be set in the configuration file, so you can specify a range of UIDs that are allowed to use cgipaf.
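
For the mass password change scenario mentioned above, a hedged sketch (it assumes the Solaris chpasswd clone accepts the same user:password lines on stdin as the Linux original; the host list file is made up):

# newpasswords.txt holds one username:password pair per line
chpasswd < newpasswords.txt
# push the same change to a list of Solaris hosts over ssh
for h in $(cat solaris-hosts.txt); do
    ssh root@$h chpasswd < newpasswords.txt
done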

[Jan 10, 2008] ProShield - Debian Linux security program

Written in shell. Looks very similar to Titan: a simple configuration management tool with a security/hardening bent.
ProShield is a system administration program for Ubuntu/Debian Linux. It helps ensure your system is secure and up-to-date by checking many different aspects of your system. Regular use is recommended.

Whether you are a Linux novice or a system administrator with a dozen servers, ProShield is designed to be usable by all. ProShield's main goal is to help secure a newly installed box (computer), as well as maintain the security of an existing box on a maintenance basis. It's part security, part security administration.

The main features of ProShield are:

When the program is done analyzing your system, it displays an "advisory report", and then (if necessary), guides you through a series of interactive questions to help you solve any problems it found.

[Dec 30, 2007] Project details for ns4

ns4 is a configuration management tool which allows the automated backup of node configurations. Commands are defined within a configuration file, and when they are executed, the output is sent to a series of FTP servers for archiving. As well as archiving configurations, it allows scripts to be run on nodes; this allows configurations to be applied en masse and allows conditional logic so different bits of scripts are run on different nodes.

[Oct 1, 2007] Several useful articles 

[May 27, 2006] Tracking, auditing and managing your server configuration with Subversion in 10 minutes

The R Zone

I’m assuming that you have Subversion installed; in other words, you should have the svn and svnadmin commands and they should work properly. I’m also assuming that you’ll be performing the following tasks as root.

The ideal situation to begin applying this tutorial is right after your server has been freshly installed. However, for practical purposes, any server that’s configured and running will do.

Okay. That’s enough of the lists and introductions. Time for some action.

Creating the Subversion repository

If you’re familiar with UNIX, you’ll know /var is the customary directory for files that pertain to the whole system and change over time. So, following tradition, we’ll create the repository in /var/preserve/config. Type the following command at your console:

[rudd-o@amauta2 ~]# svnadmin create \
    /var/preserve/config

(the backslashes are only there to split long commands across lines)

That should create a /var/preserve/config directory, with a couple of files in it. Those files are not meant to be edited, and they’ll be opaque to us for the rest of the tutorial. As usual, I’d advise you to secure that directory so only root can read and write files to it.

Now, you’ll create two directories directly into the repository. You’ll use these directories to travel back and forth between known configuration states.

To perform this task, just type:

[rudd-o@amauta2 ~]# svn mkdir \
    file:///var/preserve/config/trunk/ \
    file:///var/preserve/config/tags/ \
    -m 'Creating trunk and tags directories'

The -m argument specifies a message to attach to the operation. You can consult these messages afterwards through the svn log command.

Preparing the configuration directory

In true UNIX tradition, /etc is the place to go for system-wide configuration. For the rest of the tutorial, I’ll assume those are the files you want to keep in check.

To track files in /etc, you need to turn /etc into a working copy of the repository’s trunk.

That’s easily accomplished via the following command:

[rudd-o@amauta2 ~]# svn checkout \
    file:///var/preserve/config/trunk/ /etc

Once you’ve done that, /etc will be a working copy. Time to add existing files into Subversion.

Checking existing configuration files into the repository

[rudd-o@amauta2 ~]# cd /etc
[rudd-o@amauta2 /etc]# svn status

You should see a long listing of files, like this:

?      4Suite
?      acpi
?      adjtime

The question marks at the beginning of each line mean that Subversion has no idea what those files are doing there. So, you’ll add them to the repository:

[rudd-o@amauta2 /etc]# svn add *

You’ll see svn working intensely to add those files. Note that the files are not being added to the repository yet — they’re only being queued for addition. To commit these files into the repository:

[rudd-o@amauta2 /etc]# svn commit \
    -m 'Initial addition of files'

And svn should start doing its magic. Once it’s done, it’ll tell you the revision number.

Followup maintenance

Okay, let’s review a few things you need to keep in mind from now on.

When configuration files are added to /etc

Check for added files with svn status /etc. You should see them listed with a question mark.

You should use svn add to add them to the working copy, and then svn commit the added files into the repository. Many people make the mistake of editing freshly installed configuration files right away. Do not do that. Instead, commit new files first, then edit them. That way, you’ll be able to trace modifications right back to the pristine configuration files.
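
A minimal sketch of that workflow, using a hypothetical foo.conf dropped into /etc by a newly installed package:

[rudd-o@amauta2 /etc]# svn status | grep '^?'
[rudd-o@amauta2 /etc]# svn add foo.conf
[rudd-o@amauta2 /etc]# svn commit -m 'Pristine foo.conf as shipped by the package' foo.conf
# only now edit foo.conf and commit again, so every later diff is against the pristine file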

When configuration files are deleted from /etc

Check for deleted files with svn status /etc. You should see them listed with an exclamation sign.

After doing the check, svn delete them. Don’t forget to commit at the end.
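
Again as a sketch with a hypothetical file name:

[rudd-o@amauta2 /etc]# svn status | grep '^!'
[rudd-o@amauta2 /etc]# svn delete foo.conf
[rudd-o@amauta2 /etc]# svn commit -m 'foo.conf removed along with its package' foo.conf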

[May 24, 2007] Cultured Perl Managing Linux configuration files by Teodor Zlatanov

The idea of storing files without their full path is questionable: "In my configuration scheme, each configuration file is in a single directory or in one of its subdirectories. The configuration files are named uniquely, and the directories denote machines or platforms rather than location."
A more interesting variant of the same scheme was later implemented with Subversion: Tracking, auditing and managing your server configuration with Subversion in 10 minutes » The R Zone
Jun 10, 2004  | DeveloperWorks

The average developer spends more time navigating, learning, and debugging configuration files than you'd expect. But you can save that time -- and loads of energy and frustration -- with one of the tools you probably use every day: your CVS tree. Take these tips on backing up, distributing, and making portable your peskiest Linux™ (and UNIX®) config files.

Working with configuration files can be a bewildering part of using Linux and computers in general. No standards exist, though several have been proposed. For example, Samba and rsync use INI-style configurations; passwd is in a decades-old colon-separated format that doesn't allow colons in any field; sudo comes with a visudo program to keep people from entering wrong information in the sudoers file; Emacs uses Lisp for configuration files. And the list goes on...

Now, I'm not complaining about the variety of configuration files. I understand the historical and practical reasons for this Configuration Tower of Babel. Changing the Samba configuration format, for instance, would annoy thousands upon thousands of administrators. In another example, Emacs' internal language is Lisp, a powerful high-level language, so using anything else for Emacs configuration files would be ridiculous.

No, my point is the effect all this variety has on the Linux user: a large portion of a Linux user's computer time is spent learning, writing, and debugging configuration files. Thus, it is useful to have a system in which these configuration files (1) are backed up automatically, (2) are distributed automatically, and (3) work on multiple flavors of UNIX and distributions of Linux. This article explains how to achieve the first two goals, and gets you started on the road to achieving the third one.

The Plan

We'll use CVS to hold the configuration files. Feel free to use any other versioning system. Subversion is gaining popularity quickly. The FSF has GNU tla (GNU arch), another nice versioning system. The essential features you need are provided by all those and many others, including the non-free ones like Rational® ClearCase®.

In my configuration scheme, each configuration file is in a single directory or in one of its subdirectories. The configuration files are named uniquely, and the directories denote machines or platforms rather than location. Thus, the file name maps uniquely to a location in the filesystem. For example, passwd will always be used for /etc/passwd, while cshrc will be used for /home/tzz/.cshrc for user tzz.

For a few programs I use daily, I'll show how I handle multiple platforms with the help of my configuration system and changing the configuration files themselves.

All the examples I show use the C shell to set environment variables. Modifying them to use GNU bash or something else should not be terribly difficult.

Setting up CVS

You probably already have CVS installed on your machine. If not, get it (see the Resources section) and install it. If you are using another versioning system, try to set up something similar to what I show below.

First of all, you need to create a CVS repository. I'll assume you have access to a machine that can be used as a CVS server through OpenSSH or Pserver CVS access (Pserver is the communication protocol for CVS; see Resources for more information). Then, you need to create a module called config, which I will use to hold the sample configuration files. Finally, you need to arrange a way to use your CVS repository remotely non-interactively, through OpenSSH, Pserver, or whatever is appropriate. This last point is highly dependent on your particular system administration skills, level of paranoia, and environment, so I can only point you to some information in the Resources. I will assume you have configured non-interactive (ssh-agent) logins through OpenSSH for the rest of this article.

Listing 1. Set up the CVS repository on a machine
 

# assume that /cvsroot is your repository's home
> setenv CVSROOT /cvsroot
# this will use $CVSROOT if no -d option is specified
> cvs init
# check that it worked
> ls /cvsroot
# you should see one directory called CVSROOT
CVSROOT

Now that the repository is set up, you can continue using it remotely (you can do the steps below on the CVS server, too -- just leave CVSROOT as in Listing 1).

Listing 2. Remotely add the config module to CVS
 

# user tzz, machine home.com, directory /cvsroot is the CVSROOT
> setenv CVSROOT tzz@home.com:/cvsroot
# use SSH as the transport
> setenv CVS_RSH ssh
# use a temporary directory for the module creation
> cd /tmp
> mkdir config
> cd config

# tzz is the "vendor name" and initial is the "release tag", they can
# be anything; the -m flag tells CVS not to ask us for a message

# if this fails due to SSH problems, see the Resources
> cvs import -m '' config tzz initial
No conflicts created by this import
# now let's do a test checkout
> cd ~
> rm -rf /tmp/config
> cvs co config
cvs checkout: Updating config
# check everything is correct
> ls config
CVS

Now you have a copy of the config CVS module checked out in your home directory; we'll use that as our starting point. I'll use my user name tzz and home directory /home/tzz in this article, but, of course, you should use your own user name and directory as appropriate.

Let's create a single file. The CVS options file, cvsrc, seems appropriate since we'll be using CVS a lot more.

Listing 3. Create and add the cvsrc file
 

> cd ~/config
> echo "cvs -z3" > cvsrc
> echo "update -P -d" >> cvsrc
> cvs add cvsrc
# you really don't need log messages here
> cvs commit -m ''
> ln -s ~/config/cvsrc ~/.cvsrc

From this point on, all your CVS options will live in ~/config/cvsrc, and you will update that file instead of ~/.cvsrc. The specific options you added tell CVS to retrieve directories when they don't exist, and to prune empty directories. This is usually what users want. For the remaining machines you want to set up this way, you need to check out the config module again and make the link again.

Listing 4. Check out the config module and make the cvsrc link
 

> cd ~
# set the following two for remote access
> setenv CVSROOT ...
> setenv CVS_RSH ...
# now check out "config" -- this will get all the files
> cvs checkout config
> cd ~/config
> ln -s ~/config/cvsrc ~/.cvsrc

You may also know that Linux allows for hard links in addition to the symbolic ones you just created. Because of the limitations of hard links, they are not suitable to this scheme. For instance, say you create a hard link, ~/.cvsrc, to ~/config/cvsrc and later you remove ~/config/cvsrc (there are many ways this could happen). The ~/.cvsrc file would still hold the old contents of what used to be ~/config/cvsrc. Now, you check out ~/config/cvsrc again. The ~/.cvsrc file, however, will not be updated. That's why symbolic links are better in this situation.
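
You can see the problem in a two-minute experiment (a sketch that simulates CVS removing and re-creating the file):

> ln ~/config/cvsrc ~/.cvsrc
# simulate a checkout that removes and re-creates the file
> rm ~/config/cvsrc
> echo "cvs -z3" > ~/config/cvsrc
# the hard link still points at the old inode, so the two files now differ
> cmp ~/.cvsrc ~/config/cvsrc || echo out of sync
# a symbolic link always resolves to whatever currently lives at that path
> ln -sf ~/config/cvsrc ~/.cvsrc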

Let's say you change cvsrc to add one more option:

Listing 5. Modify and commit cvsrc
 

> cd ~/config
> echo "checkout -P" > cvsrc
> cvs commit -m ''

Now, to update ~/.cvsrc on every other machine you use, just do the following:

Listing 6. Update cvsrc on your other machines
 


> cd ~/config
> cvs update

This is nice and easy. What's even nicer is that the CVS update shown above will update every file in ~/config, so all the files you keep under this CVS scheme will be up-to-date at once with one command. This is the essence of the configuration scheme shown here; the rest is just window dressing.

Note that once you've checked out a module, there's a directory in it called "CVS." The CVS directory has enough information about the CVS module that you can do update, commit, and other CVS operations without specifying the CVSROOT variable.

Automatic updates and commits

For automatic updates and commits, I have written a very simple Perl program, maintain.pl. The longest part of the program is the help text, so you can imagine it's not full of complex code. I will go through it regardless, but keep in mind that a shell script could do the same job if needed.

The only thing maintain.pl does not do is make the symbolic links. Since that has to be done just once, and on some systems you do not want the links wholesale, the complexity of the task compared to the simplicity of doing it manually was simply too much. I know because I wrote the symbolic link code and got rid of it later.

I had to write and maintain yet another configuration file that mapped out many filenames. There were many exceptions; for example, two Linux and Solaris systems I use have radically different setups. There were just too many things to worry about, and I found that manually installing the links was much easier. Of course, your experience may vary -- I encourage you to try to find the most appropriate approach for your own environment.
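
Since the author notes that a shell script could do the same job, here is a minimal stand-in for maintain.pl under that assumption (not the real script: it only updates the checkout and commits local edits, and leaves out the help text and per-machine exceptions):

#!/bin/sh
# update-and-commit the ~/config checkout; meant to be run from cron on every machine
CONFIG=$HOME/config
cd "$CONFIG" || exit 1
# pick up changes committed from other machines
cvs -q update -P -d
# push any local edits back to the repository
cvs -q commit -m "automatic commit from `hostname`"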

... ... ...

Conclusion


I hope you found this article interesting and useful. Take what you can from it -- I've spent years perfecting my setup, and it should serve you in good stead.

Convert to this scheme a little at a time, don't get overwhelmed. You can easily spend days rewriting your configurations -- so do it gradually and you'll enjoy the process.

The greatest benefit you'll see is the automatic update function. On any of your machines, you can commit a file and it will show up everywhere else the next time maintain.pl is run! Even if you disagree with the directory structure, think about the power of the automatic updates and how they can be useful to you.

The second benefit you get is configuration archiving. Every version of your configurations will be in the revision control system! If you make a mistake, you can go back to an earlier version. If you lose a whole machine to, say, disk failure -- you can recover all the time-consuming configuration files you wrote for it in minutes.

Don't be tempted to convert everything to this scheme. Convert just the things you want to keep or reuse. Binary files don't work well with CVS -- at the very least, you won't have the diff capability that CVS provides for text files. Also, CVS has trouble with renaming directories, although it's certainly possible if you also rename the directory in the repository.

Finally, keep good backups of your CVSROOT repository, wherever it is. I hope you never need them.

Resources

About the author
Teodor Zlatanov graduated with an M.S. in computer engineering from Boston University in 1999. He has worked as a programmer since 1992, using Perl, Java™, C, and C++. His interests are in open source work, Perl, text parsing, three-tier client-server database architectures, and UNIX system administration. Suggestions and corrections are welcome; contact Ted at tzz@bu.edu

[May 23, 2007] freshmeat.net Project details for MID

The Machine Inventory Database (MID) is a Perl-based CGI interface to manage the machines on and off your network, both from the IP assignment perspective and the asset-tracking perspective. On top of acting as a frontend to a handful of MySQL tables, it handles IP assignment and acts as a frontend to the configuration files for BIND, YP, and DHCPD to reduce the chance for typos in the configuration files which tend to bring down service.

[May 22, 2007]  Linux Distros with CVS-RCS for Config Files

 Slashdot

Just do it (Score:1)
by choi (189590) on Monday July 19, @07:47PM (#9743061)

nothing prevents you from just installing cvs and importing/checking out your config directories. i think it's really not that much work to justify a distro on its own.

Do it yourself (Score:1) Matt Perry (793115) on Monday July 19, @07:51PM (#9743096)

Why not just do it yourself? I keep all of my config files in CVS on my Debian and RedHat boxes. It's pretty easy to set things up to do this.

Gentoo does this. (Score:4, Informative)
by djcapelis (587616) on Monday July 19, @07:54PM (#9743121)
(http://new.se.foml.inodetech.com/) 

Gentoo offers several choices in managing the configuration files in /etc, one of these options is the dispatch-conf script which keeps all changes in RCS. This is mostly for updating... so it's not everything, but it's definitely a strong start and you could likely use the same system to keep track of your own modifications.

Nothing is stopping you from doing this. (Score:5, Informative)
by Feztaa (633745) on Monday July 19, @08:02PM (#9743198)
(http://rbpark.ath.cx/ | Last Journal: Wednesday June 30, @04:56AM) 

Just go into your /etc/, do a 'mkdir RCS', and then start checking your config files in and out of RCS to edit them. There's no code anywhere in linux that says 'if there's a directory I don't recognize, then crash spectacularly', so just adding the RCS directory itself isn't going to adversely affect anything.

That's actually a really good idea, too, I'm not sure why I never thought of it myself...
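
A hedged sketch of that RCS round-trip on a single file (ntp.conf is just an example name):

cd /etc
mkdir RCS
# first check-in creates /etc/RCS/ntp.conf,v; -u leaves the working file in place
ci -u ntp.conf
# lock, edit, and check in a new revision
co -l ntp.conf
vi ntp.conf
ci -u -m'tightened restrict lines' ntp.conf
# show what changed between two revisions
rcsdiff -u -r1.1 -r1.2 ntp.conf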

works for my user accounts (Score:5, Interesting)
by x00101010x (631764) on Monday July 19, @08:04PM (#9743224)
(http://slashdot.org/~x00101010x/ | Last Journal: Monday February 16, @04:44AM) 

I keep my entire home directory in a Subversion repository. Works great for linux and my windows boxes. Firefox and thunderbird user directories are compatible across platforms.

I just add 'svn up' to my login script and 'svn ci --message "%HOST%@%TIME%%DATE%"' to my logout script.
No reason it shouldn't work for a whole system with an initial 'svn up' somewhere in rc.local and periodic updates in a cron job. Just do a commit whenever you change things on your template system and 5 minutes later it'll be on all your boxen.

There was a slashdot article about putting a home directory under version control a few months ago from which I got the idea, too lazy to find the link at the moment though. 

BitKeeper (Score:2) 
by twoflower (24166) on Monday July 19, @08:36PM (#9743457)

Larry McVoy designed BitKeeper with the specific aim of doing this. I believe they also offer special single-user free licenses for this; you may want to check the BitKeeper documentation to see if there are any Linux distributions who actually took him up on this.

Most distros have CVS installed, right? (Score:2)
by Neil Blender (555885) on Monday July 19, @08:41PM (#9743490)

so:

[user@localhost]# su
password:
[root@localhost]# cd /
[root@localhost]# cvs import . -m 'my linux distro' mydistro username start

Yes, Gentoo... (Score:2, Informative)
by andrewdk (760436) on Monday July 19, @08:49PM (#9743568)
(http://turbogfx.homelinux.org/)

Yes, Gentoo can do this. Just emerge rcs, make an /etc/config-archive dir, set up /etc/dispatch-conf.conf, and just do dispatch-conf in place of etc-update.

An old idea for modern times... (Score:3, Insightful)
by Deagol (323173) on Monday July 19, @11:24PM (#9744742)
(http://slashdot.org/)

I think it was OpenVMS (fuzzy memories of a freshman computer class) that had version control built into the filesystem. I'm amazed that this hasn't been introduced into the more popular filesystem(s) yet. I've wished for it on many occasions.

Or am I just being impatient? Will Reiser4 provide this capability?

FreeBSD (Score:2, Interesting)
by Scythe0r (197724) on Tuesday July 20, @12:14AM (#9745206)

You should really check out a utility for FreeBSD called mergemaster. You run it after rebuilding/upgrading your system and it compares the latest "vanilla" system configuration files to what you've got.

You can choose to overwrite your file, keep your file or merge the two together. I like to think of it as the ultimate choice in system housekeeping.

System Restore (Score:2)
by yotaku (26455) on Tuesday July 20, @01:51AM (#9745748)
(http://yotaku.homeip.net/)

As many people have pointed out having versioning on the config of a system is hardly a new idea. If you think about what might happen if you try to make this idea simple and easy to use it might end up being something like System Restore for Windows, which stores versions before updates, and if you're smart you make a check point before installing any questionable software or drivers. And then allows you to roll back if something goes wrong and the uninstall doesn't fix it.

changetrack (Score:1) 
by Christopher Cashell (2517) on Tuesday July 20, @04:19AM (#9746397)
(http://www.zyp.org/ | Last Journal: Saturday August 18, @01:39AM)

sudo apt-get install changetrack
For non-Debian users, download changetrack [sourceforge.net] from SourceForge.

changetrack uses RCS as its backend, not CVS (support for CVS is on the Todo list), but the end result is the same. It is specifically intended for tracking system files like those in /etc.

dispatch-conf (Score:1)
by trickycamel (696375) on Tuesday July 20, @09:21AM (#9747791)

Gentoo does this for your files in /etc. Use dispatch-conf and forget about etc-update. You can set it to use RCS, so no more overwrites of your configs.

RCS and vim (Score:1) 
by wolf31o2 (778801) on Tuesday July 20, @10:43AM (#9748872)
(http://www.gentoo.org/)

At work, we have a simple wrapper for vim that does all of the RCS stuff for us, like checking in and checking out files. We use it on all of our production servers, as it gives us nice revision control over our files.

#!/bin/bash

ORIGVI=rcsvi

case $1 in
-r[0-9]*) VERSION=$1; shift ;;
esac

[ $# -eq 1 ] || { echo usage: vi [-rrev] filename >&2; exit 1; }
DIR=`dirname $1`
FILE=`basename $1`

### let vi handle error conditions
cd $DIR || exec $ORIGVI $1
[ -d $FILE ] && exec $ORIGVI $FILE

### skip certain directories
{ [ -r $HOME/.rcsvirc ] && . $HOME/.rcsvirc; } ||
{ [ -r /etc/rcsvirc ] && . /etc/rcsvirc; } ||
EXCLUDE="/tmp | /tmp/* | /etc/skel | /etc/skel/* | /home | /home/* | /usr/home | /usr/home/*"
[ -n "$EXCLUDE" ] && eval "case $PWD in $EXCLUDE) exec $ORIGVI $FILE ;; esac"

### create RCS directory if not exist
[ -d RCS ] || { mkdir RCS || exit $?; }

### check $FILE for existence, break possible lock or exit, check in
[ -e $FILE ] && { [ -e RCS/$FILE,v ] && { rcs -l $FILE || exit 1; }
ci -q -l $FILE </dev/null; }
[ -n "$VERSION" ] && { co $VERSION $FILE; chmod u+w $FILE; }

### edit $FILE
$ORIGVI $FILE

### check in $FILE
ci -u $FILE

# EOF

cfengine (Score:2, Informative) 
by bandix (184495)    on Tuesday July 20, @06:36PM (#9754240)
(http://www.geekpunk.net/)

You'll spend years fooling around with RCS and CVS for configuration versioning before realizing that what you really need is cfengine. CVS or svn for source code, cfengine for configuration. Cut to the chase:

http://www.cfengine.org/

[May 21, 2007] freshmeat.net Project details for cvs2cl.pl

Perl script to generate GNU-style ChangeLogs for CVS

cvs2cl.pl generates GNU-style ChangeLogs for a CVS working copy. There are many options to control the output.

[Dec 3, 2006] UsingCfrubyTutorial - SciRuby

Cfruby, by David Powers and PjotrPrins, allows managed system administration using Ruby. It is both a library of Ruby functions for system administration and a Cfengine-like clone. Cfruby is currently deployed on servers, clusters and workstations. See below for an introduction to both.

Cfruby can be downloaded from http://rubyforge.org/projects/cfruby/ as a gem. You can also access the SVN repository through the Rubyforge web interface.

It is important to understand that Cfruby is really two in one:

  1. Cfrubylib is a pure Ruby library with classes and methods for system administration. This includes file copying, finding, checksumming, package management, user management etc. etc.
  2. Cfenjin is a simple scripting language for system administration - allowing for scripting of configuration tasks (without knowledge of Ruby). Naturally Cfenjin uses Cfrubylib itself.

So, if you are looking for a Ruby API check out Cfrubylib. But if you are looking for a scripting language check out Cfenjin.

To confuse matters more: you can use Ruby mixed with Cfenjin style scripting - but that is for those who have a weird streak - also known as geekishness.

Cfrubylib

Cfrubylib is a Ruby library for system administration. It can do most of the common tasks like file tidying, editing etc. etc. Best to study the API and code in:

http://cfruby.rubyforge.org/cfrubylib/

and the source repository:

http://rubyforge.org/viewvc/lib/libcfruby/?root=cfruby

More written documentation can be found in the source repository:

http://rubyforge.org/viewvc/documentation/libcfruby/?root=cfruby

Why reinvent the wheel? And you'll find it gives a lot more power than most configuration tools. Cfrubylib includes cfyaml - a YAML configurator. And support for FreeBSD Portage, Linux Debian, Linux Gentoo and OS-X Fink packages. Adding support for your favourite package manager should be straightforward.

Cfenjin

Cfenjin is a GNU Cfengine clone written in Ruby. It does not offer a full replacement for Cfengine (for one, we don't have a client/server protocol, though cfrubylib has some support for that itself) - but it is Ruby and consists of a few lines of code using Cfrubylib.

Documentation has been written, bits and pieces, but for now it is probably the best idea to study the examples in:

http://rubyforge.org/viewvc/documentation/cfenjin/examples/?root=cfruby

after reading the tutorial below.

Enjoy!

[Oct 7, 2005] mValent ¦ Powerful Change Control

mValent Integrity tracks changes to deployed servers and monitors configuration drift, alerting IT teams to potentially critical problems. By comparing application environments in mValent Integrity for differences in granular configuration items, IT teams can rapidly isolate the root causes of production incidents. These teams can then model fixes to problems to validate their impact and automatically deploy them.

[Jul 13, 2005] System Configuration Collector

System Configuration Collector (SCC) is yet another configuration collector. It consists of a client and a server part. The client collects configuration data in a structured snapshot, compares the new snapshot with the previous one, and adds differences to a logbook. Then the snapshot and the logbook are converted to HTML for local inspection. Optionally, the data can be sent to a system running the server software. On the server, summaries of the data are generated, and search/compare operations on the snapshots and logbooks are available via a Web interface.

Changes: This release will not update the keep file when running in interactive mode. It ignores differences in the main log file when moving data to "split" hosts. Split conditions have been extended with a simple process check. A correction for Debian for large lines with many fields. Include files have been added for logrotate.conf. Includes for Apache have been corrected. Netscape Fasttrack server has been added.

alphaWorks Remote System Management Tool Overview

Remote Server Management Tool is an Eclipse plug-in that provides an integrated graphical user interface (GUI) environment and enables testers to manage multiple remote servers simultaneously.

What is Remote System Management Tool?

Remote Server Management Tool is an Eclipse plug-in that provides an integrated graphical user interface (GUI) environment and enables testers to manage multiple remote servers simultaneously. The tool is designed as a management tool for those who would otherwise telnet to more than one server to manage the servers and who must look at different docs and man pages to find commands for different platforms in order to create or manage users and groups and to initiate and monitor processes. This tool handles these operations on remote servers by using a user-friendly GUI; in addition, it displays configuration of the test server (number of processors, RAM, etc.). The activities that can be managed by this tool on the remote and local server are divided as follows:

How does it work?

This Eclipse plug-in was written with the Standard Widget Toolkit (SWT). The tool has a perspective named Remote System Management; the perspective consists of test servers and a console view. The remote test servers are mounted in the Test Servers view for management of their resources (process, file system, and users or groups).

At the back end, this Eclipse plug-in uses the Software Test Automation Framework (STAF). STAF is an open-source framework that masks the operating system-specific details and provides common services and APIs in order to manage system resources. The APIs are provided for a majority of the languages. Along with the built-in services, STAF also supports external services. The Remote Server Management Tool comes with two STAF external services: one for user management and another for providing system details.

[Apr 18, 2005] Taking the Configuration Management Database to the Next Level The Federated Data Model - Computerworld by Doug Mueller...

APRIL 18, 2005 (COMPUTERWORLD) - With the growing interest in adopting best practices across IT departments, particularly according to standards such as the Information Technology Infrastructure Library (ITIL), many organizations are deciding to implement a configuration management database (CMDB). A CMDB should help them discover and manage the elements in their IT infrastructure so they can better understand the relationships among components and facilitate changes effectively. This is important because there is a significant business value in having a single "source of record" that provides a logical model of the IT infrastructure to identify, manage and verify all configuration items in the environment.

Having reliable data requires more than a database. It requires a well-conceived configuration management strategy; without knowing what's in your environment, you can't hope to control it, maintain it or improve it.

Since configuration items are at the heart of the CMDB, it's important to understand what they encompass. A configuration item is an instance of a physical, logical or conceptual entity that is part of your environment and has configurable attributes specific to that instance. Examples of configuration items would be a computer system (attributes could include a serial number or IP address) or even an employee (with configurable attributes such as hours worked and department number).

Getting Started: Developing the Right Strategy

Once you have determined that you may need a CMDB, how do you select the approach that's best for you? Everything begins with ITIL, the industry framework for IT service management. To get started on developing a configuration management strategy, set your objectives according to ITIL goals, which state that configuration management accounts for all the IT assets and configurations within the organization and its services. According to ITIL, the ideal CMDB should also provide accurate information on configurations and their documentation to support all the other service management processes. In addition, it must provide a sound basis for incident management, problem management, change management and release management. It must be able to verify the configuration records against the infrastructure and correct any exceptions. If you think that creating a CMDB is a major undertaking, you're right. But it can be done effectively if you follow the right approach for your organization.

Lessons Learned: The Evolution of the CMDB

The concept of a CMDB has evolved over the years from a collection of isolated data stores to integrated data stores to a single, central database. Each time, it gets closer to being the source of record for configuration data without taking a toll on the infrastructure. However, those who have tried these approaches find that they have serious drawbacks that make them difficult or impossible to scale. A better alternative is the federated data model. This approach features a centralized database linked to other data stores with a common data model that carries information from one point to another, without the need to rewrite code. I will describe this model in more detail after providing an overview of how it evolved.

The predecessors to CMDBs, popular in the 1990s, consisted of several applications that stored their own data, including configuration data. This approach could meet ITIL's goal of accounting for IT assets and services, but because the data wasn't integrated, the approach fell short of other objectives, such as understanding dependencies and relationships among configuration items. With isolated data stores, your asset management application may not see data from a discovery application, and your service-impact management application may not be able to modify service-level agreements.

IT organizations also tried to create CMDBs by directly integrating their various data sources and applications, connecting each data consumer to each provider from which it needed data. This approach allowed different configuration management processes to share data, greatly improving the CMDB's usefulness as a means to integrate applications and IT processes they support. But it required a lot of resources to create and maintain what tend to be brittle, hard-coded connections between systems.

Recently, vendors have been offering a single, all-encompassing CMDB to hold configuration data that's accessible by all applications that need the data. But an all-encompassing database isn't feasible in a large, distributed organization. It creates an access bottleneck because all requests for and updates to data pass through the same path. It also requires a massive migration to get all of your data into the single database, creating a complicated data model that must change if any application integrated with the CMDB changes.

Putting It All Together With a Federated Data Model

The most effective approach is the federated data model. It's the best way to share configuration data without the high setup and maintenance costs associated with the pure centralized approach. It puts primary and widely shared configuration-item data in a common data store and federates other noncritical attribute data from other application databases. According to a recent Gartner Inc. study ("Defining a Configuration Management Database," by P. Adams and R. Colville, November 2004), "A practical approach for a successful implementation of a configuration management database will require a federated data model with a consistent view that receives at least some data from element-specific tools (for example, desktop configuration management, server configuration management, network management and storage management)."

This federated approach to a CMDB offers a single, common set of information on each configuration item and its relationships with other configuration items in a manner that can be leveraged by all relevant IT processes -- creating cost-saving synergy among different service management functions. A federated data model enables you to fully integrate critical service and infrastructure management applications and break down the traditional functional silos that often exist within an IT organization, all of which streamlines delivery of IT services.

Important Benefits of a Federated Approach

What should this federated model look like?

This model refines ITIL's idea of a CMDB by breaking up the CMDB and its infrastructure into three layers. These are the CMDB itself; related data linked to or from the CMDB, called the CMDB Extended Data; and applications that interact with these two layers, called the CMDB Environment.

The CMDB and CMDB Extended Data layers together contain the information ITIL suggests be stored in a CMDB. Separating this information into two layers is what distinguishes the federated CMDB approach from other, less-successful CMDB approaches. The CMDB holds only configuration items and their relationships. However, not all available configuration-item attributes must be stored in the CMDB. In fact, to keep the CMDB scalable and manageable, you should store only the key attributes here and link to the less-important ones in the CMDB Extended Data.

The CMDB Extended Data layer holds related data, such as help desk tickets, change events, contracts, service-level agreements, a definitive software library and much more. Although these things aren't configuration items, they contain information about your configuration items and form an important part of your IT infrastructure. In addition, the CMDB Extended Data layer holds any configuration-item attributes judged as unnecessary to be stored in the CMDB.

The data in the CMDB Extended Data layer is linked to the configuration item data in the CMDB. By definition, federated configuration-item attributes are linked from their instances in the CMDB, allowing requests to the CMDB to reach these attributes. But for other types of extended data, the link can be in either or both directions. For example, a change-request record could have a link through which you can access the instances of the configuration items it will change, and each configuration-item instance could have a link through which you can access the change requests that affect it.

To pursue ITIL's goals for configuration management, you should consider the advantages of a federated data model and what it can do for you.

Doug Mueller is the chief technology officer at the Service Management business unit of BMC Software Inc. and a co-founder of Remedy Corp., now a part of BMC.


[Jan 21, 2005] InformationWeek Enterprise Systems Management BMC Debuts Configuration-Management Database

The software is designed to help businesses unify service- and infrastructure-management tools to promote database management consistency and simplified integration among processes.

By Darrell Dunn,  InformationWeek
Jan. 21, 2005
URL: http://www.informationweek.com/story/showArticle.jhtml?articleID=57702869

BMC Software on Monday will announce the availability of its Atrium Configuration Management Database (CMDB), intended to help customers unify their service and infrastructure management.

Based on industry-standard IT Infrastructure Library requirements for enterprisewide database management with consistency and simplified integration among different management processes, the CMDB is also the first offering by BMC to be branded under the Atrium name, says Andrej Vlahcevic, senior product marketing manager for change and configuration management at BMC.

Over the course of the year, BMC plans to introduce other management products under the Atrium brand. "A lot of people see a CMDB as a common set of information that captures data on the configuration and relationship of items in your IT environment," Vlahcevic says. "We believe it has to be more." The Atrium database was designed to integrate both service and infrastructure-management applications, he says, as well as complement the company's existing line of discovery tools.

The Atrium CMDB includes a reconciliation engine that lets users combine input from multiple data sources and identify and reconcile any differences to establish a configuration profile. "If you don't have strong reconciliation, the CMDB will end up with repetitive data that ultimately will create confusion," Vlahcevic says.

The Atrium CMDB was designed with industry standards in mind, he says, including those endorsed by the Distributed Management Task Force and the Common Information Model. The platform supports all primary IT Infrastructure Library configuration item classes and more than 80 potential relationship types that can be leveraged to characterize an IT environment.

The Atrium CMDB is integrated with eight existing BMC applications, including the IT Discovery Suite, Service Impact Manager version 5.0, and Remedy IT Service Management Suite version 6.0. It's available now and can be purchased as part of any BMC Remedy IT Service Management version 6.0 products and the Service Impact Manager version 5.0.

PIKT - system monitoring, configuration management software  PIKT's initial release was in 1998. Written in C.

PIKT is cross-categorical, multi-purpose software to monitor and configure computer systems, report and fix problems, manage system security, arrange job scheduling, format documents, install files, assist command-line work, and perform many other common systems administration tasks. PIKT is used primarily for system monitoring, and secondarily for configuration management, but its flexibility and extendibility evoke many other uses limited only by your imagination. One reviewer said of PIKT, "this is by far one of the most interesting/powerful tools I have seen for Linux administration." Another wrote that PIKT "excels at handling a diverse collection of machines, saves time and eliminates repetition, and gives you a global view of your site." PIKT has been compared favorably to commercial software costing hundreds of thousands of dollars, yet PIKT costs you nothing. Who uses PIKT? The answer might surprise you. To learn more, read the Introduction pages. For example uses and configurations, visit the Samples pages.

What is PIKT

PIKT is Open Source software distributed under the GNU GPL.

What is PIKT not?


Why the name "PIKT"?
PIKT is like a military picket, "a group of soldiers or a single soldier stationed, usually at an outpost, to guard a body of troops from surprise attack" (Webster's New World College Dictionary). A picket's primary mission is to warn of the enemy's advance, but to fight if necessary. Similarly, PIKT's primary task is to warn of problems, but to fix those problems when needed.

How do you pronounce "PIKT"?
"PIKT" rhymes with "ticket".

Kickstart, APT and RGANG usage note for farm administration, by Mirko Corosu (INFN Genova), Alex Barchiesi and Marco Serra (INFN Roma)

Contents

1 Introduction

This document is a basic introduction to a few useful tools for a sysadmin who wants to install an OS, perform simultaneous operations on multiple machines via ssh, and upgrade an already installed machine using an automatic (or manual) procedure. For more detailed information please refer to the bibliography added in the following paragraphs.
IMPORTANT: this document is based on our experience with a farm running Scientific Linux CERN 3.0.4 and should not be considered a general guide.

2 Kickstart

It is already described in another document:

how it is possible to set up a kickstart installation server. Here we will add only a few notes about the customization of the kickstart file, providing an example:

that must be adapted to a specific site configuration. This example was written with the idea of installing a Scientific Linux CERN OS from which we removed a few packages (or turned off a few services) not strictly needed for machines not located at CERN. To find all the possible options for a kickstart file please refer to:

2.1 Add/Remove groups or single package

In our kickstart file example it is shown how it is possible to add (or remove) different groups of packages, for example:

@ Text-based Internet

adds the packages mutt, fetchmail and elink.
It is possible to use the graphical tool redhat-config-packages to show the full list of packages in a group like Text-based Internet.
To add or remove a single rpm it is possible to use a single line like:

-phone

to exclude the installation of the phone rpm. Vice versa, to add an rpm it is possible to use:

+<package name>

for example if you want to install wget it is sufficient to add:

+wget
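
Putting the fragments above together, the package selection part of the kickstart file might look like this (the syntax follows the conventions quoted above; the exact %packages grammar can differ between anaconda versions):

%packages
@ Text-based Internet
-phone
+wget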

2.2 Start/Stop services

In our example kickstart file a few services are explicitly started or stopped using chkconfig.
The pcmcia service is turned off:

chkconfig pcmcia off

Vice versa, ntpd is turned on:

chkconfig ntpd on

to have time synchronization.

2.3 Post-install examples

In the kickstart file it is possible to include operations to be performed after the OS installation, in the %post section. A few examples are present in our kickstart file as a reference; we will comment on them in the APT section.
As an example, if you want to configure the INFN AFS cell, add the following lines to the post-install section of the kickstart file:

mv /usr/vice/etc/ThisCell /usr/vice/etc/ThisCell.orig
cat >> /usr/vice/etc/ThisCell <<EOF
infn.it
EOF

3 APT in the Scientific Linux CERN

The CERN Scientific Linux distribution uses the APT tool as package manager. You can find more detailed information about APT here:

In this distribution APT comes by default with a CERN configuration that uses the CERN RPM repository. Details of this configuration, with some explanation of apt commands, are available here:

3.1 Local RPM repository for APT

In our kickstart file example we included a post-install section to re-configure APT in order to use a local RPM repository (see also http://grid-it.cnaf.infn.it/fileadmin/sysadm/akserver/akserver.html).
You can change the APT sources.list.d configuration via post-install:

mv /etc/apt/sources.list.d/dag.list /etc/apt/sources.list.d/dag.list.orig
cat >> /etc/apt/sources.list.d/local.list <<EOF
# Your local repository
rpm http://<YOUR_KICKSTART_SERVER> rep/slc304-i386 os updates extras localrpms
EOF

where <YOUR_KICKSTART_SERVER> is your RPM server configured for APT usage. Our re-configuration will add a local repository (localrpms) that could be used to customize your OS including for example ``private'' RPMs (example: ssh configuration, tools, ....).

3.2 Update/upgrade

Here are a few examples of how to run APT manually from a node you want to upgrade.
To check the available updated packages run:

apt-get update

To perform the necessary dependency resolution, download packages, and install them, run:

apt-get upgrade

Alternatively you can configure APT to automatically update your machines using the apt-autoupdate tool. It is possible to run it by hand:

apt-autoupdate

or to configure it as a service:

chkconfig --add apt-autoupdate

3.3 Kernel

Please note that the kernel upgrade is not included in the commands of the previous section; you have to force it in this way:

apt-get upgrade-kernel

3.4 Pin preference for local repository

If in your installation you need to give preference to some RPMs, it is possible to use the APT ``pin'' feature; for details refer to:

In our kickstart example we included the APT preferences modification to give higher priority to all the RPMs in the localrpms section of the repository.

mv /etc/apt/preferences /etc/apt/preferences.orig
cat >> /etc/apt/preferences <<EOF
# Maximum priority to local rpms
Package: *
Pin: release c=localrpms
Pin-Priority: 1001
EOF

For example, in CERN-SL pine is installed via a CERN-customized rpm. If you put a ``plain'' pine rpm in the localrpms repository, it will replace the CERN one after apt-autoupdate runs for the first time.

Also, if a higher version of the ``CERN'' pine becomes available in CERN-SL, apt-autoupdate will preserve the ``localrpms'' one.

It is also possible to use the pin mechanism for a single rpm instead of a repository section, for example for the sylpheed package, by including in the APT preferences:

Package: sylpheed
Pin: version 0.4.99*

4. Introduction to RGANG

Nearly every system administrator tasked with operating a cluster of Unix machines will eventually find or write a tool which will execute the same command on all of the nodes.
At Fermilab a tool called "rgang" has been created, written by Marc Mengel, Kurt Ruthmansdorfer and Jon Bakken (who added "copy mode"), and Ron Rechenmacher (who added the parallel mode and "tree structure").
The tool was repackaged as an rpm and is available here:

It relies on files in /etc/rgang.d/farmlets/ which define sets of nodes in the cluster.
For example, "all" (/etc/rgang.d/farmlets/all) lists all farm nodes, "t2_wn" lists all your t2_wn nodes, and so forth.
The administrator issues a command to a group of nodes using this syntax:

rgang farmlet_name command arg1 arg2 ... argn

On each node in the file farmlet_name, rgang executes the given command via ssh, displaying the result delimited by a node-specific header.
"rgang" is implemented in Python and works forking separate ssh children which execute in parallel. After successfully waiting on returns from each child or after timing out it displays the output as the OR of all exit status values of the commands executed on each node.
To allow scaling to kiloclusters it can utilize a tree-structure, via an "nway" switch. When so invoked, rgang uses ssh to spawn copies of itself on multiple nodes. These copies in turn spawn additional copies.

4.1 Required Hardware and Software

Users will need to have Python installed (tested with Python 1.5.2 and 2.3.4). A "frozen" version of rgang is also supplied which does not need any additional packages; it can be found in /usr/lib/rgang/bin/.

4.2 Product Installation

Install the rpm and that's it.

rpm -iv rgang.rpm

A "pre-script" (/usr/bin/rgang) has been created that sets the appropriate environment variables and then execs the Python script or the "frozen" version. You have to change the name of the executable depending on which one you plan to use. In the Python case:

#!/bin/sh
pathToRgang=/usr/lib/rgang/bin
rgOpts="--rsh=ssh --rcp=scp"
# this has to be uncommented if you have a Python version over 2.3
#pyOpts="-W ignore::FutureWarning"
exec python $pyOpts $pathToRgang/rgang.py $rgOpts "$@"

If you need to use the frozen version, modify the pre-script as follows:

#!/bin/sh
pathToRgang=/usr/lib/rgang/bin
rgOpts="--rsh=ssh --rcp=scp"
exec $pathToRgang/rgang $rgOpts "$@"

4.3 Running the Software

A few examples of typical 'rgang' usage are sketched below; refer to the documentation or to the usage/help output of 'rgang -h' for the full set of options.
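
The original examples did not survive in this copy, so the following invocations are only a reconstruction based on the syntax shown elsewhere in this note (the farmlet names are the ones mentioned above; check 'rgang -h' before relying on any option spelling):

# run a command on every node listed in /etc/rgang.d/farmlets/all
rgang all uname -r
# limit the command to the worker nodes
rgang t2_wn 'rpm -q openssh-server'
# copy mode (-c, as used in the appendix below): push a file to all worker nodes
rgang -c t2_wn /etc/ntp.conf /etc/ntp.conf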

5 Troubleshooting

6 Appendix

6.1 Setting the RSA keys

It could be useful to distribute an SSH key from your mother-node to your target-nodes so that you can use ssh-agent for authentication.
To create a key on your mother-node:

ssh-keygen -t dsa

then to copy the public key to the target-nodes in interactive mode (``--pty''):

rgang --pty -c <nodes-spec> /root/.ssh/id_dsa.pub /root/.ssh/authorized_keys

then on your mother-node:

ssh-agent <your shell>
ssh-add

and type the pass-phrase you chose when you created the key; then use 'rgang' as usual (no interactive option needed).

Freshmeat admin script selection:

[Mar 25, 2004] Interview with Siem Korteweg System Configuration Collector By Benjamin D. Thomas


3/25/2004

In this interview we learn how the System Configuration Collector (SCC) project began, how the software works, why Siem chose to make it open source, and information on future developments.

Introduction:

Have you ever noticed changes on your departmental server, but couldn't quite pinpoint what exactly happened? How many times have staff forgotten to make an entry in the log-book, or the entries made were not detailed enough? Administrators are faced with these problems on a day-by-day basis. The System Configuration Collector (SCC) project attempts to automate this process. Rather than depending on staff to keep accurate records, SCC enables a system to record all changes taking place. Additionally, the software has the functionality to send all configuration data to a central server so that it can be analyzed when needed.

System Configuration Collector Project Website: http://www.open-challenge.nl/scc/index.html

LinuxSecurity.com: Please tell us about the SCC project and how it began. When did it start, and who are some of the key contributors?

Siem Korteweg: In 2001 a younger colleague asked whether it was possible to automatically track the changes that were made to the configuration of a system. I told him that was impossible due to the variable nature of the output of the commands we have to use to show the configuration of a system. Being a much younger colleague, he accepted this answer. But I did not like to say it was "impossible" and it kept nagging me.

I thought that when I could split the variable and fixed parts of the output of system commands, I would be able to track changes. I started a small, hobby project by collecting configuration data and preceding each line with "fix:" or "var:". After some time I was able to detect some changes made to configuration. But when a kernel parameter was changed, all I saw was a change from 128 to 256. I had to search in the snapshot to find out what part of the configuration had changed. Therefore I extended the fix-var classification with a hierarchy of keywords indicating the nature of the data.

The development continued, and the customer where I was developing the software was wondering how to maintain this software without hiring me indefinitely. By that time I realized that this software also could and should be used by others. I talked to the customer's manager and to the manager of the company I work for and suggested making SCC a GPL project. They both agreed and from then on, SCC was an Open Source project. To extend the collection of configuration data I looked at the code of cfg2html and check.sh (HP specific) and the FAQs of several newsgroups. At the customer site where I started developing SCC, we deployed the software on some 300 systems. This gave us a great opportunity to tune the "fixed" and "variable" parts of the configuration to avoid unnecessary changes.

The first versions of the software collected configuration data and converted the data and logbook to HTML on a per-system basis. At the customer site, Bram Lous started to collect all snapshots and logbooks on a server and built the first version of the CGI-interface. Later on, Paul te Vaanholt contributed much to the HP OpenView modules. His main contribution is the analysis and conversion to SCC-format of the Operations Center database. A colleague, Oscar Meijer, wrote the Windows version of the SCC-client, based on WMI and WSH. The classification of the data we are collecting on Windows systems still needs to be tuned. The software itself is stable, but it detects too many changes. The whole process of tuning what data is "fixed" and what data is "variable" takes quite some time.

LinuxSecurity.com: What is the most important benefit an administrator can get out of SCC? How can this improve the overall security of a network or host?

Siem Korteweg: Each administrator should document his/her systems. We all know that, but we all lack time to do this properly. SCC automates the documentation process. For HP-UX systems, more than 95% of the system configuration is covered by SCC. For other systems the percentage is somewhat lower at the moment.

The logbooks and snapshots can assist administrators in finding the cause of an incident. Configuration changes can have unwanted side-effects (on other systems). By examining the logbooks for the changes during the last days/weeks an administrator might find the cause of an incident easier/faster. Another way of using the SCC-data to find the cause of an incident is to compare (parts of) the configuration of a system with a comparable system that does function correctly.

Comparing the configuration of systems can also be used to assure that the systems in a cluster are consistent and identical. Do they run the same (versions of) software? Do they have the same kernel-configuration? It is also possible to check your security policies. Just check the snapshots on the server for the aspects of the policies. By default the server checks and signals accounts without a password.

Another use of the SCC-data on the server is to quickly identify systems. After an advisory from Sun, I was able to identify within one minute the 100 systems that needed to be addressed out of a total of 600 systems. Because the selection was automated and because the collection of SCC-data was accurate and up to date, I did not miss a system. This obviously contributes to the safety of the network.

LinuxSecurity.com: How difficult is it to get started? How long would it take for an administrator to get the system fully setup? Can you describe at a high level the steps necessary to setup SCC?

Siem Korteweg: The easiest way to start and get the feeling of the software is to install only the client part and keep the data and logbook on the client. Just create a simple cron-job after the installation of the client and you are finished. This way you are able to pilot the software before you deploy it more widely.

The setup of the server takes some more steps. First you have to decide how to transport the SCC-data from the clients to the server. Supported mechanisms are email (optionally encrypted, using OpenSSL), scp, rcp and cp. Then set up the webserver to display the data. To achieve this, you have to indicate the path under the document-root and the CGI-script of SCC. Then schedule a cron-job to transfer the SCC-data sent by the clients from the transfer-area to the website. Finally, all cronjobs of the clients have to be extended with the proper options to transfer the SCC-data to the scc-server.
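As a very rough sketch only (the paths, schedule, and the server-side script name below are assumptions for illustration, not taken from the SCC documentation), the two cron entries might look like:

# client: collect a snapshot nightly and send it to the scc server (path assumed)
0 2 * * * /opt/scc/bin/scc
# server: site-specific job that moves incoming SCC data from the transfer area
# into the web tree (script name is a placeholder)
30 2 * * * /opt/scc-srv/bin/process-transfer-area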

For several systems I recorded the entire process of configuring the server in logbooks. These logbooks are present at the website. For our HP-UX 11.i system: http://www.open-challenge.nl/scc/scc-web-demo/scc.hpux11i.log.html  

LinuxSecurity.com: What improvement would you like to make in the future? What direction is this project heading?

Siem Korteweg: When running SCC on a system that uses clustering software, like MC ServiceGuard from HP, switching a "package" from one system to another, results in changes of the SCC-data for both systems involved in switching. We want to make the software cluster-aware by extracting the configuration data for each package and sending it separately to the scc-server.

Another future extension is the collection of the configuration of network devices like routers and switches.

LinuxSecurity.com: What advantage does SCC have over using a typical pen & paper log book for recording system changes?

Siem Korteweg:
- It is automated, so it does not "forget" to record a change (supposing the changed attribute is part of the SCC-snapshot). It is not lazy (once you run it through cron).
- The pen & paper logbook is a physical item that can only be in one place. Each admin of a group of systems can be at a different place, without access to the paper logbook. Think of 7x24 operations, where the admins "follow the sun".
- By consolidating all snapshots on a system with scc-srv, you obtain much data that can be searched automatically. This enables you to quickly identify the systems that need an update or to compare two systems when one of them does not function correctly. This is impossible with pen & paper.

LinuxSecurity.com: What operating systems does SCC run on? What type of license is it under?

Siem Korteweg: HP-UX, Solaris, AIX, Linux (RedHat, Suse, Gentoo). As the code of SCC only uses "standard" Unix tools, I think it runs on almost all Unix/Linux systems. The coverage of the configuration data depends on the OS. For example the coverage of HP-UX configuration is more than 90%. For other systems this will be less. The license is GPL.

LinuxSecurity.com: If an administrator needs assistance setting up or configuring SCC is support available? If so, how can support be obtained?

Siem Korteweg: Besides the documentation on our website, SCC comes with documentation and manual pages. We offer an implementation service, where a consultant visits a customer and installs the server and at most 5 clients and introduces the software to the admins of the customer. This is only feasible in the Netherlands. Otherwise, support via email is possible. When the requested support is more than a few simple questions, we have to agree upon payment.

LinuxSecurity.com: How does SCC differ from other similar configuration collectors? What are some of the strengths and weaknesses of SCC?

Siem Korteweg: SCC collects configuration data without formatting it immediately to HTML. Instead it prefixes each line of configuration data with fix/var and a hierarchical classification. This makes it easy to process the snapshots. The processing consists of comparing consecutive snapshots to generate the logbook, formatting the snapshot to HTML and comparing the snapshots of two systems to determine the differences.

The philosophy of SCC is to collect data, not to judge its value or correctness. Stupid configuration errors in Apache/Samba are not detected in scc, this should be done at the server where all snapshots are collected. Some might question the value of all the data in the snapshots. It is true that a considerable part of the snapshots will never change during the lifetime of a system. Nevertheless this data is collected, just in case someone needs it sometimes.

One commercial configuration collector works by allowing remote root-access to all clients from their server. This is not very security minded. I had security in mind when coding scc and scc-srv.

A weakness of SCC is that I coded the classifications of all collected configuration data. This classification has to be used when an admin wants to view specific information. I decided to store cron configuration data under classification "software:cron:" and swap info under classification "system:swap:". Each user of SCC has to follow my intuition.

Another weak point is that the clients are autonomous. The scc-srv can be DOSed by mailing many snapshots from seemingly different systems. Therefore, I suggest installing scc-srv only in a "trusted" network. Finally, scc has to do "reverse engineering" to collect, for example, the Apache configuration. Apache can be installed and configured in dozens of different locations. We have to determine the correct paths and files from the running processes.

LinuxSecurity.com: How can the project benefit from the open source community?

Siem Korteweg: The project can benefit from the open source community when admins use it and contribute their extensions. These extensions can be specific applications/hardware/OS they use or new features. At the moment some people already contribute knowledge of specific software. Feedback concerning the strong and weak aspects admins experience while they are using SCC, is also valuable.

Areas for future extensions are SAN/NAS and network devices. I am looking for people and organisations that are willing to contribute in any way in these areas.

LinuxSecurity.com: I wish to thank Siem, and other contributors to the System Configuration Collector project. We at LinuxSecurity.com would like to wish you the best of luck!

Brains2Bytes Consulting

About: Alist is a program that collects hardware and software information about systems and stores it in a database for users to browse and search via a Web interface. The program consists of three parts: a client portion that collects the information, a daemon that receives data sent from clients, and a CGI that displays and lets you search for information. Clients for Solaris, Linux, FreeBSD, OpenBSD, and Mac OS X are currently available.

Changes: There is a new Windows module (MSWIN32.pm), a new Irix module (irix.pm), bugfixes for the Linux module on Debian, and bugfixes for client/alist and hpux.pm.

Alist is written entirely in Perl 5. The server portion has been tested on Linux, Solaris, and Mac OS X, and should run without any problems on any modern Unix OS, but may not work on non-Unixlike operating systems, due to calls to fork(). The server needs to have a web server, Perl 5, and the Perl CGI.pm module.

The client portion requires Perl 5, but no modules outside the core distribution. There are currently clients for Solaris, Linux, OS X, FreeBSD, OpenBSD, Windows, HP-UX, and Irix. Clients explicitly tested can be found here.

SSGDOC - System Administration at cs.unm.edu

BitKeeper - The Scalable Distributed Software Configuration Management System

BitMover builds and markets enterprise level development tools for software and web developers. Our flagship product is BitKeeper, a powerful replicated and distributed configuration management system. BitKeeper is supported on most platforms, such as Microsoft Windows as well as the various commercial and free Unix platforms. See the products section for more information about BitKeeper and our other products.

Never used BitKeeper? Take the test drive and see how easy it is to get started!


SourceForge.net Project Info - ITracker

About: ITracker is a Java J2EE issue/bug tracking system designed to support multiple projects with independent user bases. It supports features such as multiple versions and project components, detailed histories, issue searching, file attachments, dynamic reports with charts, and multiple email notifications.

Team Development with WebSphere Studio Application Developer -- Part 3 Installing and Configuring CVS on RedHat Linux 7 as an SCM Repository 

This article, the third one in a series on team development in IBM® WebSphere® Studio Application Developer, focuses on installing and configuring CVS on RedHat Linux 7 as an SCM Repository. WebSphere Studio Application Developer (hereafter called Application Developer) works seamlessly with CVS, the dominant open-source, network-transparent version control system. CVS runs on most platforms, including Windows®, Linux, AIX®, and UNIX®. Installing it with Application Developer on RedHat Linux has several advantages:
 

[Jan 04, 2002] O'Reilly Network: Introduction to CVS

LinuxProgramming: GNOME 2.0 Summary (How to compile GNOME 2.0 from CVS) (May 02, 2001)
LinuxPlanet: Don't Trip on the Red Carpet, Evolve with GNOME CVS (Feb 23, 2001)
Advogato: CVS mixed-tagging for massive Open Source Project Management (Feb 21, 2001)
zez.org: Version Control Management with CVS - Part 2 (Nov 26, 2000)
zez.org: Version Control Management with CVS - Part 1 (Nov 07, 2000)
 

developerWorks/Automating UNIX system administration with Perl

... ... ...

The tool cfengine

If you are serious about automating system administration, cfengine is a tool you should know. Ignoring cfengine is a viable option only if you like to spend your days in the vi editor.

cfengine is a system configuration engine. It takes configuration scripts as input, and then takes actions based on these scripts. It is currently at version 1.6.3 (a very stable release), and version 2.0 is on the horizon. For more information on cfengine development, visit the cfengine Web site (see Resources later in this article).

You don't have to use everything cfengine offers, and you will probably not need the whole thing all at once. Your cfengine configuration files should start out simple, and grow as you discover more things that you want automated.

From the cfengine command reference, here are its most notable features:

Even though you can do with Perl all the things that cfengine does, why would you want to reinvent the wheel? Editing files, for instance, can be a simple one-liner if you want to replace one word with another. When you start allowing for system subtypes, logical system divisions, and all the other miscellaneous factors, your one-liner could end up being 300 lines. Why not do it in cfengine, and produce 100 lines of readable configuration code?

From my own experience, introducing cfengine to a site is quite easy, because you can start out with a minimal configuration file and gradually move things into cfengine over time. No one likes sudden change, least of all system administrators (because they will get blamed if anything goes wrong, of course).

Configuration file management

Managing configuration files is tough. You can start by considering whether cfengine is adequate for the task. Unfortunately, cfengine's editing is line oriented, so complex configuration files will probably not be a good match for it. But simple files such as the TCP wrappers configuration file /etc/hosts.allow are best done through cfengine.

Usually, you will want to keep more than one version of configuration files. For instance, you may need two sets of DNS configurations in /etc/resolv.conf, one for external, and another for internal machines. The external DNS resolv.conf file could, naturally, go into a directory called "external", while the internal resolv.conf could go into the corresponding "internal" directory. Let's assume both directories are under a global "spec" directory, which is a sort of root for configuration files.

The following code will traverse the spec directory, searching for a filename suitable for a given machine. It will start at /usr/local/spec and go down, looking for files that match the one requested. Furthermore, it will check whether or not each directory's name is the same as the class belonging to some machine. Thus, if we request locate_global('resolv.conf', 'wonka'), the function will look under /usr/local/spec for files named resolv.conf that are in either the root directory, or in children of the root directory whose names match the classes that the "wonka" machine belongs to. So, if "wonka" belongs to the "chocolate" class, and if there is a /usr/local/spec/chocolate/resolv.conf file, then locate_global() will return "/usr/local/spec/chocolate/resolv.conf".

If locate_global() finds multiple matching versions of a file (for instance, /usr/local/spec/chocolate/resolv.conf and /usr/local/spec/resolv.conf), it will give up. The assumption is that we are better off with no configuration than with one of the two wrong ones. Also, note that machines can belong to more than one class.

You can build on this structure. For instance, a tree with class subdirectories such as /usr/local/spec/external/chocolate and /usr/local/spec/internal/sugar will contain files for external and internal "chocolate" and "sugar" machines. You just have to set up your machine_belongs_to_class() function correctly.

Once locate_global() returns a file name, it's pretty simple to copy it to the remote system with scp or rsync. Remember, always preserve the permissions and attributes of the file. Scp needs the "-p" flag, and rsync needs the "-a" flag. Consult the documentation for the file copy command you want to use. And there you have a unified configuration file tree.
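As a sketch of that copy step (the hostname and source path are illustrative; pick whichever transport you already use):

# push the file locate_global() selected, preserving permissions and timestamps
scp -p /usr/local/spec/chocolate/resolv.conf wonka:/etc/resolv.conf
# or, equivalently, with rsync in archive mode (keeps mode, owner, and mtimes)
rsync -a /usr/local/spec/chocolate/resolv.conf wonka:/etc/resolv.conf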

Listing 1: Spec directory traversal

# {{{ locate_global: use spec directory to find a file matching the current class
sub locate_global($$)
{
 use File::Find;   # this code uses File::Find's find(), $File::Find::name and $File::Find::prune
 my $spec_dir = '/usr/local/spec';
 my $file = shift || return undef;      # file name sought
 my $machine = shift || return undef;   # machine name
 my @matches;
 my $find_sub =
  sub
  {
   print "found file $_\n";
  
   push @matches, $File::Find::name if ($_ eq $file);
   # the machine_belongs_to_class sub returns true if a machine
   # belongs to a class; we stop traversing down otherwise
   $File::Find::prune = 1 unless
    machine_belongs_to_class($machine, $_) || $_ eq '.';
  };

 find($find_sub, $spec_dir);

 if (scalar @matches > 1)
 {
  print "More than one match for file $file,",
        "machine $machine found: @matches\n" ;
  return undef;
 }
 elsif (scalar @matches == 1)
 {
  return $matches[0];                   # this is the right match
 }
 else
 {
  return undef;                         # no files found
 }
}
# }}}

One challenge once you set up this sort of /usr/local/spec structure is: how do we know that resolv.conf should go into /etc? You either have to do without the nice hierarchical structure shown here, adapt it (replace "/" with "+", for instance -- a risky and somewhat ugly approach), or maintain a separate mapping between symbolic names and real names. For instance, "root-profile" can be the symbolic name for "~root/.profile". The last approach is the one I prefer, because it flattens out filenames and eliminates the problem of having hidden filenames. Everything is visible and tidy, under one directory structure. Of course, it's a little more work every time you add a file to the list. The program has to know that "resolv.conf" should be copied to "/etc/resolv.conf" on the remote system, and "dfstab" should go to "/etc/dfs/dfstab" (the Solaris file for sharing NFS filesystems).
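A minimal sketch of that last approach as a shell helper (the symbolic names and destinations below are only examples, not part of the original article):

# map a flat symbolic name in the spec tree to its real destination on the client
map_symbolic_name() {
    case "$1" in
        resolv.conf)   echo /etc/resolv.conf ;;
        dfstab)        echo /etc/dfs/dfstab ;;   # Solaris NFS share table
        root-profile)  echo ~root/.profile ;;    # hidden file gets a visible, flat name
        *)             return 1 ;;               # unknown symbolic name
    esac
}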

Now let's talk about what you can do once you have this spec directory hierarchy set up. You could, if you wanted to, look for all the users named Joe:

Listing 2: Find all password files and grep them for Joe


grep Joe `find /usr/local/spec -name passwd`

Or you can use a tool such as rep.pl (link to rep.pl), written by David Pitts, to replace every word with another:

Listing 3: Find all hosts files and change "wonka" to "willy"


find /usr/local/spec -name hosts -exec rep.pl wonka willy {} \;

Now, you can write both Listing 2 and 3 in Perl, if you want; the find2perl utility was written just for that. It's much simpler, however, to just use find from the start. It really is a wonderful utility that every system administrator should use. More importantly, it took me 5 minutes to write the two listings. How long would it take you to figure out how to use find2perl, store the code it produces in a file, then run that file? Try it and see for yourself!
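If you are curious, the find2perl detour looks roughly like this (the output file name is arbitrary):

# translate the find command line into a standalone Perl program, then run it
find2perl /usr/local/spec -name passwd -print > find_passwd.pl
perl find_passwd.pl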

Task automation
Task automation is an extremely broad topic. I will limit this section to only simple automation of non-interactive UNIX commands. For automation of interactive commands, Expect is the best tool currently available. You should either learn its syntax, or use the Perl Expect.pm module. You can get Expect.pm from CPAN; see Resources for more details.

With cfengine, you can automate almost any task based on arbitrary criteria. Its functionality, however, is a lot like the Makefile functionality in that complex operations on variables are hard to do. When you find that you need to run commands with parameters obtained from a hash, or through a separate function, it's usually best to switch to a shell script or to Perl. Perl is probably the better choice because of its functionality. You shouldn't discard shell scripts as an alternative, though. Sometimes Perl is overkill and you just need to run a simple series of commands.

Automating user addition is a common problem. You can write your own adduser.pl script, or you can use the adduser program provided with most modern UNIX systems. Make sure the syntax is consistent between all the UNIX systems you will use, but don't try to write a universal adduser program interface. It's too hard, and sooner or later someone will ask for a Win32 or MacOS version when you thought you had all the UNIX variants covered. This is one of the many problems that you just shouldn't solve entirely in Perl, unless you are very ambitious. Just have your script ask for user name, password, home directory, etc. and invoke adduser with a system() call.

Listing 4: Invoking adduser with a simple script


#!/usr/bin/perl -w

use strict;

my %values;                             # will hold the values to fill in

# these are the known adduser switches
my %switches = ( home_dir => '-d', comment => '-c', group => '-G',
                 password => '-p', shell => '-s', uid => '-u');

# this location may vary on your system
my $command = '/usr/sbin/adduser ';

# for every switch, ask the user for a value
foreach my $setting (sort keys %switches, 'username')
{
 print "Enter the $setting or press Enter to skip: ";
 $values{$setting} = <STDIN>;
 chomp $values{$setting};
 # if the user did not enter data, kill this setting
 delete $values{$setting} unless length $values{$setting};
}

die "Username must be provided" unless exists $values{username};

# for every filled-in value, add it with the right switch to the command
foreach my $setting (sort keys %switches)
{
 next unless exists $values{$setting};
 $command .= "$switches{$setting} $values{$setting} ";
}

# append the username itself
$command .= $values{username};

# important - let the user know what's going to happen
print "About to execute [$command]\n";

# return the exit status of the command
exit system($command);

Another task commonly done with Perl is monitoring and restarting processes. Usually, this is done with the Proc::ProcessTable CPAN module, which can go through the entire process table, and give the user a list of processes with many important attributes. Here, however, I must recommend cfengine. It offers much better process monitoring and restarting options than a quick Perl tool does, and if you get serious about writing such a tool, you are just reinventing the wheel (and cfengine is stealing your hubcaps). If you do not want to use cfengine for your own reasons, consider the pgrep and pkill utilities that come with most modern UNIX systems. pkill -HUP inetd will do in one concise command as much as a Perl script four or more lines long. This said, you should definitely use Perl if the process monitoring you are doing is very complex or time sensitive.

For the sake of completeness, here is a Proc::ProcessTable example that shows how to use the kill() Perl function. The "9" as a parameter is the strongest kill() argument, meaning roughly "kill process with extreme prejudice, then feed it to the piranhas." Do not run this as root, unless you really want to kill your inetd processes.

Listing 5: Running through the processes, and killing all inetds


use Proc::ProcessTable;

$t = new Proc::ProcessTable;

foreach $p (@{$t->table}) 
{
 # note that we will also kill "xinetd" and all processes
 # whose command line contains "inetd"
 kill 9, $p->pid if $p->cmndline =~ 'inetd';
}

Host Factory (white paper)

A typical Unix contains 20,000 files. A typical large site contains 100 or more hosts. Keeping each of the resultant 2 million files correct and consistent is a difficult version control problem. Often the problem is not solved, and each host becomes a unique collection of files from differing operating system versions. Reliability plummets as versions of programs interact that vendors never tested for interoperability, and the cost of maintenance soars as the same problem is solved differently for each host. What is needed is a place to store operating system distributions under version control, a place to generate configuration files that differ between hosts, and a method to install these files onto running systems with minimum interruption and maximum automation. The Host Factory software from Working Version fulfills all of these needs. Components of Host Factory include the Pgfs version control filesystem, a Host Profile developed for your site, and the Pdist filesystem replicator.

netSwitch 0.1.3 A boot-time network configuration tool for Linux laptops.

Helix Setup Tools 0.2.0 A simplified interface for Unix workstation configuration.

Information Resource Manager - IRM is a Web-based asset and problem tracking system built for IT departments and helpdesks. It keeps detailed information, both hardware and software, about each computer, as well as a complete history of all work requests ever placed.

SFI Director - The SFI Director is a tool for managing distributed, heterogeneous UNIX systems.

Its functionality includes System Configuration, Application Distribution, NIS & NIS+ Management, User Creation and Dynamic System Documentation.  

LANdb: The Network Administration DB

LANdb is a network administration CGI package written in Perl. It uses an RDBMS (i.e., MySQL or Oracle) to store information on all network hardware, connections, and connection statuses.

cfengine daemon - Perl-cfd is a superior implementation of the cfengine 1.x server daemon. It has been tested with cfengine v1.4.17 and v1.5.3 clients. It should work with older v1.4.x and other v1.5.x clients.

SysWatch - SysWatch is a Perl CGI to display current information about your UNIX system. It can display drive partitions, drive use, as well as resource hogs, and what current users are doing.
 

Large Scale System Administration

[This article is essentially a compacted-for-LINK.bnl version of one of the topics covered by MIX (Monthly Information eXchange) Meeting Notes - 09/24/97, written by Susan Sevian. The speaker for this topic was Jim Flanagan of CCD's Advanced Technology and Planning Section.

Notes from any of our MIXes -- generally more detailed than what we provide in LINK.bnl -- are available on the web. Please see the reference to MIXed Notes at the bottom of our MIX page.]

Tools for large scale system administration are being developed in conjunction with the RCF (RHIC Computing Facility) / CCD effort to set up and manage computing systems for RHIC. With a large number (hundreds) of RHIC computers, such system administration tools are needed in order to avoid tedious and error-prone manual efforts to synchronize operating system and node configuration changes.

Under the strategy adopted by RHIC/CCD, configuration information is kept in a hierarchical, class-based central repository, with the configuration of each node viewed as a specialization of more abstract configuration classes. The tool being developed for manipulating this repository is SyRCS, a wrapper around the Revision Control System (RCS), written in Perl. SyRCS provides simple, familiar commands (emulating such UNIX and RCS commands as ls, ci, co), which are used to maintain and inspect the repository and to check node configurations against the repository for "undisciplined" or unauthorized changes.

Unix SysAdm Resources Automated Unix SysMgmt Software

[May 12, 2001] Sys Admin Magazine Online Automatic UNIX Documentation with unixdoc by Roman Marxer

There's no need to spend days documenting your servers. I've written a program that can help. unixdoc collects all the configuration files and other information about your computers into an HTML file and sends it to a display server where it can be viewed with a browser. It works on Solaris 2.6/7/8 and on HP-UX 10.20. On the display server, you can see an overview page with all your systems as shown in Figure 1. By selecting a computer, the unixdoc HTML page of this computer will be displayed as shown in Figure 2.

The unixdoc HTML file of a Solaris computer consists of the following 18 sections:

  1. Hardware
  2. Eeprom
  3. Kernel
  4. Networking
  5. Software
  6. Nameservices
  7. Bootup
  8. Disk
  9. Disk Hardware
  10. Users
  11. dmesg
  12. Printers
  13. Cron
  14. Rhosts
  15. Quota
  16. Syslog
  17. Xntpd
  18. Sendmail

The information in these sections consists of either config files or the output of a command. With unixdoc, it is easy to compare the configuration of two servers. You just have to open the two unixdoc HTML pages of the servers and compare the content, section after section. You don't have to do a login on the two servers, or to remember all those commands to display the configuration. I find subsection 4.1.1 ifinfo helpful, because it provides a good overview of all the network interfaces (speed, mode, etc.). (Subsection 4.1.1 is shown in Figure 3.) The information in this subsection is very useful when verifying the speed/mode settings between your switches and servers. An example of the entire unixdoc HTML page can be found at:
http://www.net.li/article
The software can be found at: http://www.net.li/article

[Mar 19, 2001] In Daniel Robbins' newest tutorial, learn to use CVS to check out the latest software sources, or begin using CVS as a full-fledged developer. (Linux)


Document Management Systems

[Apr 04, 2001] Ecora -- very nice package that includes Solaris documenter with HTML output

Whether you are an IT manager, systems integrator, consultant, or reseller, the demands on the IT environments you support are considerable and complex. Preparing for an IT audit, for example, is a time-consuming and tedious process. Our Documentor and IT Auditor products automatically create a comprehensive, natural-language report of your IT infrastructure. This can be used to create an audit trail to meet HIPAA requirements, prepare for a security audit or provide thorough documentation for a system audit. We invite you to experience for yourself the benefits of documentation. Click here to download an .exe file to document a server for free.

Benefits to system documentation:

Ecora Documentors are available as downloadable software for Cisco, Lotus Domino, Microsoft Exchange, Oracle instances, Sun SPARC Solaris, and Windows NT/2000.

www.perl.com - Perl Rescues a Major Corporation

Company B received a contract to develop a new piece of hardware. As part of this contract, they were to supply their documents online.

First, company B looked into a Commercial, Off-The-Shelf (COTS) document management system. It seemed to meet all of their needs, until they found out that the cost was over $600,000. The price was way too high, in fact it was higher than the original budget for the whole contract!

Next, they decided to go with a proprietary document management system (DMS) that the company had an enterprise license for. This DMS was supposed to be the "do-all, end-all" DMS that would solve all of their problems. And since it was a commercial product and they had an enterprise license for it, the managers of the project assumed that there must be plenty of support available for it.

Company B spent over 6 months installing, configuring, and tweaking this DMS system on the new hardware that they had to buy in order to run it. When they ran into trouble, they called the people within company B who were supposed to be experts on the system for help. These experts didn't know the system any better than the group working on the project and support from the software company was either too pricey, or not much help. So much for the availability of support for this COTS product!

After 6 months of frustration, they gave up on the company standard DMS and implemented a "solution" using File Manager. This solution provided no features of a DMS, was cumbersome and documents were hard to find.

Perl to the Rescue

At this point I came along - and I was completely confident that I could solve their dilemma using a web-based solution with Perl. What other language would I use?

I talked with the program managers and we discussed what the needs of the DMS were. Next, I gathered user input, which, in my opinion, is the most important factor. When developing a system that is going to impact the way your users work on a system, it is important to understand their needs. After considering the needs of users and management, I proposed a Web-based DMS which management quickly approved. Now all I had to figure out was: how am I going to pull this off?

I started to develop the new system and the pieces seemed to fall into place. Eight weeks later, when we rolled out the new Perl DMS system, I completely shut off the existing File Manager access so users had no choice but to use the new system. It was a rather brutal way to force them onto the new system, but one that I felt was necessary.

The New System

The new Perl DMS system has the following features (and more):

[Sep 30, 2000] Linux PR OpenWatcom.org to Use Perforce the Fast Software Configuration Management System

The Open Watcom project requires an industrial strength source control system, that's why we selected Perforce for the job.

ALAMEDA, Calif., Sept. 29 /PRNewswire/ -- Perforce Software, Inc. today announced that SciTech Software has selected the Perforce source code control system to manage the Open Watcom source code base. The Perforce software will enable the large team of developers participating in the Open Watcom worldwide to have up-to-the-minute access to the latest Open Watcom source code via the Internet.

"Perforce itself has benefited tremendously from Open Source software, and we feel it is only fitting that we return the favor. We're especially happy to be supporting the Watcom C++ compiler, which powers a number of our platforms," said Christopher Siewald, president and chief technology officer of Perforce Software.

Perforce Software makes its Fast Software Configuration Management System available at no charge to bona fide organizations developing freely available software, such as OpenWatcom.org. The Open Watcom code base consists of nearly three million lines of code.

"The Open Watcom project requires an industrial strength source control system, that's why we selected Perforce for the job," said Kendall Bennett, Director of Engineering at SciTech Software, Inc. "SciTech uses Perforce for internal projects, so we know that it can handle the massive demands that the Open Watcom project is going to place on a distributed source control system."

Developers wishing to access the Open Watcom Perforce system can register at Open Watcom's web site ( http://www.openwatcom.org ) to be automatically notified when it comes online.

About Open Watcom

Open Watcom is the result of the Open Source release of the Sybase Watcom C/C++ and Fortran compilers. The Open Watcom products are the first mass market, proprietary compilers to be open sourced and, weighing in at nearly three million lines of source code, represent one of the largest pools of commercial source code of any type ever released under an Open Source license. Sybase, Inc. developed the original Watcom code and SciTech Software, Inc. is the official maintainer of the project. The project has already stirred tremendous interest among thousands of developers worldwide, who will use and contribute to its further development. Open Watcom supports software development in Windows, DOS, OS/2, Netware, QNX, and other operating systems. A Linux version of Open Watcom is planned. The Open Watcom web address is http://www.openwatcom.org.

BitKeeper - Distributed source management and version control

A scalable configuration management system, supporting globally distributed development, disconnected operation, compressed repositories, change sets, and named lines of development (branches).

Distributed means that every developer gets their own personal repository and the tool handles moving changes between repositories. SSH, RSH, and/or SMTP can all be used as communication transports between repositories; or, if both are local, the system just uses the file system. For example, this resyncs from a local file system to a remote system using ssh:

bk resync /home/lm/bk bitmover.com:/home/bk

Other features: file names are revisioned and propagated just like contents; graphical interfaces are provided for merging, browsing, and creating changes; changes are logged to a private or public change server for centralized tracking of work; bug tracking is in the works and will be integrated.

Autoconfiscating Amd: Automatic Software Configuration of the Berkeley Automounter -- a very interesting paper.

Process Improvement -- slides

Wilma 1.xMN

Wilma is a suite of CGI scripts that allows you to easily manage a list of items (broken into discrete categories) on the Web. With Wilma, you can make lists of bookmarks, resources, reviews, classified ads, 'what's new' lists, bulletin boards and much more. Anything that needs to be indexed and easily maintained is a good candidate for Wilma.

Version 1.xMN of Wilma is independent of the original distribution by E-doc. It is free for non-commercial use (i.e., as long as you don't make money off it-- see the license), and requires Perl 5 on a Unix machine.

Using Wilma

Wilma is extremely flexible. You can have a public submission facility, to allow anyone to add resources, or you can password protect it (with .htaccess) to restrict access to selected people; in this way, you can manage lists of meeting minutes, job offerings or items for sale. You can even use Wilma (or several Wilmas) to manage an entire site's index. By keeping control over the organization of a site with Wilma while allowing people to add and update pages at will, you can take the headache out of Intranet management.

Downloading Wilma

The most current version of Wilma is 1.36MN, which includes bugfixes and several new features. It's probably a good idea to read some documentation first. Wilma is available in a tarred, gzipped archive. To unpack it, move it to the desired directory and type

$ gzip -d wilma1.36.tar.gz
$ tar -xvf wilma1.36.tar

I'd love to hear what you think of my version of Wilma; drop me a line!

About this Version

This version of Wilma is by Mark Nottingham, and is unsupported by E-doc. While there have been many enhancements, none of it would be possible without their generous contribution of the original software to the 'net. Thanks, Andrew and Daniel! Support queries and bug reports should go to Mark Nottingham. Please check the FAQ before mailing. If you're upgrading from a previous version, you'll find that changing to this version only requires entering your values to the new wilma.conf file, as well as copying your data directory over. Please pay attention to the license information found in the docs/ directory, as use of this software implies responsibilities to the current author, as well as the original authors. Enjoy!

5/12/97


Tutorials


Recommended Links



FAQs


Recommended Papers

Love/Hate

Although, or perhaps because, I quit my first real job (at a quickly defunct startup company called Enfoprise, building "business workstations") on the first day because they had changed my job assignment from UNIX driver writing to "Systems Integration", I have had a longstanding love/hate relationship with configuration management tools like SCCS and RCS.

Boxes

My first published paper was "Boxes, Links, and Parallel Trees: Elements of a Configuration Management System" in the first USENIX Workshop on Software Management. In it I described a centralized RCS database, with multiple "views" and hardlink cloning to save space and time, as used by Gould Computer Systems Division's UNIX team.

Dissed by CVS

Brian Berliner (who preceded me at Gould, before he left for Prisma) deprecates my approach in one of the CVS papers, mainly because he advocates an optimistic concurrency control approach, whereas he thought that I advocated locking. Actually, I advocate optimistic concurrency control, but I also advocate locking in case the optimistic version gets into livelock; and, I usually insist that there be a single, identified, serial schedule of source code checkins so that testing can proceed in a linear manner. I require programmers to test that their new code works in a system with all previous fixes applied. (Although I recognize that even this requirement can be relaxed.) I am amused that locking has slowly been creeping back into CVS.

ITworld.com - How to manage system files (and anything else) with SCCS

How often does this happen to you? You add a new Web server to the network, inserting its IP address in   /etc/hosts with plenty of time to spare before the Demo For Big People. At T-minus one hour to demo, your browser can't resolve the hostname. Neither can anyone else's. 

Frantic, you check everything before finally coming back to /etc/hosts. Your change is gone, probably because someone else edited the file around the same time and overwrote or removed your edits. You either need some strong configuration control, or a truly loud warning bell that signals anyone's attempt to modify a critical file. Text editors aren't databases -- they don't impose transactional consistency or concurrency control for multiple updates. This doesn't affect you one bit if you're the sole system manager at your site, but as soon as two or more people are chartered to maintain the environment, you need some sort of control system to serialize and document configuration changes. The downside is that you'll spend a non-trivial amount of time deciphering changes made by your peers or un-doing valid work that conflicts with items on your own task list.

In this feature we look at the source code control system, or SCCS, bundled into nearly every Unix operating system and a staple of simple configuration control.

After explaining the basics of SCCS file administration, we'll look at the more difficult issues of merging changes and dealing with files owned by root. Our goal is to reduce the mystery and annoyance factor of SCCS, and make it a viable tool for producing an electronic version of your "site book" documenting the who, what, and why of system-configuration changes.

Rewriting history
SCCS is really a collection of tools that control updates to ASCII files. You can use SCCS with binary data, which will be converted into ASCII form using uuencode, but we'll limit this discussion to ASCII data since that's the source for most configuration files. SCCS lets you put files under configuration control, check out read-only copies, acquire write locks for updates, check in and document changes, print histories, and identify and combine specific updates. Any text file can be put under SCCS's control, making it useful for managing plain text documentation and meeting notes.

Before going into the functional details, here's a bit of terminology:

When you place a file under SCCS control, SCCS creates the history file. To change the file, you check it out for editing, and then each subsequent change to the file is annotated in the history file when you check the modified version back in. SCCS locks the history file while one user is editing it to prevent concurrent updates.

Bones of contention
Let's walk through some basic SCCS operations to see how the components fit together, and then get into the grittier problems that make SCCS more of a benefit than an added burden. First, you'll need to have /usr/ccs/bin in your path, since that's where the SCCS commands live (in SunOS, they're part of /usr/bin).

You can call the individual SCCS commands, or use the sccs front-end tool to simplify life. We'll use the front-end for illustrative purposes, but you can also call the SCCS subcommands directly. Make sure you have an obvious place to store history files, such as a subdirectory called SCCS. SCCS commands look for this subdirectory if you don't give an explicit history file location.
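A minimal first-time setup, using only what has just been described (locations assumed to match the article), might be:

 huey% PATH=$PATH:/usr/ccs/bin      # pick up the SCCS commands (already in /usr/bin on SunOS)
 huey% cd /etc
 huey% mkdir SCCS                   # history (s.) files for the /etc configuration files live here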

Take a vanilla ASCII file and put it under SCCS control, using the admin command:

 huey% sccs admin -ihosts hosts 
This creates an SCCS history file called hosts, initialized with the content of the file named hosts. You want the history file and the actual file to be namesakes unless you're particularly good at associating strange path names with your /etc files. You can choose any file you want for the initialization; if you've just sorted your hosts file into /tmp/hosts.sorted, the above command line might be:


	
 huey% sccs admin -i/tmp/hosts.sorted hosts 
If all goes well, sccs admin returns quietly to the shell prompt. The most common complaint is that the initial file doesn't contain any ID keywords, which are magic strings filled in by SCCS with the file name, delta numbers, and date and time stamps. We'll talk about the keywords and how to maximize your enjoyment of them shortly. Successful submission of a file to SCCS creates a new s-file in the SCCS directory. The file is primarily ASCII text, with SCCS records marked with an ASCII SOH (start of header) character, showing up as control-A in most editors. All revisions, delta histories, and access control information go into the s-file.

When you're ready to use the file, check out a read-only copy:

 huey% sccs get hosts
 1.2
 10 lines

SCCS tells us the current SID of the file and its size. The get operation produces a read-only file in the current directory, and it will complain if there's a writeable version of the file already present. After you initialize a history file, be sure to rename or remove the initial file to prevent problems on your first check-out operation.


	

Edit the file by checking out a writeable version, using sccs get -e or the shorthand sccs edit:

 huey% sccs edit hosts
 1.2
 new delta 1.3
 10 lines

This time, we're told the new delta number to be created by our editing session. If someone else is editing the file at the time, SCCS produces an error:

 huey% sccs edit hosts
 1.2
 ERROR [SCCS/s.hosts]: being edited: `1.2 1.3 stern 95/06/16 17:41:22' (ge17)

Our first contention point is removed: any request to edit a file that is already being consumed by another system administrator is met with a cryptic yet gentle slap on the keyboard. If you want to find out who is currently editing SCCS-controlled files, use the info subcommand:

 huey% sccs info
 hosts: being edited: 1.2 1.3 stern 95/06/16 17:41:22
 aliases: being edited: 1.45 1.46 wendyt 95/06/17 14:50:33

Make your changes a part of the file's permanent record using sccs delta:

 huey% sccs delta hosts
 comments? added two new host entries
 1.3
 2 inserted
 0 deleted
 10 unchanged

Your writeable source file is removed when you file the deltas, so you have to do another sccs get to fetch the latest, read-only copy, or merge the delta and get operations together with sccs delget hosts.

At this point, you can feed the read-only file into whatever system management step comes next: running an NIS make, executing newaliases, or restarting a daemon with its new configuration file.
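For example, a hosts change might end with something like the following (the NIS push is illustrative; your site's follow-up step may differ):

 huey% sccs delget hosts            # record the delta and fetch a fresh read-only copy
 huey% cd /var/yp
 huey% make hosts                   # rebuild and push the NIS hosts map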

Letters of intent
How can you determine the version number of a file, or if it's even SCCS controlled? When you check a file out, the get subcommand fills in SCCS keywords with values such as the SID, pathname of the history file, date, and time. The SCCS magic cookie indicating a keyword is a single, capital letter between percent signs, such as %Z%. Put the SCCS keywords in a comment header in your file, and you have a built-in identification scheme. Here's a sample header for a configuration file that uses the pound sign (#) as a comment character:

 # %M% %I% %H% %T% 

This set of keywords gives you the filename (M), the file revision or SID (I), the current date (H), and the time of checkout (T). You may also choose to insert the pathname to the s-file (P). (Here is a partial list of SCCS magic cookies.) The %W% keyword generates the filename and SID prefixed with the string @(#), which is assumed to be unique to the SCCS system. The what utility searches for the SCCS prefix and prints any information after it, allowing you to quickly identify any number of files.

To include other information to be picked up by what, use the %Z% keyword to insert an SCCS cookie and then build your own identification string. A more verbose version of the example above is easily found by what:

 # %Z% common hosts file revision %I% of %H% at %T% 
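Checked out and then scanned with what, a header like that one produces output roughly like this (revision and date values are illustrative):

 huey% what /etc/hosts
 /etc/hosts:
         common hosts file revision 1.3 of 06/17/95 at 16:49:32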

what is smart enough to look in the string tables of executables and libraries, so it will identify the SCCS versions of each object component. Bundle an SCCS string into a C program with a global definition like this:

 char *sccs_id = "%Z% %I% %H% %T%"; 

While peeking at the SID and file origins is useful for quick sanity checks, reviewing the delta history of a file is more likely to tell you who changed something and why. When you create the delta, SCCS asks for a comment which is then recorded with your login in the history file. Dump the delta history using sccs prs:

 huey% sccs prs hosts
 SCCS/s.hosts:
 D 1.2 95/06/16 16:49:32 stern 2 1 00002/00002/00008
 COMMENTS:
 added alias for wind, new host shower
 D 1.1 95/06/16 16:43:30 stern 1 0 00010/00000/00000
 COMMENTS:
 date and time created 95/06/16 16:43:30 by stern

The line introducing each delta shows you the SID, date and time of change, and the login of the person making the change. The slash-separated numbers are the line counts of new, deleted, and unchanged lines. The manual pages for the prs subcommand also list all of the possible SCCS keywords and their expanded values.


	

Merge ahead
We still haven't tackled two of the hardest problems in change management: how do you get multiple users to access SCCS files, particularly when the files are owned by root, and how do you merge changes together? The first problem doesn't have an easy solution. You can keep all of your SCCS history files in /etc/SCCS, and insist that system administrators include their user names when making changes as root. Since this is fairly unlikely, the next step is to make the SCCS history files group-writeable by members of your system management group (creating a new user group if you need to). Create private SCCS work areas for each system manager using symbolic links to the actual history file location:

huey% mkdir ~stern/etc
huey% ln -s /etc/SCCS ~stern/etc/SCCS
huey% cd ~stern/etc
huey% sccs edit hosts

Within ~stern/etc, an sccs edit hosts picks up the s-file /etc/SCCS/s.hosts, giving me a private copy of the hosts file to work on.

When I check it back in, the single host-specific copy is returned where other managers (and the system) can find it, but it has my user name attached to changes instead of root. To publicize the changes, I need to su to root, cd into /etc, and then do an sccs get hosts to fetch my latest changes and install the file. Note that the symbolic link points to a machine-specific location, which means I have to be logged on to the machine on which I want to make the edits before doing the checkout. I can always move SCCS files around, as long as files get installed on the appropriate machines.
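In command form, that publishing step looks roughly like this (following the article's hosts example):

 huey% su -
 Password:
 huey# cd /etc
 huey# sccs get hosts               # install the latest revision where the system expects it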

If you're worried about giving up some measure of security regarding permissions on /etc/hosts, remember that only root can install the file in /etc and rebuild NIS maps or restart daemons. For an added layer of safety, using the SCCS access control feature, explicitly name allowed users with sccs admin -a:

 huey% sccs admin -astern
 huey% sccs admin -awendyt

But the opening question still lingers: how do I find out what happened to my hosts file at 3:30 on Friday afternoon June 16, and who did it? The easiest way is to look at the delta history since that time:

 huey% sccs prs -l -c95-06-16-15-30 hosts 

The -l flag says I'm interested in things that occurred after the time specified with the -c flag. The time and date are given in YYMMDDHHMM format, with any non-white space character separating the items. This example shows me the revision history comments and the user names responsible for making changes.

If I want to see the actual line by line edits, it's sccs diffs to the rescue:

 huey% sccs diffs -c95-06-16-15-30 hosts 

Like the diff command, this compares the current working copy of a file to any older delta, identified by SID or by a timestamp. In this example, I'll see the list of changes between the current hosts file and the one that existed at 3:30 PM on June 16. Want to regenerate the hosts file, minus a few changes? get lets you include or exclude any SID, providing a simple mechanism to drop changes from the current copy of a file:

 huey% sccs get -x1.6,1.7 hosts 

The current hosts file is retrieved without the changes applied in SIDs 1.6 and 1.7. If you want to extract the changes made in those deltas, generate the differences with context in a form that can be later fed to sed, just like the output of the standard Unix diff command:

 huey% sccs get -r1.6 hosts
 huey% sccs diffs -r1.5 hosts > hosts.sed.6

If you plan on applying the patches at a later time, when the hosts file may have undergone some additional minor edits, you'll need to generate context differences that can be fed through patch:

 huey% sccs diffs -C -r1.5 hosts > hosts.sed.6 

diff takes the -c flag for generating context differences, but sccs diffs takes -C to avoid conflict with the timestamp flag.

Control freaks
Like all powerful system administration tools, SCCS has a number of poorly documented but interesting features and subtle caveats:

There's certainly much more that can be done with SCCS. In the last issue of Advanced Systems, Chuck Musciano suggested using a Web browser front end for checking files in and out, and viewing the history. A bit of creative perl or awk programming lets you generate HTML out of the sccs prt output. Send us your marriage proposals for HTML and SCCS, and we'll attach the interesting submissions to this page.
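As a toy illustration of that idea (it merely wraps the revision history in a preformatted HTML page; a real front end would parse the sccs prt output properly):

 huey% sccs prt hosts | awk 'BEGIN {print "<html><body><pre>"} {print} END {print "</pre></body></html>"}' > hosts-history.html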

The hidden agenda of using SCCS is accountability. You want to know who inflicted a change, and why, and under whose authority. A rigorous policy for attributing changes and accepting responsibility for their implementation and effects is fundamental to any robust, mission-critical environment.

Dan Geer, noted security expert and frequent speaker, tells the story of an investment bank executive who demanded a systems change to circumvent normal reporting and control code. The hole was later exploited to execute trades that violated various internal and external regulations. Who was responsible?

Tracing the changes from idea to deployment gives you the first measure of accountability. It's a good thing to have when you hear those warning bells.


Remote System Management Tool

alphaWorks Remote System Management Tool Overview

What is Remote System Management Tool?

Remote Server Management Tool is an Eclipse plug-in that provides an integrated graphical user interface (GUI) environment and enables testers to manage multiple remote servers simultaneously. The tool is designed as a management tool for those who would otherwise telnet to more than one server to manage the servers and who must look at different docs and man pages to find commands for different platforms in order to create or manage users and groups and to initiate and monitor processes. This tool handles these operations on remote servers by using a user-friendly GUI; in addition, it displays configuration of the test server (number of processors, RAM, etc.). The activities that can be managed by this tool on the remote and local server are divided as follows:

How does it work?

This Eclipse plug-in was written with the Standard Widget Toolkit (SWT). The tool has a perspective named Remote System Management; the perspective consists of test servers and a console view. The remote test servers are mounted in the Test Servers view for management of their resources (process, file system, and users or groups).

At the back end, this Eclipse plug-in uses the Software Test Automation Framework (STAF). STAF is an open-source framework that masks operating-system-specific details and provides common services and APIs for managing system resources. APIs are provided for most of the major languages. Along with the built-in services, STAF also supports external services; the Remote Server Management Tool ships with two STAF external services, one for user management and another for providing system details.
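For a feel of what STAF requests look like at the command line, here is a hedged sketch (the host name testsrv1 is made up, and the exact service syntax should be checked against the STAF documentation for the version you run):

 STAF local PING PING
 STAF testsrv1 PROCESS START SHELL COMMAND "uname -a" WAIT RETURNSTDOUT

The first request is the usual sanity check against the local STAF daemon; the second asks the PROCESS service on a remote test server to run a command and return its standard output.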
About the technology author(s):

Geetha Adinarayan is an advisory software specialist from IBM Software Labs, Bangalore, India. She has five years of experience in IBM messaging middleware products. Ms. Adinarayan holds a degree in information systems from BITS, Pilani, India; she is also a Certified Software Test Engineer and IBM Certified System Administrator for WebSphere Business Integration Message Broker 5. Currently, Ms. Adinarayan works with the High Performance On Demand Solutions (HiPODs) team in India. Her interests are in performance analysis of complex customer solutions and in autonomic computing.

Shashi K. Dalmia is a staff software engineer from IBM Software Labs, Bangalore, India. He has been with IBM for five years and in the IT field for a total of ten years. He has experience in application development, systems software, and messaging middleware. Mr. Dalmia holds a master's degree in software systems from BITS, Pilani, India, and he is an IBM Certified Systems Administrator for Websphere Business Integrator 2.1. Currently, he works on Websphere Business Integrator, Message Broker 6.0, with the Systems Test team in India. His interests include learning new technologies and creating tools to help ease the work of testers and developers.

Rahul Gupta is a computer science engineer from the National Institute Of Engineering, Mysore. He is skilled in the Software Test Automation Framework (STAF) and Eclipse plug-in development.

Sreenandan Iyengar is a computer science engineer from the National Institute Of Engineering, Mysore. He is skilled in the Software Test Automation Framework (STAF) and Eclipse plug-in development.


PIKT Intro: The Big Picture


HP SCR

scr+dmi - summary

The System Configuration Repository (SCR) captures and stores information about your system's configuration on request or at scheduled times. The Desktop Management Interface (DMI) operates between your management software and your system's components. The DMI standard gives technical support personnel, IT managers, and individual users a common path to access information about all aspects of a computer system.
Versions B.11.11.32, B.11.00.32, and B.10.20.32 of SCR+DMI for HP-UX are available for free download and use from this Web site; a CD containing the product can also be ordered.

InterWorks 99 Session 027 - Managing System Config Data

The System Configuration Repository (SCR) is an application that tracks changes in a system's configuration over time. SCR can take snapshots of system configuration information periodically or manually before and after major configuration changes. SCR provides tools to filter and compare snapshots from different times or from different machines.

The information that is stored in snapshots comes from DMI, and is stored in a database. Currently, the configuration information available through DMI includes system information such as devices, volume groups, file systems, kernel parameters, etc., and information about software products, including information such as bundles and filesets. (Developers can write their own DMI instrumentation in order to expand the information stored in SCR.)

SCR is highly configurable and can be used in many ways. For example, it can be used to maintain consistency on a single system or across systems, to recover a machine's configuration information after a disaster, or to keep test systems consistent with production systems.

Included in this presentation is an overview of SCR, future directions, and example scripts for how to use SCR most efficiently. In addition, we will be soliciting input on additional APIs and additional data coverage.

SCR+DMI for HP-UX

Etc

Depot -- a discontinued project

Host Factory

Working Version

Creating multiple, identical copies of a system can be hard work; it becomes even harder if patches and diffs need to be maintained. Multiply this by hundreds of computers ... and Unix sysadmins go crazy.

The Working Version company has created a system version control and distribution mechanism to manage entire installed system versions.

Safari

Infrastructure: A Prerequisite for Effective Security




