
Cron and Crontab commands (Vixie Cron)


Introduction

Cron is the standard scheduler for Unix servers. There are several cron implementations, and all follow the "fire and forget" principle: they cannot monitor job successes or failures, and they provide no dependency-based scheduling. The only feedback they offer is mailing the output to the user under which the job was invoked (typically root). Along with calendar-based invocation, cron has basic batch facilities (see batch command), but grid engine, which is also an open source component of Unix/Linux, provides a much more sophisticated and powerful batch environment. The batch command is actually an example of dependency-based scheduling: a job is executed only when the load of the server falls below the threshold specified for the atd daemon (1.5 by default). There are several queues that you can populate, and each queue can hold multiple jobs which are executed sequentially, one after another.  See batch command for more details.

The absence of dependency-based scheduling is less of a limitation than one might think. A simple status-based dependency mechanism is easy to implement in shell. In this scheme the start of each job depends on the existence of a particular "status file" that was created (or not, in case of failure) during some previous step. Such status files survive a reboot or crash of the server.
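
As a minimal sketch of this idea (the paths and job names here are purely illustrative assumptions):

# step 1 (e.g. nightly backup): create the status file only on success
/root/cronrun.d/backup_etc && touch /var/spool/jobstatus/backup_etc.ok

# step 2 (a dependent job): run only if step 1 left its status file behind
if [ ! -f /var/spool/jobstatus/backup_etc.ok ]; then
    echo "backup_etc did not complete; skipping replication" | mail -s "job skipped" root
    exit 1
fi
/root/cronrun.d/replicate_backup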

With the at command the same idea can be implemented by cancelling the execution of the next scheduled step from a central location using ssh. In this case ssh serves as the central scheduling daemon and the at daemon as the local agent. With the universal adoption of the ssh daemon, remote scheduling is as easy as local scheduling. For example, if a backup job fails it does not create the success file; each job that depends on it should then check for the existence of the file and exit if the file is not found. More generally, one can implement a script "envelope" -- a special script that creates the status file and sends messages at the beginning and at the end of each step to the monitoring system of your choice. Using at commands allows you to cancel, or move to a different time, all at jobs that depend on successful completion of the backup.
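
A rough sketch of the cancellation step (the host name is an assumption; atq and atrm are the standard at queue tools):

# on the mothership, after the backup failure is detected:
# drop every pending at job on the satellite so dependent steps never start
ssh satellite1 'for job in $(atq | awk "{print \$1}"); do atrm "$job"; done'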

Another weakness of cron is that its calendaring function is not sophisticated enough to understand national holidays, furloughs, closures, maintenance periods, plant shutdowns, etc. Avoiding running workload on holidays or specific days (e.g. an inventory day) is relatively easy to implement via a standard envelope which performs such checks. Additionally you can store all the commands with parameters in /root/cronrun.d or a similar directory and run the checks within them via functions or external scripts. You can also specify in the cron entry /usr/local/bin/run with appropriate parameters (the first parameter is the name of the command to run; the remaining parameters are passed to that command). For example:

@daily /usr/local/bin/run tar cvzf /var/bak/etc`date +%y%m%d`.tgz
@daily /root/cronrun/backup_image

You can also prefix the command with a "checking script" (let's name it canrun) and exit if it returns a "false" value. For example:

@daily /usr/local/bin/canrun && tar cvzf /var/bak/etc`date +%y%m%d`.tgz
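
A minimal version of such a canrun script might look like this (the holiday file and its format are assumptions, not a standard):

#!/bin/sh
# canrun -- return "false" (non-zero) if today is a holiday/closure day
HOLIDAYS=/etc/cron.holidays        # one date per line, format YYYY-MM-DD
today=$(date +%Y-%m-%d)
if [ -f "$HOLIDAYS" ] && grep -q "^$today$" "$HOLIDAYS"; then
    exit 1                         # the command after && will not run
fi
exit 0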

Another, more flexible, but slightly more complex way is to use the concept of a "scheduling day" with a particular start time and length (in some cases it is beneficial to limit the length of the scheduling day to just one shift, or to extend it to a 48-hour period). In this case a special "start of scheduling day" script filters the sequence of at commands scheduled for that day and then propagates this sequence to all servers that need such commands via ssh or some parallel command execution tool.  In the simplest case you can generate such a sequence using cron itself: each command to be run is prefixed with an echo command that generates the corresponding at command with the appropriate parameters and redirects the output to the "schedule of the day" file.   For example

@hourly echo 'echo "uptime" | at' `date +%H:%M` >> /root/schedule_of_the_day

This allows you to centralize all such checks on a single server -- the "mothership".

After this one time "feed" servers become autonomous and execute the generated sequence of jobs, providing built-in redundancy mechanism based on independence of local cron/at daemons from the scheduler on the "mothership" node. Failure of the central server (aka mothership) does not affect execution of jobs of satellite servers until the current scheduling day ends.

Due to daylight saving time changes it is prudent not to schedule anything important between midnight and 3 AM.


Linux and OpenSolaris use Vixie cron, which is somewhat richer in facilities than the traditional SysV cron daemon. Two important extensions are the slash notation for periodic execution ("*/5" means every five minutes) and "@" keywords such as @reboot, which allows you to specify scripts that will be executed after reboot. This is a simpler way than populating the /etc/init.d/local script, which serves the same purpose.
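
For example, the slash notation keeps periodic entries compact (the script path below is just an illustration):

*/5 * * * * /usr/local/bin/check_queue     # run every five minutes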

Apart from those two extensions, Vixie cron is compatible with SysV cron.

To determine if Vixie cron is installed, use the rpm -q cron command on Suse 10 and rpm -qi vixie-cron on Red Hat.

To determine if the service is running, you can use the command /sbin/service crond status. Cron is one of the few daemons that does not require a restart when the configuration is changed via the crontab command.

Vixie cron extensions include:

@reboot echo `hostname` was rebooted at `date` | mail -s "Reboot notification" joe.admin@some-corp.com

Other macros include @yearly (@annually), @monthly, @weekly, @daily (@midnight) and @hourly; see the Macros section below.

Other features of Vixie cron are pretty standard.

Linux distributions make cron somewhat more complex and flexible by splitting the crontab into several include directories with predefined names (cron.hourly, cron.daily, cron.weekly and cron.monthly) that are processed from the "central" crontab.

  1. /etc/crontab (Red Hat & Suse 10) -- the master crontab file which, like /etc/profile, is always executed first and contains several important settings, including the shell to be used, as well as the invocation of the script which processes the predefined directories:

    Implementations of this feature are different in Red Hat and Suse 10. See below for details of each implementation.

  2. /etc/cron.allow - list of users for which cron is allowed.  The files cron.allow and cron.deny can be used to control access to the crontab command (which serves for listing and editing of crontabs; direct access to the spool files is discouraged). Cron does not need to be restarted or sent a HUP signal to reread those files.
     
  3. /etc/cron.deny - list of users for which cron is denied. Note: if both cron.allow and cron.deny files exist the cron.deny is ignored.
     
  4. crontab files: there are multiple (one per user) crontab files which list tasks and their invocation conditions. All crontab files are stored in a read-protected directory, typically /var/spool/cron. These files are not edited directly; the special crontab command serves for listing and editing them.

/etc/crontab

In both Suse and Red Hat there is a master crontab file, /etc/crontab, which like /etc/profile is always executed first (or, to be more precise, acts as a hidden prefix of the "root" crontab file).

By default it contains several settings and the invocation of a script which runs every 15 minutes and processes the predefined directories (which represent the "Linux extension" of standard cron functionality).

Here is an example from Suse:

SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/sbin:/bin:/usr/lib/news/bin
MAILTO=root
#
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
#
-*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

Note: as already mentioned, the details of the implementation are different for Red Hat and Suse.

Red Hat implementation of /etc/crontab

In Red Hat the master crontab file /etc/crontab uses the /usr/bin/run-parts script to execute the content of the predefined directories. It contains the following lines:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

The bash script run-parts contains a for loop which executes all components of the corresponding directory one by one.  This is a pretty simple script, and you can modify it if you wish:

#!/bin/bash
# run-parts - concept taken from Debian
# keep going when something fails
set +e

if [ $# -lt 1 ]; then
        echo "Usage: run-parts <dir>"
        exit 1
fi

if [ ! -d $1 ]; then
        echo "Not a directory: $1"
        exit 1
fi

# Ignore *~ and *, scripts
for i in $1/*[^~,] ; do
        [ -d $i ] && continue
        # Don't run *.{rpmsave,rpmorig,rpmnew,swp} scripts
        [ "${i%.rpmsave}" != "${i}" ] && continue
        [ "${i%.rpmorig}" != "${i}" ] && continue
        [ "${i%.rpmnew}" != "${i}" ] && continue
        [ "${i%.swp}" != "${i}" ] && continue
        [ "${i%,v}" != "${i}" ] && continue

        if [ -x $i ]; then
                $i 2>&1 | awk -v "progname=$i" \
                              'progname {
                                   print progname ":\n"
                                   progname="";
                               }
                               { print; }'
        fi
done
exit 0

This extension allows adding cron jobs by simply writing a file containing an invocation line into the appropriate directory.  For example, by default the /etc/cron.daily directory contains:

# ll
total 48
-rwxr-xr-x 1 root root  379 Dec 18  2006 0anacron
lrwxrwxrwx 1 root root   39 Jul 24  2012 0logwatch -> /usr/share/logwatch/scripts/logwatch.pl
-rwxr-xr-x 1 root root  118 Jan 18  2012 cups
-rwxr-xr-x 1 root root  180 Mar 30  2011 logrotate
-rwxr-xr-x 1 root root  418 Mar 17  2011 makewhatis.cron
-rwxr-xr-x 1 root root  137 Mar 17  2009 mlocate.cron
-rwxr-xr-x 1 root root 2181 Jun 21  2006 prelink
-rwxr-xr-x 1 root root  296 Feb 29  2012 rpm
-rwxr-xr-x 1 root root  354 Aug  7  2010 tmpwatch
You can study those scripts to add your own fragments or rewrite them completely.
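
For instance, a minimal drop-in of your own (the name and contents are illustrative) could be added like this:

cat > /etc/cron.daily/backup_etc <<'EOF'
#!/bin/sh
# daily archive of /etc, kept in /var/bak
tar czf /var/bak/etc.$(date +%y%m%d).tgz /etc
EOF
chmod 755 /etc/cron.daily/backup_etc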

Invocation of tmpwatch from /etc/cron.daily

Another "standard" component of /etc/cron.daily  that deserves attention is a cleaning script for standard /tmp locations called tmpwatch:

cat tmpwatch
flags=-umc
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
        -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
        -X '/tmp/hsperfdata_*' 240 /tmp
/usr/sbin/tmpwatch "$flags" 720 /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch "$flags" -f 720 "$d"
    fi
done
The utility is also invoked from several other scripts. For example, the cups script removes temp files from the /var/spool/cups/tmp tree using the /usr/sbin/tmpwatch utility:
#!/bin/sh
for d in /var/spool/cups/tmp
do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch -f 720 "$d"
    fi
done
exit 0
If you do not use CUPS this script is redundant, and you can rename it to cups~ to exclude it from execution (run-parts skips file names ending in a tilde). You can also use it as a prototype for creating your own script(s) to clean temp directories of applications that you do have on the server.
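
A minimal adaptation for a hypothetical application spool area might look like this (the path and the 720-hour, i.e. 30-day, age are assumptions):

#!/bin/sh
# clean temp files of "myapp" that have not been accessed for 720 hours (30 days)
for d in /var/spool/myapp/tmp
do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch -f 720 "$d"
    fi
done
exit 0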

Suse implementation of /etc/crontab

Suse 10 and 11 have identical implementations of this facility.

There is also "master" crontab at /etc/crontab that like /etc/profile is always executed. But details and the script used (/usr/lib/cron/run-crons) are different from Red Hat implementation:

SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/sbin:/bin:/usr/lib/news/bin
MAILTO=root
#
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
#
-*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

Suse also uses the same predefined directories as Red Hat. Each of them can contain scripts which will be executed by the shell script /usr/lib/cron/run-crons. The latter is invoked every 15 minutes, as in Red Hat.  But the script itself is quite different and, as is typical for Suse, much more complex than the one used by Red Hat. To make things more confusing for sysadmins, at the beginning of its execution it sources the additional file /etc/sysconfig/cron, which contains additional settings that control the script's (and by extension cron's) behavior:

/home/bezroun # cat /etc/sysconfig/cron
## Path:        System/Cron/Man
## Description: cron configuration for man utility
## Type:        yesno
## Default:     yes
#
# Should mandb and whatis be recreated by cron.daily ("yes" or "no")
#
REINIT_MANDB="yes"

## Type:        yesno
## Default:     yes
#
# Should old preformatted man pages (in /var/cache/man) be deleted? (yes/no)
#
DELETE_OLD_CATMAN="yes"

## Type:        integer
## Default:     7
#
# How long should old preformatted man pages be kept before deletion? (days)
#
CATMAN_ATIME="7"
## Path:        System/Cron
## Description: days to keep old files in tmp-dirs, 0 to disable
## Type:        integer
## Default:     0
## Config:
#
# cron.daily can check for old files in tmp-dirs. It will delete all files
# not accessed for more than MAX_DAYS_IN_TMP. If MAX_DAYS_IN_TMP is not set
# or set to 0, this feature will be disabled.
#
MAX_DAYS_IN_TMP="0"

## Type:        integer
## Default:     0
#
# see MAX_DAYS_IN_TMP. This allows to specify another frequency for
# a second set of directories.
#
MAX_DAYS_IN_LONG_TMP="0"

## Type:        string
## Default:     "/tmp"
#
# This variable contains a list of directories, in which old files are to
# be searched and deleted. The frequency is determined by MAX_DAYS_IN_TMP
#
TMP_DIRS_TO_CLEAR="/tmp"

## Type:        string
## Default:     ""
#
# This variable contains a list of directories, in which old files are to
# be searched and deleted. The frequency is determined by MAX_DAYS_IN_LONG_TMP
# If cleaning of /var/tmp is wanted add it here.
#
LONG_TMP_DIRS_TO_CLEAR=""

## Type:        string
## Default:     root
#
# In OWNER_TO_KEEP_IN_TMP, you can specify, whose files shall not be deleted.
#
OWNER_TO_KEEP_IN_TMP="root"

## Type:        string
## Default:     no
#
# "Set this to "yes" to entirely remove (rm -rf) all  files and subdirectories
# from the temporary directories defined in TMP_DIRS_TO_CLEAR on bootup.
# Please note, that this feature ignores OWNER_TO_KEEP_IN_TMP - all files will
# be removed without exception."
#
# If this is set to a list of directories (i.e. starts with a "/"), these
# directories will be cleared instead of those listed in TMP_DIRS_TO_CLEAR.
# This can be used to clear directories at boot as well as clearing unused
# files out of other directories.
#
CLEAR_TMP_DIRS_AT_BOOTUP="no"

## Type:         string
## Default:      ""
#
# At which time cron.daily should start. Default is 15 minutes after booting
# the system. Example setting would be "14:00".
# Due to the fact that cron script runs only every 15 minutes,
# it will only run on xx:00, xx:15, xx:30, xx:45, not at the accurate time
# you set.
DAILY_TIME=""

## Type:         integer
## Default:      5
#
# Maximum days not running when using a fixed time set in DAILY_TIME.
# 0 to skip this. This is for users who will power off their system.
#
# There is a fixed max. of 14 days set,  if you want to override this
# change MAX_NOT_RUN_FORCE in /usr/lib/cron/run-crons
MAX_NOT_RUN="5"

## Type:        yesno
## Default:     no
#
# send status email even if all scripts in
# cron.{hourly,daily,weekly,monthly}
# returned without error? (yes/no)
#
SEND_MAIL_ON_NO_ERROR="no"

## Type:        yesno
## Default:     no
#
# generate syslog message for all scripts in
# cron.{hourly,daily,weekly,monthly}
# even if they haven't returned an error? (yes/no)
#
SYSLOG_ON_NO_ERROR="yes"

## Type:       yesno
## Default:    no
#
# send email containing output from all successful jobs in
# cron.{hourly,daily,weekly,monthly}. Output from failed
# jobs is always sent. If SEND_MAIL_ON_NO_ERROR is yes, this
# setting is ignored.  (yes/no)
#
SEND_OUTPUT_ON_NO_ERROR="no"

From the content of the script you can see, for example, that the time of execution of scripts in /etc/cron.daily is controlled by the variable DAILY_TIME, which can be set in the /etc/sysconfig/cron system file. This way you can run daily jobs in a time slot different from hourly, weekly and monthly jobs.
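
For example, to start the daily scripts at about 03:30 instead of 15 minutes after boot, you could set in /etc/sysconfig/cron (the time itself is an arbitrary illustration; run-crons only checks every 15 minutes, so it lands on a quarter-hour slot):

DAILY_TIME="03:30"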

This is an example of unnecessary complexity dictated by the desire to create a more flexible environment, but which just confuses most sysadmins, who neither appreciate nor use the provided facilities.

As you can see from the content of the file listed above, considerable attention in /etc/sysconfig/cron is devoted to the problem of deleting temp files. This is an interesting problem in the sense that you cannot simply delete temporary files on a running system. Still, you can use an init task that runs the tmpwatch utility before any application daemons are started.

Editing cron files with crontab command

NOTE: Creating a backup file before any non-trivial crontab modification is a must.

To list and edit a cron file one should use the crontab command, which copies the specified file (or standard input if no file is specified) into the directory that holds all users' crontabs. With the option -e it can also invoke an editor to edit the existing crontab.

crontab Command Switches

If the option -u is not specified, crontab operates on the crontab file of the current user.
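
Typical invocations of the standard switches look like this:

crontab -l                  # list the current user's crontab
crontab -e                  # edit it with $EDITOR
crontab mycrontab.txt       # replace it with the content of mycrontab.txt
crontab -r                  # remove it (dangerous -- see the backup section below)
crontab -l -u oracle        # as root: list the crontab of user oracle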

There are two man pages for crontab: crontab(1) for the command and crontab(5) for the file format. You can view them in a WWW browser at die.net.

Users are permitted to use crontab, if their names appear in the file /etc/cron.allow. If that file does not exist, the file /etc/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only a process with appropriate privileges is allowed to submit a job. If only /etc/cron.deny exists and is empty, global usage is permitted. The cron.allow and cron.deny files consist of one user name per line.

See Controlling access to cron for more info.

Crontab structure

A crontab file can contain two types of instructions to the cron daemon: environment settings and cron command lines.

Each user has their own crontab, and commands in any given crontab will be executed as the user who owns the crontab.

Blank lines and leading spaces and tabs are ignored. Lines whose first non-space character is a pound-sign (#) are comments, and are ignored. Note that comments are not allowed on the same line as cron commands, since they will be taken to be part of the command. Similarly, comments are not allowed on the same line as environment variable settings.

An active line in a crontab can be either an environment setting line or a cron command line.

  1. An environment setting should be in the form name = value.  The spaces around the equal sign (=) are optional, and any subsequent non-leading spaces in value will be interpreted as the value assigned to name.

    The value string may be placed in quotes (single or double, but matching) to preserve leading or trailing blanks. The name string may also be placed in quotes (single or double, but matching).

    NOTE: Several environment variables should be set via /etc/crontab, which we discussed above (SHELL, PATH, MAILTO). Actually, given the existence of /etc/crontab, setting environment variables via a user crontab is usually redundant and should be avoided.
     

  2. Each cron command is a single line that consists of six fields. One line of the cron table specifies one cron job -- a specific task that runs at given minutes, hours, days of the month, months, or days of the week. For example, you can use a cron job to automate a daily MySQL database backup. The main problem with cron jobs is that if they aren't properly configured and all start at the same time, they can cause high server loads. So it is prudent to use different start times for hourly and daily jobs. For important jobs which run once a day it makes sense to configure the cron job so that the results of running the scheduled script are emailed to you, not to root under which the job was run.

    There are two main ways to create a cron job. One is using your Web administration panel (most *nix webhosting providers offer a Web-based interface to cron); the other is using shell access to your server.

    A crontab expression is a string comprising 6 or 7 (with year) fields separated by white space. It has the structure shown below:

    Field 1 -- Minutes after the selected hour (allowed range 0-59; special characters * / , -). Example: 30 means 30 minutes after the selected hour.
    Field 2 -- Hour at which the task has to be executed (allowed range 0-23; special characters * / , -). Example: 04 means at 4 o'clock in the morning.
    Field 3 -- Day of the month on which the task has to be executed (allowed range 1-31; special characters * / , -). Example: * means every day of the selected month.
    Field 4 -- Month during which the task has to be executed (allowed range 1-12, or the first 3 letters of the month name, case-insensitive, e.g. Jan; special characters * / , -). Example: 3-5 means run the task in March, April and May.
    Field 5 -- Day of the week on which the task has to be run (allowed range 0-7, where 0 or 7 is Sun and 1 is Mon, or the first 3 letters of the day name, case-insensitive, e.g. Sun or sun; special characters * / , -). Example: * means all days of the week.
    Field 6 -- The command (task) to be executed: any program, given by the absolute path to the executable. The % character has a special meaning here (see below).

Special characters

Support for each special character depends on specific distributions and versions of cron

Asterisk ( * )
The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 4th field (month) would indicate every month.
Slash ( / )
Slashes are used to describe increments of ranges. For example 3-59/15 in the 1st field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field.
Percent ( % )
Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
Comma ( , )
Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 5th field (day of week) would mean Mondays, Wednesdays and Fridays.
Hyphen ( - )
Hyphens are used to define ranges. For example, 2000-2010 would indicate every year between 2000 and 2010 CE inclusive.

Crontab specification tips

The last field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or a % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab.

Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.

Note: The day of a command's execution can be specified by two fields: day of month and day of week. If both fields are restricted (i.e., aren't *), the command will be run when either field matches the current time. For example, "30 4 1,15 * 5" would cause a command to be run at 4:30 am on the 1st and 15th of each month, plus every Friday.

Recommended header for crontab

To avoid mistakes it is recommended to include the following header in the crontab

# minute (0-59),
# |      hour (0-23),
# |      |       day of the month (1-31),
# |      |       |       month of the year (1-12),
# |      |       |       |       day of the week (0-6 with 0=Sunday, 1=Monday).
# |      |       |       |       |       commands

Macros

Instead of the first five fields, in vixie-cron you can use one of eight predefined macros:

string meaning
------ -------
@reboot Run once, at startup.
@yearly Run once a year, "0 0 1 1 *".
@annually (same as @yearly)
@monthly Run once a month, "0 0 1 * *".
@weekly Run once a week, "0 0 * * 0".
@daily Run once a day, "0 0 * * *".
@midnight (same as @daily)
@hourly Run once an hour, "0 * * * *".

The most useful is @reboot, which provides genuinely new functionality. The usefulness of the other macros is questionable.

Note about daylight saving time

If you are in one of the countries that observe daylight saving time, jobs scheduled during the rollback or advance will be affected. In general, it is not a good idea to schedule jobs between 1 AM and 3 AM.

For US timezones (except parts of IN, AZ, and HI) the time shift occurs at 2AM local time. For others, the output of the zdump(8) program's verbose (-v) option can be used to determine the moment of time shift.
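
For example (the time zone name is only an illustration), you can list the transition moments for a given year with:

zdump -v America/New_York | grep 2017

Each pair of lines around a change of the isdst flag marks the moment the clock jumps.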

Controlling access to cron

Cron has a built-in feature allowing you to specify who may, and who may not, use it. It does this by means of the /etc/cron.allow and /etc/cron.deny files:

  1. /etc/cron.allow - list of users for which cron is allowed.  The files /etc/cron.allow and /etc/cron.deny can be used to control access to the crontab command (which serves for listing and editing of crontabs; direct access to the spool files is discouraged). Cron does not need to be restarted or sent a HUP signal to reread those files.
  2. /etc/cron.deny - list of users for which cron is denied.

Note: if both cron.allow and cron.deny files exist the cron.deny is ignored.

If you want only selected users to be able to use cron, add the line ALL to the cron.deny file and put the list of those users into the cron.allow file:

echo ALL >>/etc/cron.deny
If you want user apache to be able to use cron you need to add the appropriate line to /etc/cron.allow file. For example:
echo apache >>/etc/cron.allow

If there is neither a cron.allow nor a cron.deny file, then the use of cron is unrestricted (i.e. every user can use it). If you put a name (or several names) into cron.allow file, without creating a cron.deny file, it would have the same effect as creating a cron.deny file with ALL in it. This means that any subsequent users that require cron access should be put in to the cron.allow file.

For more information about cron.allow and cron.deny file see Reference item cron.allow and cron.deny

Output from cron

By default the output from cron gets mailed to the person specified in the MAILTO variable. If this variable is not defined, it is mailed to the owner of the process.  If you want to mail the output to someone else, you can just pipe the output to the mail command. For example:
echo test | mail -s "Test of mail from cron" joe.user@firm.com

If you have a command that is run often, and you don't want to be emailed the output every time, you can redirect the output to a log file (or /dev/null, if you really don't want the output).  For example

cmd >> log.file

Now you can create a separate cron job that analyses the log file and mails you only if there is an important message, or just once a day if the script is not crucial. This way you can also organize log rotation.  See logrotate for details.
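
A minimal sketch of such a follow-up job (the log name and the notion of what counts as "important" are assumptions):

# once a day at 07:00: mail only the error lines from the log, if there are any
0 7 * * * errs=$(grep -iE 'error|fail' /var/log/myjob.log); [ -n "$errs" ] && echo "$errs" | mail -s "myjob errors" joe.user@firm.com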

Backup and restore

The loss of a crontab is serious trouble. This is one of the typical sysadmin blunders (Crontab file - The UNIX and Linux Forums):

mradsus

Hi All,
I created a crontab entry in a cron.txt file accidentally entered

crontab cron.txt.

Now my previous crontab -l entries are not showing up, that means i removed the scheduling of the previous jobs by running this command "crontab cron.txt"

How do I revert back to previously schedule jobs.
Please help. this is urgent.,
Thanks.

In this case, if you do not have a backup, your only remedy is to try to extract the cron commands from the system logs (/var/log/cron on Red Hat, or /var/log/messages, depending on the syslog configuration).
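
A rough way to pull the command lines back out of the log (this assumes the Red Hat style /var/log/cron format with entries ending in "CMD (command)"):

grep 'CMD' /var/log/cron | sed 's/.*CMD (\(.*\))$/\1/' | sort -u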

For classic cron, backup and restore are simple. You just redirect the output of crontab -l to a backup file, or feed the backup file back to the crontab command.

To backup the current crontab settings for current user:
crontab -l >  `whoami`.crontab.`date +%y%m%d`

Or from root:

crontab -l -u USERNAME > USERNAME.crontab`date +%y%m%d`

If we have a set of files named, for example, root.crontab131010, root.crontab131011, root.crontab131012, you can restore the content of the most recent backup of the crontab using the command:

cat $(ls -tr $(whoami).crontab* | tail -1) | crontab -u $(whoami) -

The "-" specifies that crontab will use the standard input.

With the Suse and RHEL structure the /etc/cron.d directory should be backed up too. Remember that crontab data can exist for each user of the system, not just root.  For example, the user oracle often has crontab entries.

You can delete older files and back up new ones on a continuing basis. Here are some hints for the implementation (How to backup and restore crontab, Andres Montalban):

...I had to come up with a solution for a customer using AWS/EC2 to make their crontab resilient to server re-builds in case it goes down by autoscaler or other situations. That’s why I created a script that is called daily that backups the crontab of a specific user to a file in an EBS mounted volume maintaining a 7 days library just in case:

#!/bin/bash
find /data -maxdepth 1 -name 'bkp-crontab-*' -mtime +7 -exec rm {} \;
crontab -l -u YOUR_USER_GOES_HERE > /data/bkp-crontab-`date +%Y-%m-%d`

Then to recover the last backup of crontab for the user you can put this in your server script when you are building it:

cat `ls -tr /data/bkp-crontab-* | tail -1` | crontab -u YOUR_USER_GOES_HERE -

This will load the last backup file in the users crontab.

I hope this helps you to have your crontabs backed up

You can also operate directly on the files in /var/spool/cron/tabs (Suse), /var/spool/cron (Red Hat) and /var/spool/cron/crontabs (classic Unixes).

 




Old News ;-)

[Nov 01, 2017] Cron best practices by Tom Ryder

May 08, 2016 | sanctum.geek.nz

The time-based job scheduler cron(8) has been around since Version 7 Unix, and its crontab(5) syntax is familiar even for people who don't do much Unix system administration. It's standardised , reasonably flexible, simple to configure, and works reliably, and so it's trusted by both system packages and users to manage many important tasks.

However, like many older Unix tools, cron(8) 's simplicity has a drawback: it relies upon the user to know some detail of how it works, and to correctly implement any other safety checking behaviour around it. Specifically, all it does is try and run the job at an appropriate time, and email the output. For simple and unimportant per-user jobs, that may be just fine, but for more crucial system tasks it's worthwhile to wrap a little extra infrastructure around it and the tasks it calls.

There are a few ways to make the way you use cron(8) more robust if you're in a situation where keeping track of the running job is desirable.

Apply the principle of least privilege

The sixth column of a system crontab(5) file is the username of the user as which the task should run:

0 * * * *  root  cron-task

To the extent that is practical, you should run the task as a user with only the privileges it needs to run, and nothing else. This can sometimes make it worthwhile to create a dedicated system user purely for running scheduled tasks relevant to your application.

0 * * * *  myappcron  cron-task

This is not just for security reasons, although those are good ones; it helps protect you against nasties like scripting errors attempting to remove entire system directories .

Similarly, for tasks with database systems such as MySQL, don't use the administrative root user if you can avoid it; instead, use or even create a dedicated user with a unique random password stored in a locked-down ~/.my.cnf file, with only the needed permissions. For a MySQL backup task, for example, only a few permissions should be required, including SELECT , SHOW VIEW , and LOCK TABLES .
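
As a hedged illustration (the user, password and database names below are made up), such a limited backup account could be created with:

mysql -u root -p -e "CREATE USER 'bkpuser'@'localhost' IDENTIFIED BY 'long-random-password';
    GRANT SELECT, SHOW VIEW, LOCK TABLES ON mydb.* TO 'bkpuser'@'localhost';"

The password would then go into a mode 600 ~/.my.cnf of the dedicated user, so it never appears on the command line.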

In some cases, of course, you really will need to be root . In particularly sensitive contexts you might even consider using sudo(8) with appropriate NOPASSWD options, to allow the dedicated user to run only the appropriate tasks as root , and nothing else.

Test the tasks

Before placing a task in a crontab(5) file, you should test it on the command line, as the user configured to run the task and with the appropriate environment set. If you're going to run the task as root , use something like su or sudo -i to get a root shell with the user's expected environment first:

$ sudo -i -u cronuser
$ cron-task

Once the task works on the command line, place it in the crontab(5) file with the timing settings modified to run the task a few minutes later, and then watch /var/log/syslog with tail -f to check that the task actually runs without errors, and that the task itself completes properly:

May  7 13:30:01 yourhost CRON[20249]: (you) CMD (cron-task)

This may seem pedantic at first, but it becomes routine very quickly, and it saves a lot of hassles down the line as it's very easy to make an assumption about something in your environment that doesn't actually hold in the one that cron(8) will use. It's also a necessary acid test to make sure that your crontab(5) file is well-formed, as some implementations of cron(8) will refuse to load the entire file if one of the lines is malformed.

If necessary, you can set arbitrary environment variables for the tasks at the top of the file:

MYVAR=myvalue

0 * * * *  you  cron-task
Don't throw away errors or useful output

You've probably seen tutorials on the web where in order to keep the crontab(5) job from sending standard output and/or standard error emails every five minutes, shell redirection operators are included at the end of the job specification to discard both the standard output and standard error. This kluge is particularly common for running web development tasks by automating a request to a URL with curl(1) or wget(1) :

*/5 * * * *  root  curl https://example.com/cron.php >/dev/null 2>&1

Ignoring the output completely is generally not a good idea, because unless you have other tasks or monitoring ensuring the job does its work, you won't notice problems (or know what they are), when the job emits output or errors that you actually care about.

In the case of curl(1) , there are just way too many things that could go wrong, that you might notice far too late:

The author has seen all of the above happen, in some cases very frequently.

As a general policy, it's worth taking the time to read the manual page of the task you're calling, and to look for ways to correctly control its output so that it emits only the output you actually want. In the case of curl(1) , for example, I've found the following formula works well:

curl -fLsS -o /dev/null http://example.com/

This way, the curl(1) request should stay silent if everything is well, per the old Unix philosophy Rule of Silence .

You may not agree with some of the choices above; you might think it important to e.g. log the complete output of the returned page, or to fail rather than silently accept a 301 redirect, or you might prefer to use wget(1) . The point is that you take the time to understand in more depth what the called program will actually emit under what circumstances, and make it match your requirements as closely as possible, rather than blindly discarding all the output and (worse) the errors. Work with Murphy's law ; assume that anything that can go wrong eventually will.

Send the output somewhere useful

Another common mistake is failing to set a useful MAILTO at the top of the crontab(5) file, as the specified destination for any output and errors from the tasks. cron(8) uses the system mail implementation to send its messages, and typically, default configurations for mail agents will simply send the message to an mbox file in /var/mail/$USER , that they may not ever read. This defeats much of the point of mailing output and errors.

This is easily dealt with, though; ensure that you can send a message to an address you actually do check from the server, perhaps using mail(1) :

$ printf '%s\n' 'Test message' | mail -s 'Test subject' you@example.com

Once you've verified that your mail agent is correctly configured and that the mail arrives in your inbox, set the address in a MAILTO variable at the top of your file:

MAILTO=you@example.com

0 * * * *    you  cron-task-1
*/5 * * * *  you  cron-task-2

If you don't want to use email for routine output, another method that works is sending the output to syslog with a tool like logger(1) :

0 * * * *   you  cron-task | logger -it cron-task

Alternatively, you can configure aliases on your system to forward system mail destined for you on to an address you check. For Postfix, you'd use an aliases(5) file.
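
For example (the address is an assumption), a line like the following in /etc/aliases, followed by running newaliases, forwards that mail:

you: you@example.com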

I sometimes use this setup in cases where the task is expected to emit a few lines of output which might be useful for later review, but send stderr output via MAILTO as normal. If you'd rather not use syslog , perhaps because the output is high in volume and/or frequency, you can always set up a log file /var/log/cron-task.log but don't forget to add a logrotate(8) rule for it!

Put the tasks in their own shell script file

Ideally, the commands in your crontab(5) definitions should only be a few words, in one or two commands. If the command is running off the screen, it's likely too long to be in the crontab(5) file, and you should instead put it into its own script. This is a particularly good idea if you want to reliably use features of bash or some other shell besides POSIX/Bourne /bin/sh for your commands, or even a scripting language like Awk or Perl; by default, cron(8) uses the system's /bin/sh implementation for parsing the commands.

Because crontab(5) files don't allow multi-line commands, and have other gotchas like the need to escape percent signs % with backslashes, keeping as much configuration out of the actual crontab(5) file as you can is generally a good idea.

If you're running cron(8) tasks as a non-system user, and can't add scripts into a system bindir like /usr/local/bin , a tidy method is to start your own, and include a reference to it as part of your PATH . I favour ~/.local/bin , and have seen references to ~/bin as well. Save the script in ~/.local/bin/cron-task , make it executable with chmod +x , and include the directory in the PATH environment definition at the top of the file:

PATH=/home/you/.local/bin:/usr/local/bin:/usr/bin:/bin
MAILTO=you@example.com

0 * * * *  you  cron-task

Having your own directory with custom scripts for your own purposes has a host of other benefits, but that's another article

Avoid /etc/crontab

If your implementation of cron(8) supports it, rather than having an /etc/crontab file a mile long, you can put tasks into separate files in /etc/cron.d :

$ ls /etc/cron.d
system-a
system-b
raid-maint

This approach allows you to group the configuration files meaningfully, so that you and other administrators can find the appropriate tasks more easily; it also allows you to make some files editable by some users and not others, and reduces the chance of edit conflicts. Using sudoedit(8) helps here too. Another advantage is that it works better with version control; if I start collecting more than a few of these task files or to update them more often than every few months, I start a Git repository to track them:

$ cd /etc/cron.d
$ sudo git init
$ sudo git add --all
$ sudo git commit -m "First commit"

If you're editing a crontab(5) file for tasks related only to the individual user, use the crontab(1) tool; you can edit your own crontab(5) by typing crontab -e , which will open your $EDITOR to edit a temporary file that will be installed on exit. This will save the files into a dedicated directory, which on my system is /var/spool/cron/crontabs .

On the systems maintained by the author, it's quite normal for /etc/crontab never to change from its packaged template.

Include a timeout

cron(8) will normally allow a task to run indefinitely, so if this is not desirable, you should consider either using options of the program you're calling to implement a timeout, or including one in the script. If there's no option for the command itself, the timeout(1) command wrapper in coreutils is one possible way of implementing this:

0 * * * *  you  timeout 10s cron-task

Greg's wiki has some further suggestions on ways to implement timeouts .

Include file locking to prevent overruns

cron(8) will start a new process regardless of whether its previous runs have completed, so if you wish to avoid overlapping runs of a long-running task, on GNU/Linux you could use the flock(1) wrapper for the flock(2) system call to set an exclusive lockfile, in order to prevent the task from running more than one instance in parallel.

0 * * * *  you  flock -nx /var/lock/cron-task cron-task

Greg's wiki has some more in-depth discussion of the file locking problem for scripts in a general sense, including important information about the caveats of "rolling your own" when flock(1) is not available.

If it's important that your tasks run in a certain order, consider whether it's necessary to have them in separate tasks at all; it may be easier to guarantee they're run sequentially by collecting them in a single shell script.

Do something useful with exit statuses

If your cron(8) task or commands within its script exit non-zero, it can be useful to run commands that handle the failure appropriately, including cleanup of appropriate resources, and sending information to monitoring tools about the current status of the job. If you're using Nagios Core or one of its derivatives, you could consider using send_nsca to send passive checks reporting the status of jobs to your monitoring server. I've written a simple script called nscaw to do this for me:

0 * * * *  you  nscaw CRON_TASK -- cron-task
Consider alternatives to cron(8)

If your machine isn't always on and your task doesn't need to run at a specific time, but rather needs to run once daily or weekly, you can install anacron and drop scripts into the cron.hourly , cron.daily , cron.monthly , and cron.weekly directories in /etc , as appropriate. Note that on Debian and Ubuntu GNU/Linux systems, the default /etc/crontab contains hooks that run these, but they run only if anacron(8) is not installed.

If you're using cron(8) to poll a directory for changes and run a script if there are such changes, on GNU/Linux you could consider using a daemon based on inotifywait(1) instead.

Finally, if you require more advanced control over when and how your task runs than cron(8) can provide, you could perhaps consider writing a daemon to run on the server consistently and fork processes for its task. This would allow running a task more often than once a minute, as an example. Don't get too bogged down into thinking that cron(8) is your only option for any kind of asynchronous task management!

[Oct 31, 2017] Bash job control by Tom Ryder

Jan 31, 2012 | sanctum.geek.nz

Oftentimes you may wish to start a process on the Bash shell without having to wait for it to actually complete, but still be notified when it does. Similarly, it may be helpful to temporarily stop a task while it's running without actually quitting it, so that you can do other things with the terminal. For these kinds of tasks, Bash's built-in job control is very useful.

Backgrounding processes

If you have a process that you expect to take a long time, such as a long cp or scp operation, you can start it in the background of your current shell by adding an ampersand to it as a suffix:

$ cp -r /mnt/bigdir /home &
[1] 2305

This will start the copy operation as a child process of your bash instance, but will return you to the prompt to enter any other commands you might want to run while that's going.

The output from this command shown above gives both the job number of 1, and the process ID of the new task, 2305. You can view the list of jobs for the current shell with the builtin jobs :

$ jobs
[1]+  Running  cp -r /mnt/bigdir /home &

If the job finishes or otherwise terminates while it's backgrounded, you should see a message in the terminal the next time you update it with a newline:

[1]+  Done  cp -r /mnt/bigdir /home &
Foregrounding processes

If you want to return a job in the background to the foreground, you can type fg :

$ fg
cp -r /mnt/bigdir /home &

If you have more than one job backgrounded, you should specify the particular job to bring to the foreground with a parameter to fg :

$ fg %1

In this case, for shorthand, you can optionally omit fg and it will work just the same:

$ %1
Suspending processes

To temporarily suspend a process, you can press Ctrl+Z:

$ cp -r /mnt/bigdir /home
^Z
[1]+  Stopped  cp -r /mnt/bigdir /home

You can then continue it in the foreground or background with fg %1 or bg %1 respectively, as above.

This is particularly useful while in a text editor; instead of quitting the editor to get back to a shell, or dropping into a subshell from it, you can suspend it temporarily and return to it with fg once you're ready.

Dealing with output

While a job is running in the background, it may still print its standard output and standard error streams to your terminal. You can head this off by redirecting both streams to /dev/null for verbose commands:

$ cp -rv /mnt/bigdir /home &>/dev/null

However, if the output of the task is actually of interest to you, this may be a case where you should fire up another terminal emulator, perhaps in GNU Screen or tmux , rather than using simple job control.

Suspending SSH sessions

As a special case, you can suspend an SSH session using an SSH escape sequence . Type a newline followed by a ~ character, and finally press Ctrl+Z to background your SSH session and return to the terminal from which you invoked it.

tom@conan:~$ ssh crom
tom@crom:~$ ~^Z [suspend ssh]
[1]+  Stopped  ssh crom
tom@conan:~$

You can then resume it as you would any job by typing fg :

tom@conan:~$ fg %1
ssh crom
tom@crom:~$

[Sep 29, 2017] Writing Recurring Scripts

...The crontab (chronological table) command maintains a list of jobs for cron to execute. Each user has his or her own crontab table. The -l (list) switch lists currently scheduled tasks. Linux reports an error if you don't have permission to use cron. Because jobs are added or removed from the crontab table as a group, always start with the -l switch, saving the current table to a file.

$ crontab -l > cron.txt
				

After the current table is saved, the file can be edited. There are five columns for specifying the times when a program is to run: The minute, hour, day, month, and the day of the week. Unused columns are marked with an asterisk, indicating any appropriate time.

Times are represented in a variety of formats: Individually (1), comma-separated lists (1,15), ranges (0-6, 9-17), and ranges with step values (1-31/2). Names can be used for months or days of the week.

The final column contains the name of the command to execute. The following line runs a script called cleanup.sh at 1:00 AM every morning.

*       1       *       *       *       /home/kburtch/cleanup.sh

Environment variables can also be initialized in the crontab. When a shell script is started by cron, it is not started from a login session and none of the profile files are executed. Only a handful of variables are defined: PWD, HOSTNAME, MACHTYPE, LOGNAME, SHLVL, SHELL, HOSTTYPE, OSTYPE, HOME, TERM, and PATH. You have to explicitly set any other values in the script or in the crontab list.

PATH is defined as only /usr/bin:/bin. Other paths are normally added by profile files and so are unavailable.
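
For example, a crontab that extends PATH before its entries might look like this (the directory and script names are assumptions):

PATH=/usr/local/bin:/usr/bin:/bin

0 2 * * * nightly_report.sh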

Because a script running under cron is not in a login session, there is no screen to write standard output to. Anything that is normally written to standard output is instead captured by cron and mailed to the account owning the cron script. The mail has the unhelpful subject line of cron. Even printing a blank line results in a seemingly empty email being sent. For this reason, scripts designed to run under cron should either write their output to a log file, or should create and forward their own email with a meaningful subject line. It is common practice to write a wrapper script to capture the output from the script doing the actual work.

#!/bin/bash
#
# show_users.sh: show all users in the database table "users"

shopt -s -o nounset

declare -rx SCRIPT=${0##*/}
declare -r SQL_CMDS="sort_inventory.sql"
declare -rx ON_ERROR_STOP

if [ ! -r "$SQL_CMDS" ] ; then
   printf "$SCRIPT: the SQL script $SQL_CMDS doesn't exist or is not \
 readable" >&2
   exit 192
fi

RESULTS=`psql --user gordon --dbname custinfo --quiet --no-align --tuples-only \
 --field-separator "," --file "$SQL_CMDS"`
if [ $? -ne 0 ] ; then
   printf "$SCRIPT: SQL statements failed." >&2
   exit 192
fi 


					  
#!/bin/bash
# show_users_wrapper.sh - show_users.sh wrapper script

shopt -s -o nounset

declare -rx SCRIPT=${0##*/}
declare -rx USER="kburtch"
declare -rx mail="/bin/mail"
declare -rx OUTPUT=`mktemp /tmp/script_out.XXXXXX`
declare -rx SCRIPT2RUN="./show_users.sh"

# sanity checks

if test ! -x "$mail" ; then
   printf "$SCRIPT:$LINENO: the command $mail is not available - aborting" >&2
   exit 1
fi

if test ! -x "$SCRIPT2RUN" ; then
   printf "$SCRIPT: $LINENO: the command $SCRIPT2RUN is not available\
 - aborting" >&2
   exit 1
fi

# record the date for any errors, and create the OUTPUT file

date > $OUTPUT

# run the script

$SCRIPT2RUN > "$OUTPUT" 2>&1

# mail errors to USER

if [ $? -ne 0 ] ; then
   $mail -s "$SCRIPT2RUN failed" "$USER" < "$OUTPUT"
fi

# cleanup

rm "$OUTPUT"
exit 0
				

					  

[Apr 17, 2014] CronHowto - Community Help Wiki

It is possible to run gui applications via cronjobs. This can be done by telling cron which display to use.

00 06 * * * env DISPLAY=:0 gui_appname

The env DISPLAY=:0 portion will tell cron to use the current display (desktop) for the program "gui_appname".

And if you have multiple monitors, don't forget to specify on which one the program is to be run. For example, to run it on the first screen (default screen) use :

00 06 * * * env DISPLAY=:0.0 gui_appname

The env DISPLAY=:0.0 portion will tell cron to use the first screen of the current display for the program "gui_appname".

Note: GUI users may prefer to use gnome-schedule (aka "Scheduled tasks") to configure GUI cron jobs. In gnome-schedule, when editing a GUI task, you have to select "X application" in a dropdown next to the command field.

Note: In Karmic(9.10), you have to enable X ACL for localhost to connect to for GUI applications to work.

 ~$ xhost +local:
non-network local connections being added to access control list
 ~$ xhost
access control enabled, only authorized clients can connect
LOCAL:
...

Tips

crontab -e uses the EDITOR environment variable. To change the editor to your own choice just set that variable. You may want to set EDITOR in your .bashrc because many commands use this variable. Let's set the EDITOR to nano, a very easy editor to use:

export EDITOR=nano

There are also files you can edit for system-wide cron jobs. The most common file is located at /etc/crontab, and this file follows a slightly different syntax than a normal crontab file. Since it is the base crontab that applies system-wide, you need to specify what user to run the job as; thus, the syntax is now:

minute(s) hour(s) day(s)_of_month month(s) day(s)_of_week user command

It is recommended, however, that you try to avoid using /etc/crontab unless you need the flexibility offered by it, or if you'd like to create your own simplified anacron-like system using run-parts for example. For all cron jobs that you want to have run under your own user account, you should stick with using crontab -e to edit your local cron jobs rather than editing the system-wide /etc/crontab.

Crontab Example

Below is an example of how to setup a crontab to run updatedb, which updates the slocate database: Open a term, type "crontab -e" (without the double quotes) and press enter. Type the following line, substituting the full path of the application you wish to run for the one shown below, into the editor:

45 04 * * * /usr/bin/updatedb

Save your changes and exit the editor.

Crontab will let you know if you made any mistakes. The crontab will be installed and begin running if there are no errors. That's it. You now have a cronjob setup to run updatedb, which updates the slocate database, every morning at 4:45.

Note: The double-ampersand (&&) can also be used in the "command" section to run multiple commands consecutively, but only if the previous command exits successfully. A string of commands joined by the double-ampersand will only get to the last command if all the previous commands are run successfully. If exit error-checking is not of a concern, string commands together, separated with a semi-colon (;)

45 04 * * * /usr/sbin/chkrootkit && /usr/bin/updatedb

The above example will run chkrootkit followed by updatedb at 4:45am daily - providing you have all listed apps installed. If chkrootkit fails, updatedb will NOT be run.

[Oct 17, 2013] Crontab file

The UNIX and Linux Forums

mradsus

Hi All,
I created a crontab entry in a cron.txt file accidentally entered

crontab cron.txt.

Now my previous crontab -l entries are not showing up, that means i removed the scheduling of the previous jobs by running this command "crontab cron.txt"

How do I revert back to previously schedule jobs.
Please help. this is urgent.,
Thanks.

lestat_ecuador:
Try to go to /tmp and do
Code:
ls -ltr cron*
Then will appear kind a history of the cron, see the creation dates an look into the one you want. For example I have:
Code:
usuario: > cd /tmp/
usuario: /tmp > ls -ltr cro*
-rw-------   1 ecopge   ecuador      859 Jul 25 08:33 crontabJhaiKP
-rw-------   1 ecppga   ecuador        0 Jul 28 16:00 croutFNZCuVqsb

I already modify my crontab file, it is into croutFNZCuVqsb but my last crontab file is into crontabJhaiKP.

try it and let me know how are you going.

[Oct 17, 2013] How to Recover User crontab -r

House of Linux

This is a quick piece of advice for those who have run crontab -r accidentally and would like to restore it. The letters r and e are close together, so this is a very easy mistake to make.

If you don't have a backup of /var/spool/cron/*user* the only way you could find to recover you crontab commands would be taking a look at the logs of your system.

For example in RedHat / CentOS Linux distributions you can issue: # cat /var/log/cron And you will see a list of commands executed and the time, from there you will be able to rebuild your crontab.

Hope this will help someone...

[Mar 22, 2011] incron-cron-inotify

Unfortunately incron doesn't have recursive auto-subscriptions, i.e. you cannot watch an entire tree and automatically subscribe to new directories being created. Look at lsyncd; although it is primarily meant as a syncing tool, you can configure any action on an event.

incron is very similar in concept and usage to using cron, as the interface is a clone of it.

Each user who is allowed to use incron may use the incrontab command to view, or edit, their rule list. These rules are processed via the daemon, and when a match occurs the relevant command is executed.

To list the current rules you've got defined run "incrontab -l", and to edit them use "incrontab -e". If you do that just now you'll receive the following error message:

rt:~# incrontab  -l
user 'root' is not allowed to use incron

This error may be fixed in one of two ways:

Allow the root user to make use of incron: By editing /etc/incron.allow, adding 'root' to it.

Allowing all local users the ability to use incron: By removing the file /etc/incron.allow.

The user table rows have the following syntax (use one or more spaces between elements):

[Path] [mask] [command]

Where Path is the watched file or directory, mask is a comma-separated list of event flags, and command is the program to be run when a matching event occurs.

The full list of supported flags for mask include:

The mask may additionally contain a special symbol IN_NO_LOOP which disables events occurred during processing the event (to avoid loops).

The command may contain wildcards such as $@ (the watched filesystem path), $# (the name of the file related to the event) and $$ (a literal dollar sign).

Example Usage

/tmp/spool IN_CLOSE_WRITE /usr/local/bin/run-spool $@/$#

This says "Watch /tmp/spool, and when an IN_CLOSE_WRITE event occurs run /usr/local/bin/run-spool with the name of the file that was created".

Create your backups

This small script backs up all files in the /etc and myProject directories.

 vi /root/inotify.sh

--

 #!/bin/sh

 # Create a inotify backup dir (if not exists)
 #
 mkdir /var/backups/inotify

 # Make a copy off the full path and file
 #
 cp -p --parents $1  /var/backups/inotify

 # move the file to a file with datetime-stamp
 #
 mv /var/backups/inotify$1 /var/backups/inotify$1_`date +'%Y-%m-%d_%H:%M'`

Make the file executable for root

 chmod 755 /root/inotify.sh

Open

 incrontab -e

And add:

 /etc IN_CLOSE_WRITE,IN_MODIFY /root/inotify.sh $@/$#
 /home/andries/myProject IN_CLOSE_WRITE /root/inotify.sh $@/$#

So every time a file is written in the watched directory, it is also saved in the backup directory.

Selected Comments

joe :

I am not sure if your script will work as intended

1. mkdir -p /var/backups/inotify -- the -p makes sure the command does not fail when the directory already exists.

2. cp -p --parents $1 /var/backups/inotify -- I have no idea why you need --parents, but -a (archive) may be more useful.

2a. Make use of the cp backup facility: cp --backup=numbered will simply number your backups automatically.

[Mar 21, 2011] Gentoo Linux cron Guide

Good tutorial
Gentoo Linux Documentation

Troubleshooting

If you're having problems getting cron to work properly, the guide walks you through a quick troubleshooting checklist.

[Jan 30, 2010] Cron - cron-like scheduler for Perl subroutines

This module provides a simple but complete cron-like scheduler, i.e. this module can be used for periodically executing Perl subroutines. The dates and parameters for the subroutines to be called are specified with the format known as a crontab entry (see METHODS, add_entry() and crontab(5)).

The philosophy behind Schedule::Cron is to call subroutines periodically from within one single Perl program instead of letting cron trigger several (possibly different) Perl scripts. Everything under one roof. Furthermore, Schedule::Cron provides a mechanism to create crontab entries dynamically, which isn't that easy with cron.

Schedule::Cron knows about all extensions (well, at least all extensions I'm aware of, i.e. those of the so-called "Vixie" cron) for crontab entries, like ranges including steps, specification of months and days of the week by name, or coexistence of lists and ranges in the same field. And even a bit more (like lists and ranges with symbolic names).

[Jan 27, 2010] Using at (@) and Percentage (%) in Crontab

There is an easy way to start a program during system boot. Just put this in your crontab:
@reboot /path/to/my/program
The command will be executed on every (re)boot. Crontab can be modified by running
#crontab -e
Other available Options
string meaning
------ -----------
@reboot Run once, at startup.
@yearly Run once a year, "0 0 1 1 *".
@annually (same as @yearly)
@monthly Run once a month, "0 0 1 * *".
@weekly Run once a week, "0 0 * * 0".
@daily Run once a day, "0 0 * * *".
@midnight (same as @daily)
@hourly Run once an hour, "0 * * * *"
More information about crontab options is available in the crontab(5) man page.

How to Use percentage sign (%) in a crontab entry

Usually, a % is used to denote a new line in a crontab entry. The first % is special in that it denotes the start of STDIN for the crontab entry's command. A trivial example is:
* * * * * cat - % another minute has passed
This would output the text
another minute has passed
After the first %, all other %s in a crontab entry indicate a new line. So a slightly different trivial example is:
* * * * * cat - % another % minute % has % passed 
This would output the text
another
minute
has
passed
Note how the % has been used to indicate a new line.

The problem is how to use a % in a crontab line as a literal % and not as a new line. Many manuals will say escape it with a \. This certainly stops its interpretation as a new line, but the shell running the cron job can leave the \ in. For example:

* * * * * echo '\% another \% minute \% has \% passed'
would output the text
\% another \% minute \% has \% passed
Clearly, not what was intended.

A solution is to pass the text through sed. The crontab example now becomes:

* * * * * echo '\% another \% minute \% has \% passed'| sed -e 's|\\||g'
This would output the text
% another % minute % has % passed
which is what was intended.

This technique is very useful when using a MySQL command within a crontab. MySQL commands often have a % in them. Some examples are:

SET @monyy=DATE_FORMAT(NOW(),"%M %Y")
SELECT * FROM table WHERE name LIKE 'fred%'
So, to have a crontab entry to run the MySQL command
mysql -vv -e "SELECT * FROM table WHERE name LIKE Fred%'" member_list
would have to appear in the crontab as
echo "SELECT * FROM table WHERE name LIKE 'Fred\%'" | sed -e 's|\\||g' | mysql -vv member_list
Pulling the crontab entry apart there is:
the echo command sends the MySQL command to STDOUT where it is piped into
sed which removes any back slashes before sending the output to STDOUT where it is piped into
the mysql command processor which reads its commands from STDIN
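Put together as a full crontab line (the schedule here is arbitrary, purely for illustration):

0 1 * * * echo "SELECT * FROM table WHERE name LIKE 'Fred\%'" | sed -e 's|\\||g' | mysql -vv member_list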

Pycron

Windows-compatible implementation of cron in Python

Internals: Replacing Task Scheduler with Cron

As our dependency upon machines for various tasks grows, time becomes a factor that can work against us when we are pressed by deadlines. Automating such tasks without the need for human intervention becomes vital to freeing up enough time to complete more and more work with little help from human hands.

Task Scheduler, which comes bundled with Windows, attempts to make automation of tasks effortless. Unfortunately, it is not very configurable and is basic in what it is capable of.

On Unix and Linux systems, Cron is what is used for task scheduling. This scheduler is very configurable, and is capable of well more than its Windows counterpart.

This article will discuss using a Cron type system, as used on Unix and Linux systems, to bring the flexibility, scalability and a need for more out of a task automation tool, to the Win32 environment.

Tools:

Pycron from kalab.com

Install the product and make sure to install it as a service (default option on the last dialog box of the installer) if your Win32 operating system supports this.

Then click Start->Run->services.msc (hit enter)

Scroll down to and highlight Task Scheduler->right click->Properties->Toggle to Manual->hit Stop->then Apply->OK

Then scroll up to Python Cron Service->highlight->right click->Properties->Toggle to Automatic->Apply->OK

We will be working with a file called crontab.txt for all of the scheduled entries. This file must be created in the Pycron Program directory which is located in the pycron folder under the Program Files folder.

Create a new file called crontab.txt in the Pycron Program Directory and put this in to it

* * * * * replace replace
Save your file.

Now launch the crontab editor (Start->Programs->Pycron->Pycron CronTab Editor)

By default it will load up the contents of the crontab.txt file in the Pycron Program Directory.

Screenshot

The parameters of a crontab entry from left to right are as follows.

0-59 - Minute
0-23 - Hour
1-31 - Day
1-12 - Month
0-6 - Day of the week (0 for Monday, 6 for Sunday)

Command/Application to execute

Parameter to the application/command.

entry:
Minute Hour Day Month Day_of_the_week Command Parameter

Note:
* is a wildcard and matches all values
/ is every (ex: every 10 minutes = */10) (new in this Win32 version)
, is execute each value (ex: every 10 minutes = 0,10,20,30,40,50)
- is to execute each value in the range (ex: from 1st (:01) to 10th (:10) minute = 1-10)

Double click the 1st entry of "* * * * * replace replace" to edit the entry.

Screenshot

For an example, we will run defrag every Friday at 11:00 PM (23:00) against the C:\ volume.

On the Command line hit Browse, and navigate to your System32 Folder inside your Windows folder and double click on defrag.exe

On the Parameter line enter in c:

Always run a Test Execution to make sure your command is valid. If all was successful, you will see your command/application run and a kick back message of Successful will be observed.

For Minute, erase the * and enter in 0
For Hour, erase the * and enter in 23
For Week Day enter in 4.

Then hit OK, File->Save.

Note: You can use the wizard to enter in values for each entry as well.

Now open up a command prompt (start->run->cmd), and type:
net start pycron

You can leave it running now, and every time you append and or change your crontab.txt, the schedule will be updated.

To add another entry using the crontab GUI, add in a * * * * * replace replace to crontab.txt on the next free line, save it, then open up crontab.txt with the GUI editor and make the desired changes on the entry by double clicking to edit.

It is recommended that every time the GUI is used to edit entries, you observe the resulting entry in crontab.txt. After you become comfortable with the syntax of cron entries, there will be no need for the GUI editor.

The entry for our defrag command becomes:
0 23 * * 4 "C:\WINDOWS\System32\defrag.exe" c:

This same task can be performed with the Task Scheduler with one entry.

Let us go through another example of more complexity, which would not as easily be accomplished with the Task Scheduler.

I want to back up my work files every 3 hours, from Monday through Friday, between the hours of 9AM to 6PM, for all the months of the year. The work folder is C:\WORK and the backup folder is C:\BACKUP.

Open up crontab.txt and on the next free line, enter in * * * * * replace replace, then save it.

Open up the crontab editor and import crontab.txt. Double click the "* * * * * replace replace" entry.

For Command, browse to xcopy located in System32 within your Windows folder.

For Parameter: C:\WORK\* C:\BACKUP /Y

For Minute: 0
For Hour: 9-18/3
For Day: *
For Month: *
For Week Day: 0-4

Click OK->File->Save.

The entry for this task as reflected in our crontab.txt becomes
0 9-18/3 * * 0-4 "C:\WINDOWS\System32\xcopy.exe" C:\WORK\* C:\BACKUP /Y

If we were to schedule the above example with the Task Scheduler that comes with Windows, a separate entry would have to be created for every third-hour mark between the times mentioned above.

Note: Cron can work with your own written batch/script files as well.

Note: You can view other examples in crontab.sample, located in the pycron program directory.

As you can see, Cron has a lot more to offer than the Task Scheduler. Not to say that the Windows application is not usable, but for those scenarios where you need flexibility and configurability without all the extra headaches, this is the ideal replacement. It also proves to be much more efficient in practice.

Further Reading:

Cron Help Guide at linuxhelp.net

Pycron Home

OK to change time of cron.daily

Linux Forums
I want to change the time scripts in /etc/cron.daily run each day from 4:02 to 0:02, so that updatedb (started by slocate.cron) finishes before people start their workday. My question isn't how, but whether this will create any problems.

Does anyone see any problems with running cron.daily at 2 minutes after midnight instead of 2 minutes after 4?

The tasks that run are mostly cleaning up log files and deleting old files; this is Red Hat EL and the daily tasks are all defaults. (ls /etc/cron.daily returns: 00-logwatch 00webalizer certwatch logrotate makewhatis.cron prelink rpm slocate.cron tetex.cron tmpwatch).

Thanks in advance!

===

Originally Posted by Tim65

Does anyone see any problems with running cron.daily at 2 minutes after midnight instead of 2 minutes after 4?

This should be no problem.

However: if you need to apply any patches / security fixes to cron in the future, you will want to confirm that your changes weren't overwritten.
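For reference, on a stock Red Hat EL system the change under discussion amounts to editing one run-parts line in /etc/crontab (a sketch assuming the default layout; your file may differ):

# default: daily jobs start at 4:02
02 4 * * * root run-parts /etc/cron.daily
# proposed: daily jobs start at 0:02
02 0 * * * root run-parts /etc/cron.daily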

===

n.yaghoobi.s : Thanks. I know how to change it - I was just wondering if anyone thought it was a bad idea.

anomie : Thanks. Good point about confirming the change doesn't get undone by patches. I'm going to make the change. If I ever do discover a problem caused by this change, I'll be sure to look up this thread and post the info here.

[Jun 16, 2008] Cron Sandbox

A CGI script that allows you to enter a crontab entry and produces a forward schedule of the times when it will run, for testing.

[May 14, 2007] Neat crontab tricks lxpages.com blog

Linux only shortcuts.

There are several special entries, some of which are just shortcuts, that you can use instead of specifying the full cron entry. The most useful of these is probably @reboot, which allows you to run a command each time the computer is rebooted. You can alert yourself when a server is back online after a reboot, or run certain services or commands at start up. The complete list of special entries is:

Entry      Description            Equivalent To
@reboot    Run once, at startup.  None
@monthly   Run once a month       0 0 1 * *
@weekly    Run once a week        0 0 * * 0
@daily     Run once a day         0 0 * * *
@midnight  (same as @daily)       0 0 * * *
@hourly    Run once an hour       0 * * * *

The most useful again is @reboot. Use it to notify you when your server gets rebooted!
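A minimal sketch of such a notification, assuming working local mail delivery and a hypothetical admin address:

@reboot echo "`hostname` came back up at `date`" | mail -s "server rebooted" root@example.com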

crontab

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only a process with appropriate privileges is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The cron.allow and cron.deny files consist of one user name per line.

Sun_Solaris_AuditGuide

If both cron.allow and cron.deny files exist, cron.deny is ignored.

Access can be controlled either by listing the users permitted to use the command in the files /var/spool/cron/cron.allow and /var/spool/cron/at.allow, or by listing the users not permitted to use the command in the file /var/spool/cron/cron.deny.

Linux tip Controlling the duration of scheduled jobs

A very good article with a lot of examples.

Listing 4. runclock3.sh

[ian@attic4 ~]$ cat ./runclock3.sh
#!/bin/bash
runtime=${1:-10m}
mypid=$$
# Run xclock in background
xclock&
clockpid=$!
echo "My PID=$mypid. Clock's PID=$clockpid"
ps -f $clockpid
#Sleep for the specified time.
sleep $runtime
kill -s SIGTERM $clockpid
echo "All done"

Listing 5 shows what happens when you execute runclock3.sh. The final kill command confirms that the xclock process (PID 9285) was, indeed, terminated.


Listing 5. Verifying the termination of child processes
             
[ian@attic4 ~]$ ./runclock3.sh 5s
My PID=9284. Clock's PID=9285
UID        PID  PPID  C STIME TTY      STAT   TIME CMD
ian       9285  9284  0 22:14 pts/1    S+     0:00 xclock
All done
[ian@attic4 ~]$ kill -0 9285
bash: kill: (9285) - No such process
If you omit the signal specification, then SIGTERM is the default signal. The SIG part of a signal name is optional. Instead of using -s and a signal name, you can just prefix the signal number with a -, so the four forms shown in Listing 6 are equivalent ways of killing process 9285. Note that the special value -0, as used in Listing 5 above, tests whether a signal could be sent to a process.

Listing 6. Ways to specify signals with the kill command

             
kill -s SIGTERM 9285
kill -s TERM 9285
kill -15 9285
kill 9285

If you need just a one-shot timer to drive an application, such as you have just seen here, you might consider the timeout command, which is part of the AppleTalk networking package (Netatalk). You may need to install this package (see Resources below for details), since most installations do not include it automatically.

Other termination conditions

You now have the basic tools to run a process for a fixed amount of time. Before going deeper into signal handling, let's consider how to handle other termination requirements, such as repetitively capturing information for a finite time, terminating when a file becomes a certain size, or terminating when a file contains a particular string. This kind of work is best done using a loop, such as for, while, or until, with the loop executed repeatedly with some built-in delay provided by the sleep command. If you need finer granularity than seconds, you can also use the usleep command.
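As a minimal sketch of such a loop (the log file name, size limit, and ERROR marker are assumptions for illustration, not from the article):

#!/bin/bash
# poll a hypothetical log every 5 seconds; stop when it exceeds 1 MB or contains ERROR
logfile=/tmp/job.log
while sleep 5; do
  size=$(stat -c %s "$logfile" 2>/dev/null || echo 0)
  [ "$size" -gt 1048576 ] && break                  # larger than 1 MB
  grep -q ERROR "$logfile" 2>/dev/null && break     # contains the marker string
done
echo "Termination condition reached for $logfile"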

You can add a second hand to the clock, and you can customize colors. Use the showrgb command to explore available color names. Suppose you use the command xclock -bg Thistle -update 1& to start a clock with a second hand, and a Thistle-colored background.

Now you can use a loop with what you have learned already to capture images of the clock face every second and then combine the images to make an animated GIF image. Listing 7 shows how to use the xwininfo command to find the window id for the xclock command. Then use ImageMagick command-line tools to capture 60 clock face images at one-second intervals (see Resources for details on ImageMagick). And finally combine these into an infinitely looping animated GIF file that is 50% of the dimensions of the original clock.


Listing 7. Capturing images one second apart
             
[ian@attic4 ~]$ cat getclock.sh
#!/bin/bash
windowid=$(xwininfo -name "xclock"| grep '"xclock"' | awk '{ print $4 }')
sleep 5
for n in `seq 10 69`; do
  import -frame  -window $windowid clock$n.gif&
  sleep 1s
#  usleep 998000
done
convert -resize 50% -loop 0 -delay 100 clock?[0-9].gif clocktick.gif
[ian@attic4 ~]$ ./getclock.sh
[ian@attic4 ~]$ file clocktick.gif
clocktick.gif: GIF image data, version 89a, 87 x 96

Timing of this type is always subject to some variation, so the import command to grab the clock image is run in the background, leaving the main shell free to keep time. Nevertheless, some drift is likely to occur because it does take a finite amount of time to launch each subshell for the background processing. This example also builds in a 5-second delay at the start to allow the shell script to be started and then give you time to click on the clock to bring it to the foreground. Even with these caveats, some of my runs resulted in one missed tick and an extra copy of the starting tick because the script took slightly over 60 seconds to run. One way around this problem would be to use the usleep command with a number of microseconds that is enough less than one second to account for the overhead, as shown by the commented line in the script. If all goes as planned, your output image should be something like that in Figure 2.


Figure 2. A ticking xclock

This example shows you how to take a fixed number of snapshots of some system condition at regular intervals. Using the techniques here, you can take snapshots of other conditions. You might want to check the size of an output file to ensure it does not pass some limit, or check whether a file contains a certain message, or check system status using a command such as vmstat. Your needs and your imagination are the only limits.

Signals and traps

If you run the getclock.sh script of Listing 7 yourself, and you close the clock window while the script is running, the script will continue to run but will print error messages each time it attempts to take a snapshot of the clock window. Similarly, if you run the runclock3.sh script of Listing 4, and press Ctrl-c in the terminal window where the script is running, the script will immediately terminate without shutting down the clock. To solve these problems, your script needs to be able to catch or trap some of the signals discussed in Terminating a child process.

If you execute runclock3.sh in the background and run the ps -f command while it is running, you will see output similar to Listing 8.


Listing 8. Process information for runclock3.sh
             
[ian@attic4 ~]$ ./runclock3.sh 20s&
[1] 10101
[ian@attic4 ~]$ My PID=10101. Clock's PID=10102
UID        PID  PPID  C STIME TTY      STAT   TIME CMD
ian      10102 10101  0 06:37 pts/1    S      0:00 xclock
ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
ian       4598 12455  0 Jul29 pts/1    00:00:00 bash
ian      10101  4598  0 06:37 pts/1    00:00:00 /bin/bash ./runclock3.sh 20s
ian      10102 10101  0 06:37 pts/1    00:00:00 xclock
ian      10104 10101  0 06:37 pts/1    00:00:00 sleep 20s
ian      10105  4598  0 06:37 pts/1    00:00:00 ps -f
[ian@attic4 ~]$ All done

[1]+  Done                    ./runclock3.sh 20s

Note that the ps -f output has three entries related to the runclock3.sh process (PID 10101). In particular, the sleep command is running as a separate process. One way to handle premature death of the xclock process or the use of Ctrl-c to terminate the running script is to catch these signals and then use the kill command to kill the sleep command.

There are many ways to accomplish the task of determining the process of the sleep command. Listing 9 shows the latest version of our script, runclock4.sh. Note in particular the stopsleep function, the set -bm command that enables immediate SIGCHLD notification, the trap set on the CHLD, INT and TERM signals, and the wait on the sleep process.


Listing 9. Trapping signals with runclock4.sh
             
[ian@attic4 ~]$ cat runclock4.sh
#!/bin/bash

stopsleep() {
  sleeppid=$1
  echo "$(date +'%T') Awaken $sleeppid!"
  kill -s SIGINT $sleeppid >/dev/null 2>&1
}

runtime=${1:-10m}
mypid=$$
# Enable immediate notification of SIGCHLD
set -bm
# Run xclock in background
xclock&
clockpid=$!
#Sleep for the specified time.
sleep $runtime&
sleeppid=$!
echo "$(date +'%T') My PID=$mypid. Clock's PID=$clockpid sleep PID=$sleeppid"
# Set a trap
trap 'stopsleep $sleeppid' CHLD INT TERM
# Wait for sleeper to awaken
wait $sleeppid
# Disable traps
trap SIGCHLD
trap SIGINT
trap SIGTERM
# Clean up child (if still running)
echo "$(date +'%T') terminating"
kill -s SIGTERM $clockpid >/dev/null 2>&1 && echo "$(date +'%T') Stopping $clockpid"
echo "$(date +'%T') All done"

Listing 10 shows the output from running runclock4.sh three times. The first time, everything runs to its natural completion. The second time, the xclock is prematurely closed. And the third time, the shell script is interrupted with Ctrl-c.


Listing 10. Stopping runclock4.sh in different ways
             
[ian@attic4 ~]$ ./runclock4.sh 20s
09:09:39 My PID=11637. Clock's PID=11638 sleep PID=11639
09:09:59 Awaken 11639!
09:09:59 terminating
09:09:59 Stopping 11638
09:09:59 All done
[ian@attic4 ~]$ ./runclock4.sh 20s
09:10:08 My PID=11648. Clock's PID=11649 sleep PID=11650
09:10:12 Awaken 11650!
09:10:12 Awaken 11650!
[2]+  Interrupt               sleep $runtime
09:10:12 terminating
09:10:12 All done
[ian@attic4 ~]$ ./runclock4.sh 20s
09:10:19 My PID=11659. Clock's PID=11660 sleep PID=11661
09:10:22 Awaken 11661!
09:10:22 Awaken 11661!
09:10:22 Awaken 11661!
[2]+  Interrupt               sleep $runtime
09:10:22 terminating
09:10:22 Stopping 11660
./runclock4.sh: line 31: 11660 Terminated              xclock
09:10:22 All done

Note how many times the stopsleep function is called as evidenced by the "Awaken" messages. If you are not sure why, you might try making a separate copy of this function for each interrupt type that you catch and see what causes the extra calls.

You will also note that some job control messages tell you about termination of the xclock command and interrupting the sleep command. When you run a job in the background with default bash terminal settings, bash normally catches SIGCHLD signals and prints a message after the next terminal output line is printed. The set -bm command in the script tells bash to report SIGCHLD signals immediately and to enable job control monitoring. The alarm clock example in the next section shows you how to suppress these messages.

An alarm clock

Our final exercise returns to the original problem that motivated this article: how to record a radio program. We will actually build an alarm clock. If your laws allow recording of such material for your proposed use, you can build a recorder instead by adding a program such as vsound.

For this exercise, we will use the GNOME rhythmbox application to illustrate some additional points. Even if you use another media player, this discussion should still be useful.

An alarm clock could make any kind of noise you want, including playing your own CDs, or MP3 files. In central North Carolina, we have a radio station, WCPE, that broadcasts classical music 24 hours a day. In addition to broadcasting, WCPE also streams over the Internet in several formats, including Ogg Vorbis. Pick your own streaming source if you prefer something else.

To start rhythmbox from an X Windows terminal session playing the WCPE Ogg Vorbis stream, you use the command shown in Listing 11.


Listing 11. Starting rhythmbox with the WCPE Ogg Vorbis stream
             
rhythmbox --play http://audio-ogg.ibiblio.org:8000/wcpe.ogg

The first interesting point about rhythmbox is that the running program can respond to commands, including a command to terminate. So you don't need to use the kill command to terminate it, although you still could if you wanted to.

The second point is that most media players, like the clock that we have used in the earlier examples, need a graphical display. Normally, you run commands with the cron and at facilities at some point when you may not be around, so the usual assumption is that these scheduled jobs do not have access to a display. The rhythmbox command allows you to specify a display to use. You probably need to be logged on, even if your screen is locked, but you can explore those variations for yourself. Listing 12 shows the alarmclock.sh script that you can use for the basis of your alarm clock. It takes a single parameter, which specifies the amount of time to run for, with a default of one hour.


Listing 12. The alarm clock - alarmclock.sh
             
[ian@attic4 ~]$ cat alarmclock.sh
#!/bin/bash

cleanup () {
  mypid=$1
  echo "$(date +'%T') Finding child pids"
  ps -eo ppid=,pid=,cmd= --no-heading | grep "^ *$mypid"
  ps $playerpid >/dev/null 2>&1 && {
    echo "$(date +'%T') Killing rhythmbox";
    rhythmbox --display :0.0 --quit;
    echo "$(date +'%T') Killing rhythmbox done";
  }
}

stopsleep() {
  sleeppid=$1
  echo "$(date +'%T') stopping $sleeppid"
  set +bm
  kill $sleeppid >/dev/null 2>&1
}

runtime=${1:-1h}
mypid=$$
set -bm
rhythmbox --display :0.0 --play http://audio-ogg.ibiblio.org:8000/wcpe.ogg&
playerpid=$!
sleep $runtime& >/dev/null 2>&1
sleeppid=$!
echo "$(date +'%T') mypid=$mypid player pid=$playerpid sleeppid=$sleeppid"
trap 'stopsleep $sleeppid' CHLD INT TERM
wait $sleeppid
echo "$(date +'%T') terminating"
trap SIGCHLD
trap SIGINT
trap SIGTERM
cleanup $mypid final
wait

Note the use of set +bm in the stopsleep function to reset the job control settings and suppress the messages that you saw earlier with runclock4.sh.

Listing 13 shows an example crontab that will run the alarm from 6 a.m. to 7 a.m. each weekday (Monday to Friday) and from 7 a.m. for two hours each Saturday and from 8:30 a.m. for an hour and a half each Sunday.


Listing 13. Sample crontab to run your alarm clock
             
0 6 * * 1-5 /home/ian/alarmclock.sh 1h
0 7 * * 6 /home/ian/alarmclock.sh 2h
30 8 * * 0 /home/ian/alarmclock.sh 90m

Refer to our previous tip Job scheduling with cron and at to learn how to set your own crontab for your new alarm clock.

In more complex tasks, you may have several child processes. The cleanup routine shows how to use the ps command to find the children of your script process. You can extend the idea to loop through an arbitrary set of children and terminate each one.
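A minimal sketch of that extension, using GNU/Linux style ps options (adjust for other systems):

# terminate every direct child of the current script, whatever it happens to be
for pid in $(ps -eo ppid=,pid= | awk -v parent=$$ '$1 == parent { print $2 }'); do
  kill -s SIGTERM "$pid" 2>/dev/null
done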

If you'd like to know more about administrative tasks in Linux, read the tutorial "LPI exam 102 prep: Administrative tasks."


find tip...

Date: Sat, 14 Sep 1996 19:50:55 -0400 (EDT)
From: Bill Duncan <bduncan@beachnet.org>
Subject: find tip...

Hi Jim Murphy,

Saw your "find" tip in issue #9, and thought you might like a quicker method. I don't know about other distributions, but Slackware and Redhat come with the GNU versions of locate(1) and updatedb(1) which use an index to find the files you want. The updatedb(1) program should be run once a night from the crontab facility. To ignore certain sub-directories (like your /cdrom) use the following syntax for the crontab file:

41 5 * * *  updatedb --prunepaths="/tmp /var /proc /cdrom" > /dev/null 2>&1

This would run every morning at 5:41am, and update the database with filenames from everywhere but the subdirectories (and those below) the ones listed.

#3959 crontab administration usage and troubleshooting techniques

Common Problems/Questions and Solutions/Answers:

Q: I edited the crontab file but the commands still don't get executed.

A: Be sure user is not editing the crontab file directly with a simple text editor such as vi. Use crontab -e which will invoke
the vi editor and then signal cron that changes have been made. Cron will only read the crontab file when the daemon
is started, so if crontab has been edited directly, cron will need to be killed, /etc/cron.d/FIFO removed, and the cron daemon
restarted in order to recover the situation.

Q: I deleted all my crontab entries using crontab -e but crontab -l shows that they are still there.

A: Use crontab -r to remove an entire crontab file. Crontab -e does not know what to do with empty files,
so it does not update any changes.

Q: Can I use my **** editor ?

A: Yes, by setting the environment variable EDITOR to ****.
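For example (emacs is used here purely as an illustration; substitute your editor of choice):

export EDITOR=emacs
crontab -e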

Q: Why do I receive email when my cron job dies?
A: Because there is no standard output for it to write to. To avoid this, redirect the output of the command to a
device (/dev/console, /dev/null) or a file.
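For example, an entry like the following discards both output streams so cron has nothing to mail (the script path is hypothetical):

0 2 * * * /usr/local/bin/nightly_cleanup > /dev/null 2>&1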

Q: If I have a job that is running and my system goes down, will that job complete once the system is brought back up?

A: No, the job will not run again or pick up where it left off.

Q: If a job is scheduled to run at a time when the system is down, will that job run once the system is brought back up?

A: No, the job will not be executed once the system comes back up.

Q: How can I check if my cron is running correctly ?

A: Add the entry * * * * * date > /dev/console to your crontab file. It should print the date in the console every minute.

Q: How can I regulate who can use cron?
A: The file /var/spool/cron/cron.allow can be used to regulate who can submit cron jobs.

If /var/spool/cron/cron.allow does not exist, then crontab checks /var/spool/cron/cron.deny to see who should not be
allowed to submit jobs.

If both files are missing only root can run cron jobs.

TROUBLESHOOTING CRON

If a user is experiencing a problem with cron, ask the user the following few questions to help debug the problem.

1. Is the cron daemon running?

#ps -ef |grep cron

2. Is there any cron.allow/deny file?

#ls -lt /etc/cron*

3. Is it the root crontab or a non-su crontab?

#crontab -e "USER NAME"

4. If you are calling a script through crontab, does the script run from the command line?

    	Run the script at the command line and look for errors

5. Check that the first 5 fields of an entry are VALID or NOT commented out.

(minute, hours, day of the month, month and weekday)

6. Check for crontab related patches.

(check with sunsolve and the solaris version installed on the system
 for exact patch match)

7. Check for recommended and security related patches?

(recommend to the customer to install all recommended and security patches
   relevant to the OS installed)

8. How did you edit crontab?

#crontab -e "user name"

9. How did you stop/kill the cron daemon?

#/etc/init.d/cron stop and start                      

crontab_header

Many times admins forget the field order of the crontab file and always have to reference the man pages over and over.

Make your life easy: just put the field definitions in your crontab file and comment (#) the lines out so that cron ignores them.

# minute (0-59),
# |      hour (0-23),
# |      |       day of the month (1-31),
# |      |       |       month of the year (1-12),
# |      |       |       |       day of the week (0-6 with 0=Sunday).
# |      |       |       |       |       commands
  3      2       *       *       0,6     /some/command/to/run
  3      2       *       *       1-5     /another/command/to/run


Dru Lavigne 09/27/2000

Reference

crontab command creates, lists or edits the file containing control statements to be interpreted by the cron command (cron table). Each statement consists of a time pattern and a command. The cron program reads your crontab file and executes the commands at the time specified in the time patterns. The commands are usually executed by a Bourne shell (sh).

The crontab command copies a file or the standard input into a directory that contains all users' crontab files. You can use crontab to remove your crontab file or display it. You cannot access other users' crontab files in the crontab directory.

COMMAND FORMAT

Following is the general format of the crontab command.

     crontab    [ file ]
     crontab -e [ username ]
     crontab -l [ username ]
     crontab -r [ username ]

Options

The following options may be used to control how crontab functions.

-e Edit your crontab file using the editor defined by the EDITOR variable.
-r Removes your current crontab file. If username is specified then remove that user's crontab file. Only root can remove other users' crontab files.
-l List the contents of your current crontab file.

Crontab File Format

The crontab file contains lines that consist of six fields separated by blanks (tabs or spaces). The first five fields are integers that specify the time the command is to be executed by cron. The following table defines the ranges and meanings of the first five fields.


Field Range Meaning

1 0-59 Minutes
2 0-23 Hours (Midnight is 0, 11 P.M. is 23)
3 1-31 Day of Month
4 1-12 Month of the Year
5 0-6 Day of the Week (Sunday is 0, Saturday is 6)

Each field can contain an integer, a range, a list, or an asterisk (*). The integers specify exact times. The ranges specify a range of times. A list consists of integers and ranges. The asterisk (*) indicates all legal values (all possible times).

The following examples illustrate the format of typical crontab time patterns.


Time Pattern Description

0 0 * * 5 Run the command only on Friday at midnight.
0 6 1,15 * 1 Run the command at 6 a.m. on the first and fifteenth of each month and every Monday.
00,30 7-20 * * * Run the command every 30 minutes from 7 a.m. to 8 p.m. every day.


NOTE:
The day of the week and day of the month fields are interpreted separately if both are defined. To specify days to run by only one field, the other field must be set to an asterisk (*). In this case the asterisk means that no times are specified.


The sixth field contains the command that is executed by cron at the specified times. The command string is terminated by a new-line or a percent sign (%). Any text following the percent sign is sent to the command as standard input. The percent sign can be escaped by preceding it with a backslash (\%).

A line beginning with a # sign is a comment.

Each command in a crontab file is executed via the shell. The shell is invoked from your HOME directory (defined by $HOME variable). If you wish to have your (dot) .profile executed, you must specify so in the crontab file. For example,

     0 0 * * 1   . ./.profile ; databaseclnup

would cause the shell started by cron to execute your .profile, then execute the program databaseclnup. If you do not have your own .profile executed to set up your environment, cron supplies a default environment. Your HOME, LOGNAME, SHELL, and PATH variables are set. The HOME and LOGNAME are set appropriately for your login. SHELL is set to /bin/sh and PATH is set to :/bin:/usr/bin:/usr/lbin.


NOTE:
Remember not to have any read commands in your .profile which prompt for input. This causes problems when the cron job executes.


Command Output

If you do not redirect the standard output and standard error of a command executed from your crontab file, the output is mailed to you.

Access

To use the crontab command you must have access permission. Your system administrator can make the crontab command available to all users, specific users, or no users. Two files are used to control who can and cannot access the command. The cron.allow file contains a list of all users who are allowed to use crontab. The cron.deny file contains a list of all users who are denied access to crontab. If the cron.allow file does not exist and the cron.deny file exists but is empty, then all users can use the crontab command. If neither file exists, then no users other than the super-user can use crontab.

Displaying your crontab

If you have a crontab file in the system crontab area, you can list it by typing crontab -l. If you do not have a crontab file, crontab returns the following message:

crontab: can't open your crontab file.

DIAGNOSTICS AND BUGS

The crontab command will complain about various syntax errors and time patterns not being in the valid range.


CAUTION:
If you type crontab and press Return without a filename, the standard input is read as the new crontab entries. Therefore, if you inadvertently enter crontab this way and you want to exit without destroying the contents of your current crontab file, press the Del key. Do not press the Ctrl-D key; if you do, your crontab file will only contain what you have currently typed.


Related Files

/usr/sbin/cron.d The main directory for the cron process.
/usr/sbin/cron.d/log Accounting information for cron processing.
/usr/sbin/cron.d/crontab.allow A file containing a list of users allowed to use crontab.
/usr/sbin/cron.d/crontab.deny A file containing a list of users not allowed to use crontab.
/usr/spool/cron/crontabs Location of crontab text to be executed.


cron.allow and cron.deny

Crontab supports two files:

/etc/cron.allow
/etc/cron.deny

If cron.allow exists then you MUST be listed in it to use crontab (so make sure system accounts like root are listed); this is very effective for limiting cron to a small number of users. If cron.allow does not exist, then cron.deny is checked, and if it exists you will not be allowed to use crontab if you are listed in it ("locked out").

In both cases users are listed one per line, so you can use something like:

cat /etc/passwd | cut -d":" -f 1 | fgrep -v -f /etc/cron.deny > /etc/cron.allow

to populate it, and then delete the system accounts and unnecessary user accounts from the resulting file.

allow users crontab access

I assume you are on a Linux system. Then you have a small syntax error in viewing other users' crontabs; try "crontab -l -u username" instead.

Here is how it works: Two config files, /etc/cron.deny and /etc/cron.allow (on SuSE systems these files are /var/spool/cron.deny and .../allow), specify who can use crontab.

If the allow file exists, then it contains a list of all users that may submit crontabs, one per line. No unlisted user can invoke the crontab command. If the allow file does not exist, then the deny file is checked.

If neither the allow file nor the deny file exists, only root can submit crontabs.

This seems to be your case, so you should create one of these files ... on my system I have a deny file just containing user "guest", so all others are allowed.

One caveat: this access control is implemented by crontab, not by cron. If a user manages to put a crontab file into the appropriate directory by other means, cron will blindly execute ...

[from the book "Linux Administration Handbook" by Nemeth/Snyder/Hein and validated locally here]

docs.sun.com System Administration Guide Advanced Administration Controlling Access to the crontab.

You can control access to the crontab command by using two files in the /etc/cron.d directory: cron.deny and cron.allow. These files permit only specified users to perform the crontab command tasks such as creating, editing, displaying, or removing their own crontab files.

The cron.deny and cron.allow files consist of a list of user names, one per line. These access control files work together as follows: if cron.allow exists, only the users listed in it can use crontab; if cron.allow does not exist, all users except those listed in cron.deny can use crontab; if neither file exists, only a user with superuser privileges can use crontab.

Superuser privileges are required to edit or create the cron.deny and cron.allow files.

The cron.deny file, created during SunOS software installation, contains the following user names:
daemon
bin
smtp
nuucp
listen
nobody
noaccess

None of the user names in the default cron.deny file can access the crontab command. You can edit this file to add other user names that will be denied access to the crontab command.

No default cron.allow file is supplied. So, after Solaris software installation, all users (except the ones listed in the default cron.deny file) can access the crontab command. If you create a cron.allow file, only the users listed in it can access the crontab command.

To verify if a specific user can access crontab, use the crontab -l command while you are logged into the user account. $ crontab -l

If the user can access crontab, and already has created a crontab file, the file is displayed. Otherwise, if the user can access crontab but no crontab file exists, a message such as the following is displayed: crontab: can't open your crontab file

This user either is listed in cron.allow (if the file exists), or the user is not listed in cron.deny.

If the user cannot access the crontab command, the following message is displayed whether or not a previous crontab file exists: crontab: you are not authorized to use cron. Sorry.

This message means that either the user is not listed in cron.allow (if the file exists), or the user is listed in cron.deny.

Determining if you have crontab access is relatively easy. A Unix system administrator has two possible files to help manage the use of crontab. The administrator can explicitly give permission to specific users by entering their user identification in the file:

/etc/cron.d/cron.allow

Alternatively, the administrator can let anyone use crontab and exclude specific user with the file:

/etc/cron.d/cron.deny

To determine how your system is configured, first enter the following at the command line:

more /etc/cron.d/cron.allow

If you get the message, "/etc/cron.d/cron.allow: No such file or directory" you're probably in fat city. One last step, make sure you are not specifically excluded. Go back to the command line and enter:

more /etc/cron.d/cron.deny

If the file exists and you're not listed in it, skip to the setup instructions. If there are entries in the cron.allow file and you're not among the chosen few, or if you are listed in the cron.deny file, you will have to contact the administrator and tell him/her you are an upstanding citizen who would like to be able to schedule crontab jobs.

In summary, users are permitted to use crontab if their names appear in the file /etc/cron.d/cron.allow. If that file does not exist, the file /etc/cron.d/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only the system administrator -- or someone with root access -- is allowed to submit a job. If cron.allow does not exist and cron.deny exists but is empty, global usage is permitted. The allow/deny files consist of one user name per line.
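The whole decision procedure can be wrapped into a small check, shown here as a sketch assuming the Solaris-style /etc/cron.d paths used above (other systems keep the files in /etc or /var/spool/cron):

#!/bin/sh
# report whether the current user may use crontab, per the allow/deny rules above
user=${LOGNAME:-`id -un`}
allow=/etc/cron.d/cron.allow
deny=/etc/cron.d/cron.deny
if [ -f "$allow" ]; then
    grep -qx "$user" "$allow" && echo "allowed" || echo "not allowed (not in cron.allow)"
elif [ -f "$deny" ]; then
    grep -qx "$user" "$deny" && echo "denied (listed in cron.deny)" || echo "allowed"
else
    echo "neither file exists: only root may use crontab"
fi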


