Softpanorama
May the source be with you, but remember the KISS principle ;-)


Find Command Webliography


This is a webliography for my Find Tutorial.

Find is a tricky utility, unlike any other Unix tool, yet it is one of the most important sysadmin tools. You need to put in extra effort to learn to use it correctly (and even then, without testing, nothing is guaranteed in complex cases ;-).

Find is useful not only for finding files; it is also important for finding directories and can greatly simplify filesystem navigation. See Advanced Unix filesystem navigation and NCD clones for more details (in NCD clones, find is often used to create a list of directories for the whole filesystem or some subtree, which is then stored for future reference).

It uses non-standard regular expressions and has a very obscure mini-language for specifying queries. As one Softpanorama reader wrote: "find's syntax is NOT regular, does not recognize standard regex notation, and is arcane, irregular and infuriating."

That's why, along with this page, we created a mini-tutorial. I hope it can save you some time trying to make your complex find queries work (they almost never worked for me without debugging :-).

There are some interesting options in POSIX find (as described in the Solaris man page).

For more information see Unix Find Tutorial



Old News ;-)

[Oct 11, 2010] Unix find command helper

A useful web form to generate typical find command options.

[Aug 11, 2009] All about Linux Input-Output redirection made simple in Linux

I can give one practical purpose for this error redirection, which I use on a regular basis. When I am searching for a file across the whole hard disk as a normal user, I get a lot of errors such as:
find: /file/path: Permission denied

In such situations I use the error redirection to weed out these error messages as follows:

$ find / -iname \* 2> /dev/null
Now all the error messages are redirected to /dev/null device and I get only the actual find results on the screen.
 

Note: /dev/null is a special kind of file whose size is always zero, so whatever you write to it simply disappears. Its opposite is /dev/zero, which acts as an infinite source of zero bytes. You can use /dev/zero, for example, to create a file of any size, such as when creating a swap file.
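As a sketch of that last point, dd can read from /dev/zero to create a fixed-size file (the file name and 64 MB size here are arbitrary examples, not anything prescribed by the text above):

```shell
# Create a 64 MB file filled with zero bytes (e.g. as a future swap file).
# bs is the block size, count the number of blocks: 64 blocks of 1 MB.
dd if=/dev/zero of=/tmp/swapfile.example bs=1M count=64 2>/dev/null

# Inspect the result, then clean up.
ls -l /tmp/swapfile.example
rm -f /tmp/swapfile.example
```

For an actual swap file you would follow this with mkswap and swapon, which are outside the scope of this note.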

[Mar 16, 2009] List only file and not directory - ls command

I am not sure whether "ls" offers any standard option to list only file names and not directories.

Here are some of the ways I found useful to list only files, and not directories, in the current directory.

Using find:

$ find . -maxdepth 1 -type f

And if you want to avoid the hidden files.

$ find . -maxdepth 1 -type f \( ! -iname ".*" \)

And one more way derived from "ls -l" output

$ ls -l | awk 'NR!=1 && !/^d/ {print $NF}'

[Sep 26, 2008] What’s GNU, Part Four find Linux Magazine by Jerry Peek

Sep 22, 2008 |  www.linux-mag.com

... ... ...

The -path test, which was added fairly early to many find versions, is a shell wildcard-type pattern match against the entire current pathname. So, the test -path '*src/*.c' gets close to what we want here: it matches any pathname containing src, followed by any number of characters and a literal .c. That could be a file ./src/foo.c, but it could also be a file ./src/subdir/bar.c, or ./TeXsrc/foo.c, or something even messier. The wide-open matching of * meaning “zero or more of any character” can cause trouble when you need a specific pathname match.

The GNU find has several other name tests:

Another new test is -lname, which matches the target of a symbolic link. (Using other name tests, like -name, matches the name of the symlink itself.) The corresponding -ilname test does case-insensitive matching of the symlink target.

There are two other new tests and options for symbolic links:

Timestamp matching

Older versions of find matched timestamps only in 24-hour intervals. For instance, the tests -mtime -3 and -mtime 2 are both true for files modified between 72 and 48 hours ago. Besides being a bit hard to understand at first, the three timestamp tests (-atime, -ctime and -mtime) also are limited to 24-hour granularity. If you needed more accuracy, you’d have to use -newer or ! -newer to match a timestamp file — often one created by touch(1). (Worse yet, many versions of find would silently ignore more than one -newer test in the same expression!)

The new -amin, -cmin and -mmin tests check timestamps a certain number of minutes ago. For instance, to find files accessed within the past hour, use -amin -60. (Note that it’s hard to test last-access times for directories. That’s because, when find searches through a file tree, it accesses all of the directories — which updates all directories’ last-access timestamps.)
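A small self-contained illustration of -mmin in a scratch directory (this assumes GNU touch, whose -d option is used here to back-date a file):

```shell
dir=$(mktemp -d)
touch "$dir/fresh"                      # modified right now
touch -d '2 hours ago' "$dir/stale"     # back-dated (GNU touch extension)

find "$dir" -type f -mmin -30           # prints only .../fresh
rm -rf "$dir"
```

As in the -mtime case, -mmin -30 means "less than 30 minutes ago" and -mmin +30 would mean "more than 30 minutes ago".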

Another new option, -daystart, tells find to measure times from the beginning of today instead of in 24-hour multiples. This frees you from dependence on the time of day at which you run find.

Directory Control

Early versions didn’t give you much control over which directories find visited. Once -prune was added, you could write an expression to keep find from descending into certain directories. For instance, to keep from descending into the ./src subdirectory, you can do something like this:

find . -path ./src -prune -o -etc…

And to skip all directories named lib (and all of their subdirectories):

find . -name lib -prune -o -etc…

The -prune action is good for avoiding certain directories, but — without the regular expression tests added later, at least — it’s not so good for limiting searches to a particular depth. In particular, it may not be obvious how to process only the entries in the current directory without any recursion. (The answer with -prune is:

find . \( -type d ! -name . -prune \) \
   -o -etc…

which “prunes” all directories except the current directory “.”.)

The new -mindepth and -maxdepth options make this a lot easier. Use -maxdepth n to descend no more than n levels below the command-line arguments. The option -maxdepth 0 tells find to evaluate only the command-line arguments.

In the same way, -mindepth n tells find to ignore the first n levels of subdirectories. Also, -mindepth 1 processes all files except the command-line arguments. For instance, find subdir -mindepth 1 -ls will descend into subdir and list each of its contents, but won't list subdir itself.
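The two options are easy to see in action in a scratch directory (the names here are arbitrary):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/sub/deeper"
touch "$dir/top.txt" "$dir/sub/mid.txt" "$dir/sub/deeper/low.txt"

find "$dir" -maxdepth 1 -type f   # only top.txt: no descent below level 1
find "$dir" -mindepth 2 -type f   # mid.txt and low.txt: level 1 is skipped
rm -rf "$dir"
```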

The -depth option has been in quite a few versions of find; it’s not as “new” as some of the other features we cover. It’s not related to -maxdepth or -mindepth, though. The sidebar “-depth explained” has more information about how this option is used.

-depth explained

Because find is often used to feed filenames to archive programs like tar, it's worth understanding this part of -depth's purpose.

A tar archive is a stream of bytes that contain header information for each file (including its name and access permissions) followed by that file’s data. The archive is extracted in order from first byte to last.

Let’s say that you archive an unwritable directory. When you later extract that directory from the archive, its permissions will be set at the time it’s extracted:

One “new” addition — which is actually in a lot of find versions — is -xdev or -mount. (GNU find understands both of those.) It tells find not to descend into directories mounted from other filesystems. This is handy, for example, to avoid network-mounted filesystems.

A more specific test is -fstype, which tests true if a file is on a certain type of filesystem. For instance, ! -fstype nfs is true for a file that’s not on an NFS-type filesystem. Different systems have different filesystem names and types, though. To get a listing of what’s on your system, use the new -printf action with its %F format directive to display the filesystems from the second field of each /etc/mtab entry:

% find / /proc /dev/pts /dev/shm -maxdepth 0 -printf '%-20p type %F\n'
/                    type ext3
/proc                type proc
/dev/pts             type devpts
/dev/shm             type tmpfs

(You’ll probably find that same data in the second and third fields of each entry in /proc/mounts.)

Text Output

Early versions of find had basically one choice for outputting a pathname: print it to the standard output. Later, -ls was added; it gives an output format similar to ls -l. The new -printf action lets you use a C-like printf format. This has the usual format specifiers like the filename and the last-modification date, but it has others specific to find. For instance, %H tells you which command-line argument find was processing when it found this entry. One simple use for this is to make your own version of ls that gives just the information you want. As an example, the following bash function, findc, searches the command-line arguments (or, if there are no arguments, the current directory . instead) and prints information about all filenames ending with .c:

findc()
{
  find "${@-.}" -name '*.c' -printf \
    'DEPTH %2d  GROUP %-10g  NAME %f\n'
}

(Note that the stat(1) utility might be simpler to use if you want a recursive listing and if stat’s format specifiers give the information you want.)

The longstanding -print action writes a pathname to the standard output, followed by a newline character. If that pathname happens to contain a newline, you get two newlines. (A newline is legal in a filename.) Most shells also break command-line arguments into words at whitespace (tabs, spaces and newlines); this means that command substitution (the backquote operators) could fail if, say, a filename contained spaces. It wasn’t too long before programmers fixed this problem by adding the -print0 action; it outputs a pathname followed by NUL (a zero byte). Because NUL isn’t legal in a filename, this pathname delimiter solved the problem — when find’s output was piped to the command xargs -0, which accepts NUL as an argument separator.
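A minimal demonstration of why the NUL-delimited pairing matters (scratch directory, arbitrary names):

```shell
dir=$(mktemp -d)
touch "$dir/plain.txt" "$dir/with space.txt"

# Whitespace-splitting would break "with space.txt" into two arguments;
# -print0 | xargs -0 keeps each pathname intact.
find "$dir" -type f -print0 | xargs -0 ls -l
rm -rf "$dir"
```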

Because find can do many different tests as it traverses a filesystem, it’s good to be able to choose what should be done in each individual case. For instance, if you run a nightly cron job to clean up various files and directories from all of your disks, it’s nice to do all of the tests in a single pass through the filesystem — instead of making another complete pass for each of your tests. But it’s also good to avoid the overhead of running utilities like rm and rmdir over and over, once per file, in a find job like this one using -exec:

find /var/tmp -mtime +3 \( \
  \( -type f -exec rm -f {} \; \) -o \
  \( -type d -exec ..... {} \; \) \
\)

This inefficiency could be solved by replacing -exec with -print or -print0, then piping find’s output to xargs. xargs collects arguments and passes them to another program each time it has collected “enough.” But all the text from -print or -print0 goes to find’s standard output, so there’s been no easy way to tell which pathnames were from which test (which are files, which are directories…).

The new -fprintf and -fprint0 actions can solve this problem. They write a formatted string to a file you specify. For instance, the following example writes a NUL-separated list of the files from /var/tmp into the file named by $files and a list of directories into the file named by $dirs:

dirs=`mktemp`
files=`mktemp`
find /var/tmp \( \
  \( -type f -fprint0 "$files" \) -o \
  \( -type d -fprint0 "$dirs" \) \
\)

Other New Tests

The -empty test is true for an empty file or directory. (An empty file has no bytes; an empty directory has no entries.) One place this is handy is for removing empty directories while you're cleaning a filesystem. If you also use -depth, all of the files in a directory should be removed before find examines the directory itself. Then you can use an expression like the following:

find /tmp -depth \( \
  \( -mtime +3 -type f -exec rm -f {} \; \) \
  -o \( -type d -empty -exec rmdir {} \; \) \
\)

The -false “test” is always false, and -true is always true. These are a lot more efficient than the old methods (-exec false and -exec true) that execute the external Linux commands false(1) and true(1).

The -perm test has long accepted arguments like -perm 222 (which means "exactly mode 222", that is, write-only) and -perm -222 (which means "all of the write (2) mode bits are set"). Now -perm also accepts arguments starting with a plus sign, meaning "any of these bits are set." For instance, -perm +222 is true when any write bit is set.
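Note that current GNU find spells the "any of these bits" form with a slash (-perm /222); the +222 syntax described above was later deprecated. A small demonstration of the long-standing -mode form, in a scratch directory:

```shell
dir=$(mktemp -d)
touch "$dir/writable" "$dir/readonly"
chmod 644 "$dir/writable"
chmod 444 "$dir/readonly"

find "$dir" -type f -perm -200   # files whose owner-write bit is set
rm -rf "$dir"
```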

[Sep 22, 2008] How to prune multiple branches with find - UNIX for Dummies Questions & Answers - The UNIX and Linux Forums

How to prune multiple branches with find?
I wanted to prune multiple branches and only the first one is being pruned. What am I doing wrong?

I tried using "-a" instead of "-o" with no luck!

Also, is there an easier way to accommodate file names with spaces other than that sed command?
Thanks,
Siegfried

/usr/bin/find . \( -path ./Application/PRD -prune -o -path ./setup/products/PRDWeb -prune -o -path ./products/PRDWeb -prune \) -o -print | sed -e 's/^.*$/"\0"/' | xargs grep -in "PRD\\(WEB\\(UID\\|PWD\\|SERVER\\)\\|UID\\|PWD\\|USERNAME\\|PASSWORD\\|SERVER\\)"

[Sep 16, 2008] Some notes about find

Here, be careful when additionally applying a pattern match on the result of the "-prune". At least the versions since 3.8 until the stable 4.1 (and until the alpha release 4.1.5) suffer from a bug, which prevents it from working:

    $ find . ! -name . -prune -name '<pattern>' <remaining expressions>

"-name" is not only applied to the result of "-prune" but all existing entries, because it is "optimized" to the left of "-prune". This is violating the fundamental left-to-right order of evaluation.

By the way: If you negate the second "-name" then the bug is apparently not triggered anymore.
Thus, unmaking the negation by just doubling it looks like a proper workaround:

    $ find . ! -name . -prune ! \( ! -name '<pattern>' \) <remaining expressions>

(See <3D7355C4.23L11TSIK@bigfoot.de> and <3DA23EDA.LHE11XWCK@bigfoot.de>.)

What about the remaining two (incorrect) ways mentioned above?

[Sep 12, 2008] Unix-Linux find Command Tutorial

As a system administrator you can use find to locate suspicious files (e.g., world writable files, files with no valid owner and/or group, SetUID files, files with unusual permissions, sizes, names, or dates).  Here's a final more complex example (which I save as a shell script):
find / -noleaf -wholename '/proc' -prune \
     -o -wholename '/sys' -prune \
     -o -wholename '/dev' -prune \
     -o -wholename '/windows-C-Drive' -prune \
     -o -perm -2 ! -type l  ! -type s \
     ! \( -type d -perm -1000 \) -print

This says to search the whole system, skipping the directories /proc, /sys, /dev, and /windows-C-Drive (presumably a Windows partition on a dual-booted computer).  The Gnu -noleaf option tells find not to assume all remaining mounted filesystems are Unix file systems (you might have a mounted CD for instance).  The -o is the Boolean OR operator, and ! is the Boolean NOT operator (it applies to the following criterion).

... ... ...

Secondly filenames may contain spaces or newlines, which would confuse the command used with xargs.  (Again Gnu tools have options for that, find ... -print0 |xargs -0 ....)

There are POSIX (but non-obvious) solutions to both problems.  An alternate form of -exec ends with a plus-sign, not a semi-colon.  This form collects the filenames into groups or sets, and runs the command once per set.  (This is exactly what xargs does, to prevent argument lists from becoming too long for the system to handle.)  In this form the {} argument expands to the set of filenames.  For example:

find / -name core -exec /bin/rm -f '{}' +

This form of -exec can be combined with a shell feature to solve the other problem (names with spaces).  The POSIX shell allows us to use:

sh -c 'command-line' [ command-name [ args... ] ]

(We don't usually care about the command-name, so X, dummy, or inline cmd is often used.)  Here's an example of efficiently copying found files, in a POSIX-compliant way (Posted on comp.unix.shell netnews newsgroup on Oct. 28 2007 by Stephane CHAZELAS):

find . -name '*.txt' -type f \
  -exec sh -c 'exec cp -f "$@" /tmp' find-copy {} +

[Jan 08, 2008] My SysAd Blog -- UNIX Using the Common UNIX Find Command

July 07, 2007 |  esofthub.blogspot.com

Find how many directories are in a path (counts current directory)
# find . -type d -exec basename {} \; | wc -l
53

Find how many files are in a path
# find . -type f -exec basename {} \; | wc -l
120

... ... ...

Find files that were modified 7 days ago and archive
# find . -type f -mtime 7 | xargs tar -cvf `date '+%d%m%Y'_archive.tar`

Find files that were modified more than 7 days ago and archive
# find . -type f -mtime +7 | xargs tar -cvf `date '+%d%m%Y'_archive.tar`

Find files that were modified less than 7 days ago and archive
# find . -type f -mtime -7 | xargs tar -cvf `date '+%d%m%Y'_archive.tar`

Find files that were modified more than 7 days ago but less than 14 days ago and archive
# find . -type f -mtime +7 -mtime -14 | xargs tar -cvf `date '+%d%m%Y'_archive.tar`

Find files in two different directories having the "test" string and list them
# find esofthub esoft -name "*test*" -type f -ls


Find files and directories newer than CompareFile
# find . -newer CompareFile -print

Find files and directories but don't traverse a particular directory
# find . -name RAID -prune -o -print

Find all the files in the current directory
# find * -type f -print -o -type d -prune

Find files associated with an inode
# find . -inum 968746 -print
# find . -inum 968746 -exec ls -l {} \;

Find an inode and remove
# find . -inum 968746 -exec rm -i {} \;

Comment for the blog entry

ux-admin said...

Avoid using "-exec {}", as it will fork a child process for every file, wasting memory and CPU in the process. Use `xargs`, which will cleverly fit as many arguments as possible to feed to a command, and split up the number of arguments into chunks as necessary:

find . -depth -name "blabla*" -type f | xargs rm -f

Also, be as precise as possible when searching for files, as this directly affects how long one has to wait for results to come back. Most of the stuff actually only manipulates the parser rather than what is actually being searched for, but even there, we can squeeze some performance gains, for example:

- use "-depth" when looking for ordinary files and symbolic links, as "-depth" will show them before directories

- use "-depth -type f" when looking for ordinary file(s), as this speeds up the parsing and the search significantly:

find . -depth -type f -print | ...

- use "-mount" as the first argument when you know that you only want to search the current filesystem, and

- use "-local" when you want to filter out the results from remote filesystems.

Note that "-local" won't actually cause `find` not to search remote file systems -- this is one of the options that affects parsing of the results, not the actual process of locating files; for not spanning remote filesystems, use "-mount" instead:

find / -mount -depth \( -type f -o -type l \) -print ...

Josh said...
From the find(1) man page:

-exec command {} +
       This variant of the -exec option runs the specified command on the selected files, but the command line
       is built by appending each selected file name at the end; the total number of invocations of the command
       will be much less than the number of matched files. The command line is built in much the same way that
       xargs builds its command lines. Only one instance of '{}' is allowed within the command. The command is
       executed in the starting directory.

Anonymous said...
the recursive finds were useful
UX-admin said...
" Josh said...

From the find(1) man page:

-exec command {} +
       This variant of the -exec option runs the specified command on the selected files, but the command line
       is built by appending each selected file name at the end; the total number of invocations of the command
       will be much less than the number of matched files. The command line is built in much the same way that
       xargs builds its command lines. Only one instance of '{}' is allowed within the command. The command is
       executed in the starting directory."

Apparently, "-exec" seems to be implementation specific, which is another good reason to avoid using it, since it means that performance factor will differ from implementation to implementation.

My point is, by using `xargs`, one assures that the script / command will remain behaving the same across different UNIX(R) and UNIX(R) like operating systems.

If you had to choose between convenience and portability+consistency, which one would you choose?

arne said...
instead of using
find ./ -name blah
I find it better to use the case-insensitive form of -name, -iname:
find ./ -iname blah
Anonymous said...
You have to be careful when you remove things.
You say to remove files named core, but the command lacks the "-type f" test:
find . -name "core" -exec rm -f {} \;
The same goes for the example with directories named "junk": that command would delete anything named junk (files, directories, links, pipes...).

I did not know about "-mount", I've always used "-xdev".
Another nice feature, at least in linux find, is the "-exec {} \+", which will fork only once.

Does -prune work like -maxdepth in Unix find on AIX from the Ask Dave Taylor! Tech Support Blog

Does -prune work like -maxdepth in Unix "find" on AIX?

Dave, I purchased your Wicked Cool Shell Scripts book a month or so ago (great book), and have used it to "learn by example" in writing some shell scripts, as I've a long way to go in this area.

I need to rotate logs on an IBM AIX 5.1 Unix box, and tried using your script #55, rotatelogs, for this, but it didn't work, as -maxdepth is not supported in AIX's find command. So, I commented it out, and it worked, but it also rotated everything in the subdirectories as well (no problem... backed up the directory first, then restored). I am trying to get it to work using -prune, which my search in Google found to be a good fix for the lack of -maxdepth, but it's not doing what I want. Help!

Dave's Answer:

Hmmm... I don't have access to an AIX Unix box, but are you sure that they don't have a copy of GNU find tucked away somewhere on the box? Try using something like find / -name "find" -print as root to see if there's more than one copy of the app on the system.

I'm not sure that -prune is what you want, though. Here's what the find man page on my system says about it:

-prune   This primary always evaluates to true. It causes find to not descend into the current file. Note, the -prune primary has no effect if the -d option was specified.

Is this a viable replacement for -maxdepth?

When I run it the output isn't useful:

$ find . -prune -print
.
$
Hmmmm.... replacing the "." with a "*" proves interesting (yes, I'm making the find more interesting too, just matching files that are non-zero in size):
$ find * -prune -type f -size +0c -print
African Singing.aif
Branding for Writers.doc
GYBGCH12.doc
KF BPlan-04-1123.doc
Parent Night.aif
Rahima Keynote.aif
lumxtiger_outline final.doc
master-adwords.pdf
$
Maybe that's what you need (ignore the specific files I have. You can see what I'm working on as this is my desktop. ;-)

Try what I suggested, see what kind of results you get!

Some examples of using the Unix find command.

How to apply a complex selection of files (-o and -a).

find /usr/src -not \( -name "*,v" -o -name ".*,v" \) -print

This command searches the /usr/src directory and all subdirectories. All files of the form '*,v' and '.*,v' are excluded. Important arguments to note are:

The above example shows how to select all files that are not part of the RCS system. This is important when you want to go through a source tree and modify all the source files, but you don't want to affect the RCS version-control files.

[Dec 4, 2007] xargs,  find and several useful shortcuts

See also Unix Xargs  page.

Re:pushd and popd (and other tricks) (Score:2)
by Ramses0 (63476) on Wednesday March 10, @07:39PM (#8527252)

My favorite "Nifty" was when I spent the time to learn about "xargs" (I pronounce it zargs), and brush up on "for" syntax.

    ls | xargs -n 1 echo "ZZZ> "

Basically indents (prefixes) everything with a "ZZZ" string. Not really useful, right? But since it invokes the echo command (or whatever command you specify) $n times (where $n is the number of lines passed to it) this saves me from having to write a lot of crappy little shell scripts sometimes.

A more serious example is:

    find -name \*.jsp | sed 's|^|http://127.0.0.1/server|' | xargs -n 1 wget

...will find all your JSPs, map them to your localhost webserver, and invoke a wget (fetch) on them. Voila, precompiled JSPs.

Another:

    for f in `find -name \*.jsp` ; do echo "==> $f" >> out.txt ; grep "TODO" $f >> out.txt ; done

...this searches JSP's for "TODO" lines and appends them all to a file with a header showing what file they came from (yeah, I know grep can do this, but it's an example. What if grep couldn't?) ...and finally...

( echo "These were the command line params"
    echo "---------"
    for f in "$@" ; do
          echo "Param: $f"
    done ) | mail -s "List" you@you.com

...the parentheses let you build up lists of things (like interestingly formatted text), and the output gets returned as a chunk, ready to be passed on to some other shell processing function.

Shell scripting has saved me a lot of time in my life, which I am grateful for. :^)

[May 16, 2007] Advanced techniques for using the UNIX find command by Bill Zimmerly

Mar 28, 2006 | Developerworks
Use find creatively

You can perform myriad tasks with the find command. This section provides some examples of ways you can put find to work as you manage your file system.

To keep things simple, these examples avoid -exec commands that involve the piping of output from one command to another. However, you're free to use commands like these in a find's -exec clause.

Clean out temporary files

You can use find to clean directories and subdirectories of the temporary files generated during normal use, thereby saving disk space. To do so, use the following command:

$ find . \( -name a.out -o -name '*.o' -o -name 'core' \) -exec rm {} \;

File masks identifying the file types to be removed are located between the parentheses; each file mask is preceded by -name. This list can be extended to include any temporary file types you can come up with that need to be cleaned off the system. In the course of compiling and linking code, programmers and their tools generate file types like those shown in the example: a.out, *.o, and core. Other users have similar commonly generated temporary files and can edit the command accordingly, using file masks like *.tmp, *.junk, and so on. You might also find it useful to put the command into a script called clean, which you can execute whenever you need to clean a directory.
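A minimal version of such a clean script might look like this (a sketch: the file masks are the ones from the example above, and the optional directory argument is an addition):

```shell
#!/bin/sh
# clean -- remove common temporary/build files below a directory (default: .)
dir=${1:-.}
find "$dir" \( -name a.out -o -name '*.o' -o -name core \) \
    -type f -exec rm -f {} \;
```

Saved as clean and made executable (chmod +x clean), it can be run as ./clean or ./clean src.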

Copy a directory's contents

The find command lets you copy the entire contents of a directory while preserving the permissions, times, and ownership of every file and subdirectory. To do so, combine find and the cpio command, like this:

Listing 2. Combining the find and cpio command

$ cd /path/to/source/dir

$ find . | cpio -pdumv /path/to/destination/dir

The cpio command is a copy command designed to copy files into and out of a cpio or tar archive, automatically preserving permissions, times, and ownership of files and subdirectories.

List the first lines of text files

Some people use the first line of every text file as a heading or description of the file's contents. A report that lists the filenames and first line of each text file can make sifting through several hundred text files a lot easier. The following command lists the first line in every text file in your home directory in a report, ready to be examined at your leisure with the less command:

Listing 3. The less command

$ find $HOME/. -name '*.txt' -exec head -n 1 -v {} \; > report.txt

$ less < report.txt

Maintain LOG and TMP file storage spaces

To maintain LOG and TMP file storage space for applications that generate a lot of these files, you can put the following commands into a cron job that runs daily:


Listing 4. Maintaining LOG and TMP file storage spaces
$ find $LOGDIR -type d -mtime +0 -exec compress -r {} \;

$ find $LOGDIR -type d -mtime +5 -exec rm -rf {} \;

The first command finds all the directories (-type d) under $LOGDIR whose data was last modified more than 24 hours ago (-mtime +0) and compresses the files in them (compress -r {}) to save disk space. The second command deletes directories (rm -rf {}) that are more than a work week old (-mtime +5), to increase the free space on the disk. In this way, the cron job automatically keeps the directories for a window of time that you specify.
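Run from cron, that pair of commands might look like the fragment below. The schedule, the log directory, and the use of the root account are illustrative assumptions; cron does not inherit your interactive shell's environment, so $LOGDIR is written out literally:

```shell
# /etc/crontab fragment (illustrative): compress day-old log directories
# at 02:30, remove week-old ones at 02:45
30 2 * * * root find /var/log/myapp -type d -mtime +0 -exec compress -r {} \;
45 2 * * * root find /var/log/myapp -type d -mtime +5 -exec rm -rf {} \;
```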

Copy complex directory trees

If you want to copy complex directory trees from one place to another while preserving permissions and the User ID and Group ID (UID and GID -- numbers used by the operating system to mark files for ownership purposes), and leaving user files alone, find and cpio once again come to the rescue:

Listing 5. Copying complex directory trees

$ cd /source/directory

$ find . -depth -print | cpio -pdum /target/directory

Find links that point to nothing

To find links that point to nothing, use the perl interpreter with find, like this:

$ find / -type l -print | perl -nle '-e || print';

This command starts at the topmost directory (/) and lists all links (-type l -print) that the perl interpreter determines point to nothing (-nle '-e || print') -- see the Resources section for more information regarding this tip from the Unix Guru Universe site. You can further pipe the output through the rm -f {} functionality if you want to delete the files. Perl is, of course, one of the many powerful interpretive language tools also found in most UNIX toolkits.
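With GNU find, the same thing can be done without perl: the -xtype l test is true for symbolic links whose target is missing (-xtype is a GNU extension, so treat this as an alternative sketch rather than a portable replacement):

```shell
dir=$(mktemp -d)
ln -s /nonexistent/target "$dir/dangling"   # broken link
ln -s /tmp "$dir/good"                      # working link

find "$dir" -xtype l    # prints only .../dangling
rm -rf "$dir"
```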

Locate and rename unprintable directories

It's possible in UNIX for an errant or malicious program to create a directory with unprintable characters. Locating and renaming these directories makes it easier to examine and remove them. To do so, you first include the -i switch of ls to get the directory's inode number. Then, use find to turn the inode number into a filename that can be renamed with the mv command:

Listing 6. Locating and renaming unprintable directories

$ find . -inum 211028 -exec mv {} newname.dir \;


List zero-length files

To list all zero-length files, use this command:

$ find . -empty -exec ls {} \;

After finding empty files, you might choose to delete them by replacing the ls command with the rm command.

Clearly, your use of the UNIX find command is limited only by your knowledge and creativity.

[Feb 9, 2007] Letter to the editor

January 24, 2007 (USA) You have a page with this almost useful command to find ascii files and grep them (omitting binaries, etc):

find . -type f -print | xargs file | grep -i text | cut -f1 -d: | xargs egrep "$*"

It works fine, until you go through files like Perl modules which may have some colons in the name. I submit this minor change and introduce another command to your future readers:

find . -type f -print | xargs file | grep -i text | cut -f1 |sed 's/:$//g' | xargs egrep "$*"

It cuts at the tab instead of colon, and then strips any colons out that end the lines without impacting colons in the filename itself. Sneaky, hey?

Kevin
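If the goal is simply grepping text files while skipping binaries, GNU grep can replace the whole pipeline: its -I flag treats binary files as non-matching (a GNU extension; the find/file/cut route above remains the portable one, and it copes with the colon problem Kevin describes):

```shell
# GNU grep: -r recurses, -I skips binary files entirely
grep -rI "some pattern" .
```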

find and xargs

 

find can perform a number of actions on the file(s) it finds.


[Aug 23, 2006] A Couple Quick find Tips

* Search and list all files from current directory and down for the string ABC:
find ./ -name "*" -exec grep -H ABC {} \;
find ./ -type f -print | xargs grep -H "ABC" /dev/null
egrep -r ABC *
* Find all files of a given type from current directory on down:
find ./ -name "*.conf" -print
* Find all user files larger than 5Mb:
find /home -size +5000000c -print
* Find all files owned by a user (identified by user ID number; see /etc/passwd) on the system: (this could take a very long time)
find / -user 501 -print
* Find all files created or updated in the last five minutes: (Great for finding effects of make install)
find / -cmin -5
* Find all files in group 20 and change them to group 102: (execute as root)
find / -group 20 -exec chown :102 {} \;
* Find all suid and setgid executables:
find / \( -perm -4000 -o -perm -2000 \) -type f -exec ls -ldb {} \;
find / -type f -perm +6000 -ls

Note: setuid executable binaries are programs which run with root privileges to perform their tasks. They are created by setting the setuid bit: chmod u+s. These programs should be watched, as they are often the first point of entry for hackers. Thus it is prudent to run this command and remove the setuid bit from executables which either won't be used or are not required by users: chmod u-s filename
* Find all world writable directories:
find / -perm -0002 -type d -print
* Find all world writable files:
find / -perm -0002 -type f -print
find / -perm -2 ! -type l -ls
* Find files with no owner or no group:
find / -nouser -o -nogroup -print
* Find all files modified or changed in the last two days:
find / -mtime -2 -o -ctime -2
* Compare two drives to see if all files are identical:
find / -path /proc -prune -o -path /new-disk -prune -o -xtype f -exec cmp {} /new-disk{} \;
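A portability note on the setuid/setgid tip above: current GNU findutils has dropped the -perm +6000 spelling; the equivalent "any of these bits set" test is now written -perm /6000. A sketch:

```shell
# Modern GNU find: / means "any of the listed permission bits"
find / -type f -perm /6000 -ls 2>/dev/null
```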

[Jul 11, 2006] Linux.com CLI Magic Searching with find

Another useful test is -size, which lets you specify how big the files should be to match. You can specify your size in kilobytes and optionally also use + or - to specify greater than or less than. For example:
 find /home -name "*.txt" -size 100k 
 find /home -name "*.txt" -size +100k 
 find /home -name "*.txt" -size -100k 

The first brings up files of exactly 100KB, the second only files greater than 100KB, and the last only files less than 100KB.

What's New in the Solaris 9 4-04 Operating Environment

See Freeware Enhancements for information about GNU grep 2.4.2, GNU tar 1.13, GNU wget 1.6, and Ncftp Client 3.0.3  in the Solaris 9 release.

Tricks with find by Sandra Henry-Stocker

The two most common options used with find are, not surprisingly, -print and -ls. The -print option prints the name of the file from the point of view of the current directory (e.g., /export/home/sbob/junk.txt or just ./sbob/junk.txt if the search is started in /export/home). The -ls option provides the same type of information that you would see if you viewed the files listings with ls -li.

-exec

The -exec option is undoubtedly the next most popular option and is used to specify what should be done with the files that are found. When using the -exec option, the name of each file located by find is represented in the find command with the string "{}" (without the quotes). The command to print the number of lines in each of the files located by find would look like this:

boson> find . -type f -exec wc -l {} \;

The \; marks the end of the command and, running this command, you might see something like this:

      74 ./04-28-03_CPU_bottleneck.txt
      12 ./testme
       6 ./testme2
       5 ./ix

A command to look for particular strings and print the strings and the names of the files in which they would be found might look like this:

find . -type f -exec grep "before" {} \; -print

The -print in this command is placed at the end so that a file name is printed only when the preceding grep finds the string.
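Because find's implicit AND short-circuits, this ordering is easy to verify: with grep -q the match itself stays quiet and only the names of matching files appear. A small sketch (filenames are illustrative):

```shell
# -print fires only for files where the preceding grep matched
find . -type f -exec grep -q "before" {} \; -print
```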

Changing Files with Find

To modify the content of files that are located with find, you might use sed or you might use an in-place Perl command such as this command that runs through all HTML files in or under the current directory to change one URL to another:

find . -type f -name "*\.html" -exec perl  -i -p -e \
  's/www.before.org/www.after.com/g;' {} \;

If you frequently have to change links on a large web site, a command such as this can save you a lot of time. If you want to provide this command to someone not likely to remember the find command, you can insert it into a script like this:

#!/bin/bash

if [ $# != 2 ]; then
    echo "usage: $0 before-pattern after-pattern"
    exit 1
else
    BEFORE=$1
    AFTER=$2
fi

find . -type f -name "*\.html" -exec perl -i -p -e \
  "s/$BEFORE/$AFTER/g;" {} \;

Doing Lots with Changed Files

If you want to run a number of commands on each file that you find, there's a way to do that as well, and it doesn't require the -exec option. Instead, pipe the output of your normal find command into a while loop like the one below, and run as many commands as you want between the do and done markers.

> find . -type f -print | while read i
do
    file $i
    ls -i $i
    wc -l $i
done

In this loop, your file name is assigned to $i. This command should work with shells in the Bourne shell family (sh, ksh, bash and so on).
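The plain read loop breaks on filenames containing spaces or newlines. With GNU find and bash, NUL-terminated names fix this; a sketch:

```shell
# -print0 emits NUL-terminated names; read -d '' consumes them
find . -type f -print0 | while IFS= read -r -d '' i
do
    wc -l "$i"
done
```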

A lot of work can be accomplished with a cleverly constructed find command.

Some of the ideas in this column were provided by Robert D. and Chang A.

Date: Sat, 14 Sep 1996 19:50:55 -0400 (EDT)
From: Bill Duncan <bduncan@beachnet.org>
Subject: find tip...

Hi Jim Murphy,

Saw your "find" tip in issue #9, and thought you might like a quicker method. I don't know about other distributions, but Slackware and Redhat come with the GNU versions of locate(1) and updatedb(1) which use an index to find the files you want. The updatedb(1) program should be run once a night from the crontab facility. To ignore certain sub-directories (like your /cdrom) use the following syntax for the crontab file:

 
41 5 * * *  updatedb --prunepaths="/tmp /var /proc /cdrom" > /dev/null 2>&1

This would run every morning at 5:41am, and update the database with filenames from everywhere but the subdirectories (and those below) the ones listed.

To locate a file, just type "locate filename". The filename can also do partial matching. The search only takes a few seconds typically, and I have tens of thousands of files.

The locate(1) command also has regular expression matching, but I often just pipe it through agrep(1) (a faster grep) to narrow the search if I want. Thus:

   locate locate | agrep -v man

..would exclude the manpage for example, and only show me the binary and perhaps the sources if I had them online. (The -v flag excludes the pattern used as an argument.) Or the binary alone along with a complete directory listing of it with the following command:

 ls -l `locate locate | agrep bin`

The find(1) command is a great "swiss-army knife" (and actually not that bad once you get used to it), but for the 90% of the cases where you just want to search by filename, the locate(1) command is *far* faster, and much easier to use.

The find command is much more powerful than locate  and can be given extensive options to modify the search criteria. Unlike locate, find  actually searches the disk (local or remote); thus, it is much slower but provides the most up-to-date information. The basic syntax is as follows:

find directory [options]

The most basic usage of find  is to print the files in a directory and its subdirectories:

find directory -print

After learning about the find  command, many new users quickly implement an alias or function as a replacement for locate:

find / -print | grep $1

Generally, this is a bad idea because most systems have network drives mounted, and find  tries to access them, causing not only the local machine to slow down, but also remote machines. To exclude other filesystems during a find run, try using the -xdev  option:

# find / -xdev -name Makefile -print

This searches for any file named Makefile, starting at the root directory, but won't descend into other mounted or network-mounted filesystems, such as NFS (possibly mounted under the /mnt directory). GNU find expects -xdev to precede tests such as -name and warns otherwise, so it is given first. Here's how to get locate-like output from find:

find directories -name name -print

For example, use this line to find all makefiles in /usr/src/:

# find /usr/src -name "[mM]akefile" -print

The -name  option accepts all shell metacharacters. You must use quotation marks to prevent your shell from expanding wildcard characters (such as ?  or *) before find sees them. An alternative to the preceding method for finding all files named Makefile  or makefile  is to use the case-insensitive -iname  option instead of -name. For the truly adventurous, find  also supports a -regex  option.
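A quick illustration of -iname (a GNU/BSD extension, not present in every POSIX find):

```shell
# Matches Makefile, makefile, MAKEFILE, ... but not GNUmakefile
find /usr/src -iname "makefile" -print
```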

In addition to specifying which filenames to find, find  can be told to look at files of a specific size, type, owner, or permissions.

To find a file by size, use the following option, where n  is the size:

-size n[bckw]

The letters stand for:

b   512-byte blocks (default)
c   Bytes
k   Kilobytes (1,024 bytes)
w   2-byte words

For example, use the following to find all files in /usr  larger than 100KB:

# find /usr -size +100k

To find files by type, use the following option:

-type x

x  is one of the following letters:

b   Block (buffered) special
c   Character (unbuffered) special
d   Directory
p   Named pipe (FIFO)
f   Regular file
l   Symbolic link
s   Socket

Therefore, use the following to find all symbolic links in /tmp:

# find /tmp -type l

In addition to simply printing out the filename, find  can be told to print file information by specifying the -ls  option. For example, the following command produces this output:

# find /var -name "log" -ls
18433   1 drwxr-xr-x   7 root  root  1024 Apr 15 11:40 /var/log
391463  3 -rw-r--r--   1 news  news  2632 Apr 16 04:05 /var/spool/slrnpull/log

The output is similar in form to the output from ls -il.

The last option of interest for find  is the -exec  option, which allows for the execution of a command on each filename that matches the previous criteria. The basic syntax of the -exec  option is this:

-exec [command [options]] '{}' ';'

The -exec  option uses the string '{}'  and replaces it with the name of the current file that was matched. The ';'  string is used to tell find  where the end of the executed command is. For example, the following command line searches each C source code file under the /usr/src directory for the word Penguin, then prints the filename and line number in the file:

# find /usr/src -name "*.c" -exec fgrep --with-filename Penguin '{}' ';'
/usr/src/linux-2.2.2/scripts/ksymoops/oops.c:1087:   "|(Penguin)"
/usr/src/linux-2.2.2/scripts/ksymoops/oops.c:1088:   "|(Too many Penguin)"

Although you can use the find  command to search for strings of text in documents (useful for searching the HOWTOs under the /usr/doc directory), a better use for find is in automating system administration tasks. For example, to regularly search your system for core dumps, you can combine find  and the rm  command like this:

find / -xdev -name core -exec rm {} ';'

This find  command line searches the local filesystem and deletes any core files it finds; it can be run from the system's crontab  to recover the disk space used by core dumps.
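Two small refinements worth considering (both assume GNU find): -type f prevents a directory that happens to be named core from matching, and -delete removes matches without forking rm for each file. A sketch:

```shell
# Delete regular files named core on the local filesystem only
find / -xdev -type f -name core -delete
```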


Man pages

Solaris 10 find manpage

Recommended Links

Softpanorama Top Visited

Softpanorama Recommended

See also Softpanorama Find Mini Tutorial

Find - Wikipedia, the free encyclopedia

Advanced techniques for using the UNIX find command

Finding Files - Table of Contents by David MacKenzie Edition 1.1, for GNU find version 4.1 November 1994

Find tutorial Copyright 2001 Bruce Barnett and General Electric Company

Unix-Linux find Command Tutorial

O'Reilly Network / Finding Things in Unix

One of the most useful utilities to be found on any Unix system is the find command. In the next two articles, I'd like to walk you through the syntax of this command and provide you with some practical examples of its usage.

ONLamp.com: Find: Part Two [Mar. 14, 2002]

About.com: Power Commands- Find

How do I use the solaris find command - ECN Knowledge Base @ Purdue

The Shell Corner Using the Unix find Command

find . -type f -print | xargs file | grep -i text | cut -f1 -d: | xargs egrep "$*"

Ed. note: The Windows MKS toolkit K-shell file command identifies Word files as ASCII text with control characters. To eliminate these files from the search, I excluded any resulting files containing the keyword "control":

find . -type f -print | xargs file | grep -i text | grep -v control | cut -f1 -d: | xargs egrep "$*"

# The above one-liner can easily be modified for other kinds of operations on text files.

#
# Consider the need to replace all instances of Linux with GNU/Linux....
#
find . -type f -print | xargs file | grep -i text | cut -f1 -d: | while read file; do
vi $file >/dev/null 2>&1 <<!
:%s,Linux,GNU/Linux,g
:wq
!
done

find a Particular User or Permission

# A common use for find is locating files belonging to a particular user...
 
# This finds all the files in the current hierarchy belonging to olded
find . -user olded -print
# or with special permissions... 
# This finds all root owned suid files.
find . -user root -a -perm -4000 -print

find and Backup Files

# Use find to perform an incremental backup... consider:
find ./ -newer bckup.out -xdev -type f -print | cpio -ocvB >/dev/tape 2>bckup.out
# The -newer option only finds files newer than bckup.out... that is, only 
# files that have been modified since the last backup. This makes the 
# command useful for doing incremental backups.
# Assuming you are root, and you do this from the / directory, this 
# should back up everything on /. The / is preceded with a "." to 
# make sure that we can restore the tape to any directory... not just /.
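A variation on the same idea uses a dedicated timestamp file instead of the log's own mtime, so redirecting stderr elsewhere doesn't break the scheme (paths here are illustrative):

```shell
# List files changed since the last run, then refresh the stamp
find . -newer /var/tmp/backup.stamp -xdev -type f -print
touch /var/tmp/backup.stamp
```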

Some examples of using Unix find command.

find examples

Linux and UNIX find command help

Learn Unix The find command

Find and backup

Find can work with cpio; some versions of find support the -cpio and -ncpio options:
find . -depth -print | cpio -oB >/dev/rmt0
find . -depth -print | cpio -ocB >/dev/rmt0

which do the same as

find . -depth -cpio >/dev/rmt0
find . -depth -ncpio >/dev/rmt0




Etc

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.



Copyright © 1996-2014 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Site uses AdSense so you need to be aware of Google privacy policy. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.



Disclaimer:

The statements, views and opinions presented on this web page are those of the author and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: February 19, 2014