Softpanorama

May the source be with you, but remember the KISS principle ;-)

Unix Sysadmin Tips

News Enterprise Unix System Administration Recommended Links Unix System Monitoring Job schedulers Unix Configuration Management Tools Perl Admin Tools and Scripts Baseliners
Bash Tips and Tricks WinSCP Tips Attaching to and detaching from screen sessions Midnight Commander Tips and Tricks Linux networking tips RHEL Tips Suse Tips
Filesystems tips Shell Tips How to rename files with special characters in names VIM Tips GNU Tar Tips GNU Screen Tips AWK Tips Linux Start up and Run Levels
Unix System Monitoring Job schedulers  Grub Simple Unix Backup Tools  Sysadmin Horror Stories History Humor Etc

Lazy Linux: 10 essential tricks for admins, by Vallard Benincosa, Certified Technical Sales Specialist, IBM

20 Jul 2008 | IBM DeveloperWorks

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time—and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You can see that a process is running and, indeed, it is our own loop that keeps us from ejecting the disk.
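
If you want to see which user and command are holding the mount, fuser's -v flag prints them alongside the PID. The output below is only illustrative; your PID and command will differ:

# fuser -v /media/cdrom
                     USER        PID ACCESS COMMAND
/media/cdrom:        root       3276 ..c..  bash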

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait, you say: typing reset is too close to typing reboot or shutdown. Your palms start to sweat—especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type Ctrl-A D. (That is, hold down the Ctrl key and strike the A key, then push the D key.)

You can then reattach by running the screen -x foo command again.
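
If you've forgotten the session name, screen can list what is running (the session ID below is illustrative):

# screen -ls
There is a screen on:
        7890.foo        (Detached)
1 Socket in /var/run/screen/S-root.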

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you set a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Press an arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


Figure 3. Append the argument with the number 1

Then press Enter to accept the change and B to boot; the kernel will come up in single-user mode. Once there, you can run the passwd command to change the password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.
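
For reference, the edited line ends up looking something like the following. The kernel version and root device here are hypothetical; the only change you make is the trailing 1:

kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1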

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come in to you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that the tunneled traffic is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.

     
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy, and then minimize the window. (A tidier keep-alive alternative is sketched after this list.)

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$ ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

     
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
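
As an aside, instead of the while loop in step 2, you can ask ssh itself to hold the tunnel open without running a remote shell. This is a sketch using standard OpenSSH options with the hosts from this example:

~# ssh -N -o ServerAliveInterval=60 -R 2222:localhost:22 thedude@blackbox.example.com

The -N flag tells ssh not to execute a remote command, and ServerAliveInterval makes the client send periodic keep-alives so an idle tunnel isn't dropped.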
Trick 6: Remote VNC session through an SSH tunnel

VNC, or Virtual Network Computing, has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are on a really slow connection, a depth of 8 may be a better option. Using :99 specifies the display number the VNC server runs on; VNC ports start at 5900, so display :99 means the server is accessible from port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L, which instead of pushing port 5999 to blackbox, pulls it from blackbox. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports via the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. PuTTY can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of 125MBps. Where does that number come from? Well,

1Gb = 1000Mb; 1000Mb/8 = 125MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user that is viewable on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1); done >> /etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
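
To double-check the result, you can ask the shell to flag any IP address or host name that appears more than once; these one-liners print nothing when the file is clean (the leading pattern skips comment lines):

# awk '!/^#/ {print $1}' /etc/hosts | sort | uniq -d
# awk '!/^#/ {print $2}' /etc/hosts | sort | uniq -d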

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}'; done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to: report memory in megabytes with a total line (free -tm), grab the line containing "Mem" (grep Mem), and print the second column, the total memory (awk '{print $2}').

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped (|'d) to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases: a single line of output if every node reports the same amount of memory, or several lines if the memory sizes differ across nodes.

This command isn't perfect. If you find a memory value different from what you expect, you won't know which node it came from or how many nodes reported it. Another command may need to be issued for that.
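
A follow-up that does identify the stragglers is to print each host name next to its value; the same passwordless-SSH assumption applies:

# for num in $(seq -w 200); do echo "n$num $(ssh n$num free -tm | awk '/Mem:/ {print $2}')"; done

You can pipe that through grep -v with the expected value to list only the nodes that differ.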

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. That is its real value: speed to do a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up in your SSH session. The vcs devices let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, and so on. If a user is typing on the remote system, you'll be able to see what he typed.
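
To sweep several consoles at once, a short loop works; this assumes the first six virtual consoles exist, which is typical:

# for n in 1 2 3 4 5 6; do echo "=== console $n ==="; cat /dev/vcs$n; done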

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.
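
For example, to pull out just the processor model, deduplicated across cores:

# grep "model name" /proc/cpuinfo | sort -u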

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. You can get this with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, the output isn't easy to grep through, so piping it to less is the practical way to read it. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.
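
As an aside, reasonably recent versions of dmidecode can also filter by section, which saves scrolling through less; check your man page, since option support varies by version:

# dmidecode -t bios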

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

  Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.
 

Old News ;-)

[Oct 27, 2017] Neat trick of using su command for killing all processes for a particular user

Oct 27, 2017 | unix.stackexchange.com

If you pass -1 as the process ID argument to either the kill shell command or the kill C function, then the signal is sent to all the processes it can reach, which in practice means all the processes of the user running the kill command or syscall.

su -c 'kill -TERM -1' bob

In C (error checking omitted):

#include <signal.h>
#include <unistd.h>

if (fork() == 0) {
    setuid(uid);               /* become the target user */
    signal(SIGTERM, SIG_DFL);  /* make sure the child doesn't ignore its own signal */
    kill(-1, SIGTERM);         /* signal every process this UID can reach */
}

[Oct 27, 2017] c - How do I kill all a user's processes using their UID - Unix Linux Stack Exchange

Oct 27, 2017 | unix.stackexchange.com

osgx, Aug 4, 2011 at 10:07

Use pkill -U UID or pkill -u UID, or a username instead of a UID. Sometimes skill -u USERNAME may work; another tool is killall -u USERNAME.

skill was Linux-specific and is now outdated; pkill is more portable (Linux, Solaris, BSD).

pkill allows both numeric and symbolic UIDs, effective and real: http://man7.org/linux/man-pages/man1/pkill.1.html

pkill - ... signal processes based on name and other attributes

    -u, --euid euid,...
         Only match processes whose effective user ID is listed.
         Either the numerical or symbolical value may be used.
    -U, --uid uid,...
         Only match processes whose real user ID is listed.  Either the
         numerical or symbolical value may be used.

The man page of skill says it accepts only a username, not a user ID: http://man7.org/linux/man-pages/man1/skill.1.html

skill, snice ... These tools are obsolete and unportable. The command syntax is poorly defined. Consider using the killall, pkill

  -u, --user user
         The next expression is a username.

killall is not marked as outdated in Linux, but it also will not work with a numeric UID, only a username: http://man7.org/linux/man-pages/man1/killall.1.html

killall - kill processes by name

   -u, --user
         Kill only processes the specified user owns.  Command names
         are optional.

I think any utility that finds processes via the Linux/Solaris-style /proc (procfs) will scan the full list of processes (doing a readdir of /proc), iterating over the numeric subdirectories and checking every process found for a match.

To get the list of users, use getpwent (it returns one user per call).

The skill (procps & procps-ng) and killall (psmisc) tools both use the getpwnam library call to parse the argument of the -u option, so only a username is accepted. pkill (procps & procps-ng) uses both atol and getpwnam to parse the -u / -U argument, and so allows both numeric and textual user specifiers.

pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux specifically. – Lars Wirzenius Aug 4 '11 at 10:11

to get the list of users use the one liner: getent passwd | awk -F: '{print $1}' – Petesh Aug 4 '11 at 10:58

what about I give a command like: "kill -ju UID" from C system() call? – user489152 Aug 4 '11 at 12:07

is it an embedded linux? you have no skill, pkill and killall? Even busybox embedded shell has pkill and killall. – osgx Aug 4 '11 at 15:01

killall -u USERNAME worked like charm – michalzuber Apr 23 '15 at 7:47

[Feb 20, 2017] How to take screenshots on Linux using Scrot

Feb 20, 2017 | www.howtoforge.com

If you are looking for an even better command-line utility for taking screenshots, then you must give Scrot a try. This tool has some extra features that are currently not available in gnome-screenshot. In this tutorial, we will explain Scrot using easy-to-understand examples.

Scrot (SCReenshOT) is a screenshot-capturing utility that uses the imlib2 library to acquire and save images. Developed by Tom Gilbert, it's written in the C programming language and is licensed under the BSD License.

[Feb 20, 2017] Lynis: yet another Linux hardening tool written in shell

It would be interesting to see how long the package stays in active maintenance. It is written in shell (old-style coding like $(aaa) for variables) and is a pretty large package. A tarball is available from the site. The RPM can be tricky to install on some distributions, as it has dependencies; just downloading it is not enough.
Software packages are available via https://packages.cisofy.com. Requirements: shell and basic utilities.
For CentOS, RHEL and similar flavors an RPM is available from EPEL: download.fedora.redhat.com/pub/fedora/epel/6/x86_64/lynis-2.4.0-1.el6.noarch.rpm
OpenSuse also has an RPM.

Feb 20, 2017 | cisofy.com

sudo lynis

[ Lynis 2.4.0 ]

################################################################################
  Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
  welcome to redistribute it under the terms of the GNU General Public License.
  See the LICENSE file for details about using this software.

  2007-2016, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)
################################################################################

[+] Initializing program
------------------------------------
  Usage: lynis command [options]

  Command:

    audit
        audit system                  : Perform local security scan
        audit system remote           : Remote security scan
        audit dockerfile              : Analyze Dockerfile

    show
        show                          : Show all commands
        show version                  : Show Lynis version
        show help                     : Show help

    update
        update info                   : Show update details
        update release                : Update Lynis release

  Options:

    --no-log                          : Don't create a log file
    --pentest                         : Non-privileged scan (useful for pentest)
    --profile                : Scan the system with the given profile file
    --quick (-Q)                      : Quick mode, don't wait for user input

    Layout options
    --no-colors                       : Don't use colors in output
    --quiet (-q)                      : No output
    --reverse-colors                  : Optimize color display for light backgrounds

    Misc options
    --debug                           : Debug logging to screen
    --view-manpage (--man)            : View man page
    --verbose                         : Show more details on screen
    --version (-V)                    : Display version number and quit

    Enterprise options
    --plugin-dir ""             : Define path of available plugins
    --upload                          : Upload data to central node

    More options available. Run '/usr/sbin/lynis show options', or use the man page.

  No command provided. Exiting..

[Feb 19, 2017] How to change the hostname on CentOS and Ubuntu

Feb 19, 2017 | www.rosehosting.com
To change the hostname on your CentOS or Ubuntu machine you should run the following command:
# hostnamectl set-hostname virtual.server.com
For more command options you can add the --help flag at the end.
# hostnamectl --help
hostnamectl [OPTIONS...] COMMAND ...

Query or change system hostname.

  -h --help              Show this help
     --version           Show package version
     --no-ask-password   Do not prompt for password
  -H --host=[USER@]HOST  Operate on remote host
  -M --machine=CONTAINER Operate on local container
     --transient         Only set transient hostname
     --static            Only set static hostname
     --pretty            Only set pretty hostname

Commands:
  status                 Show current hostname settings
  set-hostname NAME      Set system hostname
  set-icon-name NAME     Set icon name for host
  set-chassis NAME       Set chassis type for host
  set-deployment NAME    Set deployment environment for host
  set-location NAME      Set location for host

[Feb 19, 2017] Trash-cli A Command Line Trashcan For Unix-like Systems - OSTechNix

Feb 19, 2017 | www.ostechnix.com
Trash-cli supports the following functions: trashing files (trash-put), listing trashed files (trash-list), restoring them (trash-restore), and emptying the trashcan (trash-empty).

[Feb 15, 2017] Web proxy, NAS and email server installed as appliance

Feb 15, 2017 | www.cyberciti.biz
Operating system : Linux

Purpose : Turn normal server into appliances

Download url : artica.fr

Artica Tech offers a powerful but easy-to-use Enterprise-Class Web Security and Control solution, usually the preserve of large companies. Prices starting at 99€ / year for 5 users.

[Feb 15, 2017] Synkron – Folder synchronisation

synkron.sourceforge.net

Folder synchronisation

Synkron is an application that helps you keep your files and folders always updated. You can easily sync your documents, music or pictures to have their latest versions everywhere.

Synkron provides an easy-to-use interface and a lot of features. Moreover, it is free and cross-platform.

Features

[Feb 15, 2017] ack: A grep-like tool for programmers

Feb 15, 2017 | www.cyberciti.biz
Ack is a grep-like tool, optimized for programmers. It isn't aimed at searching all text files; it is specifically created to search source code trees, not trees of text files. It searches entire trees by default while ignoring Subversion, Git and other VCS directories and other files that aren't your source code.

Operating system : Cross-platform
Purpose : Search source trees
Download url : beyondgrep.com

[Feb 15, 2017] 15 Greatest Open Source Terminal Applications Of 2012

Dec 11, 2012 | www.cyberciti.biz
Last updated January 7, 2013 in Command Line Hacks, Open Source, Web Developer

Linux on the desktop is making great progress. However, the real beauty of Linux and Unix like operating system lies beneath the surface at the command prompt. nixCraft picks his best open source terminal applications of 2012.

Most of the following tools are packaged by all major Linux distributions and can be installed on *BSD or Apple OS X.

#3: ngrep – Network grep

Fig.02: ngrep in action
Ngrep is a network packet analyzer. It follows most of GNU grep's common features, applying them to the network layer. Ngrep is not related to tcpdump; it is just an easy-to-use tool. You can run queries such as:

## grep all HTTP GET or POST requests from network traffic on eth0 interface ##
sudo ngrep -l -q -d eth0 "^GET |^POST " tcp and port 80

I often use this tool to find security-related problems and to track down other network and server issues.

... ... ...

#5: dtrx

dtrx is an acronym for "Do The Right Extraction." It's a tool for Unix-like systems that takes all the hassle out of extracting archives. As a sysadmin, I download lots of source code and tarballs; this tool saves a lot of time.

#6: dstat – Versatile resource statistics tool

Fig.05: dstat in action
As a sysadmin, I depend heavily upon tools such as vmstat, iostat and friends for troubleshooting server issues. Dstat overcomes some of the limitations of vmstat and friends and adds some extra features. It allows me to view all of my system resources instantly. I can compare disk usage in combination with interrupts from the hard disk controller, or compare the network bandwidth numbers directly with the disk throughput, and much more.

... ... ..

#8: mtr – Traceroute+ping in a single network diagnostic tool

Fig.07: mtr in action
The mtr command combines the functionality of the traceroute and ping programs in a single network diagnostic tool. Use mtr to monitor outgoing bandwidth, latency and jitter in your network. A great little app to solve network problems. A sudden increase in packet loss or response time is often an indication of a bad or simply overloaded link.

#9: multitail – Tail command on steroids

Fig.08: multitail in action (image credit – official project)
MultiTail is a program for monitoring multiple log files, in the fashion of the original tail program. This program lets you view one or multiple files like the original tail program. The difference is that it creates multiple windows on your console (with ncurses). I often use this tool when I am monitoring logs on my server.

... ... ...

#11: netcat – TCP/IP swiss army knife

Fig.10: nc server and telnet client in action
Netcat or nc is a simple Linux or Unix command which reads and writes data across network connections, using the TCP or UDP protocol. I often use this tool to open up a network pipe to test network connectivity, make backups, bind to sockets to handle incoming / outgoing requests, and much more. In this example, I tell nc to listen on port 3005 and execute the /usr/bin/w command when a client connects, sending the output back to the client:
$ nc -l -p 3005 -e /usr/bin/w
From a different system try to connect to port # 3005:
$ telnet server1.cyberciti.biz.lan 3005

... ... ...

#14: lftp: A better command-line ftp/http/sftp client

This is the best and most sophisticated sftp/ftp/http download and upload client program. I often use this tool to:

  1. Recursively mirror entire directory trees from an ftp server
  2. Accelerate ftp / http download speed
  3. Bookmark locations and resume downloads
  4. Back up files to remote ftp servers
  5. Schedule transfers for execution at a later time
  6. Throttle bandwidth and set up transfer queues
  7. Use lftp's shell-like command syntax to launch several commands in parallel in the background (&)
  8. Use segmented file transfer, which allows more than one connection for the same file
  9. And much more

... ... ...

Conclusion

This is my personal FOSS terminal apps list and it is not absolutely definitive, so if you've got your own terminal apps, share in the comments below.

vidir – edit directories (part of the 'moreutils' package)

[Feb 14, 2017] Three useful aliases for du command

Feb 14, 2017 | www.cyberciti.biz

Rishi G June 12, 2012, 4:01 am

Here are four aliases I use for checking out disk usage.
#Grabs the disk usage in the current directory
alias usage='du -ch | grep total'

#Gets the total disk usage on your machine
alias totalusage='df -hl --total | grep total'

#Shows the individual partition usages without the temporary memory values
alias partusage='df -hlT --exclude-type=tmpfs --exclude-type=devtmpfs'

#Gives you what is using the most space. Both directories and files. Varies on
#current directory
alias most='du -hsx * | sort -rh | head -10'

[Feb 04, 2017] Quickly find differences between two directories

You may be surprised, but GNU diff as used on Linux understands the situation when both arguments are directories and behaves accordingly.
Feb 04, 2017 | www.cyberciti.biz

The diff command compares files line by line. It can also compare two directories:

## Compare two folders using diff ##
diff /etc /tmp/etc_old  
Rafal Matczak September 29, 2015, 7:36 am
§ Quickly find differences between two directories
And quicker:
 diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})  
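
If you want to recurse into subdirectories and just see which files differ, GNU diff can do that directly with the -r (recursive) and -q (brief) flags:

 diff -rq /etc /tmp/etc_old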

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey with the Linux and Unix shell continues, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
 

[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command, so it serves much like a "home for directories". The danger is in creating too complex a CDPATH; often a single directory works best. For example: export CDPATH=/srv/www/public_html. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash UNIX
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH, as described in man bash:

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):
CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname, and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents 
/Users/baumanj/Documents
$ cd Pictures
/Users/baumanj/Pictures
$ cd Preferences
/Users/baumanj/Library/Preferences
etc...

[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars

If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type

cd d

to change to my Desktop directory.

I put the shopt command and the various export commands in my .bashrc file.

[May 08, 2014] 25 Even More Sick Linux Commands – UrFix's Blog

6) Display a cool clock on your terminal

watch -t -n1 "date +%T|figlet"

This command displays a clock on your terminal which updates the time every second. Press Ctrl-C to exit.

A couple of variants:

A little bit bigger text:

watch -t -n1 "date +%T|figlet -f big"You can try other figlet fonts, too.

Big sideways characters:

watch -n 1 -t '/usr/games/banner -w 30 $(date +%M:%S)'

This requires a particular version of banner and a 40-line terminal, or you can adjust the width ("30" here).

7) intercept stdout/stderr of another process
strace -ff -e trace=write -e write=1,2 -p SOME_PID
8) Remove duplicate entries in a file without sorting.
awk '!x[$0]++' <file>

Using awk, you can find and remove duplicates in a file without sorting it (sort reorders the contents; awk does not), and then redirect the deduplicated output into another file.

9) Record a screencast and convert it to an mpeg
ffmpeg -f x11grab -r 25 -s 800x600 -i :0.0 /tmp/outputFile.mpg

Grab X11 input and create an MPEG at 25 fps with the resolution 800×600

10) Mount a .iso file in UNIX/Linux
mount /path/to/file.iso /mnt/cdrom -oloop

"-o loop" lets you use a file as a block device

11) Insert the last command without the last argument (bash)
!:-

/usr/sbin/ab2 -f TLS1 -S -n 1000 -c 100 -t 2 http://www.google.com/

then

!:- http://www.urfix.com/

is the same as

/usr/sbin/ab2 -f TLS1 -S -n 1000 -c 100 -t 2 http://www.urfix.com/

12) Convert seconds to human-readable format

date -d@1234567890

This example produces the output "Fri Feb 13 15:26:30 EST 2009".

13) Job Control
^Z $bg $disown

You're running a script, command, whatever… You don't expect it to take long, but now 5pm has rolled around and you're ready to go home… Wait, it's still running… You forgot to nohup it before running it… Suspend it, send it to the background, then disown it… The output won't go anywhere, but at least the command will still run…

14) Edit a file on a remote host using vim
vim scp://username@host//path/to/somefile
15) Monitor the queries being run by MySQL
watch -n 1 mysqladmin --user=<user> --password=<password> processlist

Watch is a very useful command for periodically running another command – in this case, using mysqladmin to display the processlist. This is useful for monitoring which queries are causing your server to clog up.

More info here: http://codeinthehole.com/archives/2-Monitoring-MySQL-processes.html

16) escape any command aliases
\[command]

e.g. if rm is aliased for 'rm -i', you can escape the alias by prepending a backslash:

rm [file] # WILL prompt for confirmation per the alias

\rm [file] # will NOT prompt for confirmation per the default behavior of the command

17) Show apps that use internet connection at the moment. (Multi-Language)
ss -p

for one line per process:

ss -p | cat

for established sockets only:

ss -p | grep STA

for just process names:

ss -p | cut -f2 -sd\"

or

ss -p | grep STA | cut -f2 -d\"

18) Send pop-up notifications on Gnome

notify-send ["<title>"] "<body>"

The title is optional.

Options:

-t: expire time in milliseconds.

-u: urgency (low, normal, critical).

-i: icon path.

On Debian-based systems you may need to install the 'libnotify-bin' package.

Useful to advise when a wget download or a simulation ends. Example:

wget URL ; notify-send "Done"

19) quickly rename a file

mv filename.{old,new}
20) Remove all but one specific file
rm -f !(survivor.txt)
21) Generate a random password 30 characters long
strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo

Find random strings within /dev/urandom. Using grep, filter to just alphanumeric characters, then print the first 30 and remove all the line feeds.

22) Run a command only when load average is below a certain threshold
echo "rm -rf /unwanted-but-large/folder" | batch

Good for one off jobs that you want to run at a quiet time. The default threshold is a load average of 0.8 but this can be set using atrun.

23) Binary Clock
watch -n 1 'echo "obase=2;`date +%s`" | bc'

Create a binary clock.

24) Processor / memory bandwidth in GB/s
dd if=/dev/zero of=/dev/null bs=1M count=32768

Read 32GB of zeros and throw them away.

How fast is your system?

25) Backup all MySQL Databases to individual files
for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done
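
To restore one of those dumps later, reverse the pipe; the database name here is illustrative, and the target database must already exist:

gunzip < mydb.sql.gz | mysql mydb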

[May 08, 2014] 25 Best Linux Commands UrFix's Blog

25) sshfs name@server:/path/to/folder /path/to/mount/point
Mount folder/filesystem through SSH
Install SSHFS from http://fuse.sourceforge.net/sshfs.html
Will allow you to mount a folder securely over a network.

24) !!:gs/foo/bar
Runs previous command replacing foo by bar every time that foo appears
Very useful for rerunning a long command changing some arguments globally.
As opposed to ^foo^bar, which only replaces the first occurrence of foo, this one changes every occurrence.

23) mount | column -t
currently mounted filesystems in nice layout
Particularly useful if you're mounting different drives, using the following command will allow you to see all the filesystems currently mounted on your computer and their respective specs with the added benefit of nice formatting.

22) <space>command
Execute a command without saving it in the history
Prepend one or more spaces to your command and it won't be saved in history.
Useful for pr0n or passwords on the commandline.

21) ssh user@host cat /path/to/remotefile | diff /path/to/localfile -
Compare a remote file with a local file
Useful for checking if there are differences between local and remote files.

20) mount -t tmpfs tmpfs /mnt -o size=1024m
Mount a temporary ram partition
Makes a partition in ram which is useful if you need a temporary working space as read/write access is fast.
Be aware that anything saved in this partition will be gone after your computer is turned off.

19) dig +short txt <keyword>.wp.dg.cx
Query Wikipedia via console over DNS
Query Wikipedia by issuing a DNS query for a TXT record. The TXT record will also include a short URL to the complete corresponding Wikipedia entry.

18) netstat -tlnp
Lists all listening ports together with the PID of the associated process
The PID will only be printed if you're holding a root equivalent ID.

17) dd if=/dev/dsp | ssh -c arcfour -C username@host dd of=/dev/dsp
output your microphone to a remote computer's speaker
This will output the sound from your microphone port to the ssh target computer's speaker port. The sound quality is very bad, so you will hear a lot of hissing.

16) echo "ls -l" | at midnight
Execute a command at a given time
This is an alternative to cron which allows a one-off task to be scheduled for a certain time.


15) curl -u user:pass -d status="Tweeting from the shell" http://twitter.com/statuses/update.xml
Update twitter via curl

14) ssh -N -L2001:localhost:80 somemachine
start a tunnel from some machine's port 80 to your local port 2001
now you can access the website by going to http://localhost:2001/

13) reset
Salvage a borked terminal
If you bork your terminal by sending binary data to STDOUT or similar, you can get your terminal back using this command rather than killing and restarting the session. Note that you often won't be able to see the characters as you type them.

12) ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /tmp/out.mpg
Capture video of a linux desktop

11) > file.txt
Empty a file
For when you want to flush all content from a file without removing it (hat-tip to Marc Kilgus).

10) ssh-copy-id user@host
Copy ssh keys to user@host to enable password-less ssh logins.
To generate the keys use the command ssh-keygen

9) ctrl-x e
Rapidly invoke an editor to write a long, complex, or tricky command
Next time you are using your shell, try typing ctrl-x e (that is holding control key press x and then e). The shell will take what you've written on the command line thus far and paste it into the editor specified by $EDITOR. Then you can edit at leisure using all the powerful macros and commands of vi, emacs, nano, or whatever.

8 ) !whatever:p
Check command history, but avoid running it
!whatever will search your command history and execute the first command that matches 'whatever'. If you don't feel safe doing this, put :p on the end to print without executing. Recommended when running as superuser.

7) mtr google.com
mtr, better than traceroute and ping combined
mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
As mtr starts, it investigates the network connection between the host mtr runs on and HOSTNAME by sending packets with purposely low TTLs. It continues to send packets with low TTL, noting the response time of the intervening routers. This allows mtr to print the response percentage and response times of the internet route to HOSTNAME. A sudden increase in packet loss or response time is often an indication of a bad (or simply overloaded) link.

6 ) cp filename{,.bak}
quickly backup or copy a file with bash

5) ^foo^bar
Runs the previous command, replacing the first occurrence of foo with bar
Really useful for when you have a typo in a previous command. Also, arguments default to empty, so if you accidentally run: echo "no typozs"
you can correct it with ^z

4) cd -
change to the previous working directory

3) :w !sudo tee %
Save a file you edited in vim without the needed permissions
I often forget to sudo before editing a file I don't have write permissions on. When you come to save that file and get the infamous "E212: Can't open file for writing", just issue that vim command in order to save the file without the need to save it to a temp file and then copy it back again.

2) python -m SimpleHTTPServer
Serve current directory tree at http://$HOSTNAME:8000/

1) sudo !!
Run the last command as root
Useful when you forget to use sudo for a command. "!!" grabs the last run command.

Monitoring Processes with pgrep, by Sandra Henry-Stocker

This week, we're going to look at a simple bash script for monitoring processes that we want to ensure are running all the time. We'll use a couple of cute scripting "tricks" to facilitate this process and make it as useful as possible.

The basic command we're going to use is pgrep. For those of you unfamiliar with pgrep, it's a very nice Solaris command that looks in the process queue to see whether a process by a particular name is running. If it finds the requested process, it returns the process id. For example:

% pgrep httpd
1345
1346
1347
1348
This output tells us that there are four httpd processes running on our system. These processes might look like this if we were to execute a ps -ef command:
% ps -ef | grep httpd
output
The pgrep command, therefore, accomplishes what many of us used to do with strings of Unix commands of this variety:
% ps -ef | grep httpd | grep -v grep | awk '{print $2}'
In this command, we ran the ps command, narrowed the output down to only those lines containing the word "httpd", removed the grep command itself, and then printed out the second column of the output, the process id. With pgrep, extracting the process ids for the processes that we want to track is faster and "cleaner". Let's look at a couple of code segments. First, the old way:
for PROC in proc1 proc2 proc3 proc4 proc5
do
    RUNNING=`ps -ef | grep $PROC | grep -v grep | wc -l`
    if [ $RUNNING -ge 1 ]; then
        echo $PROC is running
    else
        echo $PROC is down
    fi
done
For each process, we generate a count of the number of instances we detect in the ps output and, if this number is one or more, we issue the "running" output. Otherwise, we display a message saying the process is down.

Now, here's out replacement code using pgrep:

for PROC in proc1 proc2 proc3 proc4 proc5
do
    if [ "`pgrep $PROC`" ]; then
        echo $PROC is running
    else
        echo $PROC is down
    fi
done
In this case, we've simplified our code in a couple of ways. First, we rely on pgrep to give us output (process ids) if the process is running and nothing if it isn't. Second, because we're not using ps and grep, we don't have to strip out the output that isn't relevant to our task: the ps lines relating to other running processes and to the process generated by our grep command.

The process for killing a set of processes would be quite similar. In fact, we could use both pgrep and a "sister" command, pkill, in a similar manner.

for PROC in proc1 proc2 proc3 proc4 proc5
do
    if [ -n "`pgrep $PROC`" ]; then
        pkill $PROC
    else
        echo $PROC is not running
    fi
done
The pgrep command is more predictable because we know we're going to get only the process id and that we won't be matching on other strings that just happen to appear in the ps output (e.g., if someone were editing the httpd.conf file).

The pgrep, pkill and related commands are not only easier to use. The code is easier to read and understand. One of the reasons for using sequences of commands such as this:

ps -ef | grep $PROC | grep -v grep | wc -l
was to ensure that we knew what our answer would have to look like. If we left off the final "wc -l", we might get one or a number of pieces of output and have to deal with this fact when we went to check it. In addition, we could use similar logic when the number of processes, rather than just some or none, was important. We would just check the number against what we expected to see.

Even so, anyone reading this script a year later would have to stop and think through this command. This is not true for pgrep. The command "pgrep httpd" is easy and quick to interpret as "if httpd is running".

The "if [ `pgrep $PROC` ]" is especially efficient as well. This statement tests whether there is output from the command and is compact and readable. Much as I love Unix for the way it allows me to pipe output from one command to the other, I love it even more when I don't have to.
sh -x
By S. Lee Henry

Whenever you enter a command in a Unix shell, whether interactively or through a script, the shell expands your commands and passes them on to the Unix kernel for execution. Normally, the shell does its work invisibly. In fact, it so unobtrusively processes your commands that you can easily forget that it's actually doing something for you. As we saw last week, presenting the shell with a command like "rm *" can, on rare occasion, result in a complaint. When the shell balks, producing an error indicating that the argument list is too long, it suddenly reminds us of its presence and that it is subject to resource limitations just like everything else.

Invoking the shell with an option to display commands as it processes them is another way to become acquainted with the shell's method of intercepting and interpreting your commands. The Bourne family shells use the option -x. If you enter the shell using a -x, then commands will be displayed for you before execution. For example:
    boson% /bin/ksh -x
    $ date
    + date
    Mon Jun  4 07:11:01 EDT 2001
You can also see file expansion as the shell provides it for you:
    $ ls oops*
    + ls oops1 oops2 oop3 oops4 oops5 oopsies
    oops1 oops2 oop3 oops4 oops5 oopsies
This is all very exciting, of course, but of limited utility once you get a solid appreciation of how hard the shell is working for you command line after command line. The sh -x "trick" can be very useful when you are debugging a script though. Instead of inserting lines of code like "echo at end of loop" to help determine where your code is failing, you can change your "shebang" line to include the -x option:
    #!/bin/sh -x
Afterwards, when you run the script, each line of code will be displayed as it is processed, so you can easily see which commands are working and where the breakdown occurs. This is far more useful than staring at little or no output and wondering where processing is hanging up. Being able to watch the executed commands, and the order in which they execute, while the script is running is an invaluable debugging aid -- particularly for complex scripts that follow numerous paths and don't write much output to the screen.
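You don't have to trace an entire script, either; the same tracing can be switched on just around the suspect region with the shell's set built-in. A small made-up illustration:

    #!/bin/sh
    echo "setup work runs quietly"
    set -x                      # start echoing commands from here on
    for f in a b c; do
        echo "processing $f"
    done
    set +x                      # stop tracing again
    echo "back to normal output"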

How Many is Too Many? By Sandra Henry-Stocker

I surprised myself recently when I issued a command to remove all the files in an old, and clearly unimportant, directory and received this response:
    bin/rm: arg list too long
I seldom encounter this response when cleaning up server directories that I manage, so seeing it surprised me. When I began listing the directory's contents, I wasn't surprised that my command had failed. The directory contained more than 200,000 small, old, and meaningless files, which would take a long time to list, consumed quite a bit of directory file space, and would comprise a very long command line if the shell were to attempt to manage it. Even if every file name had only eight characters, then a line containing all of their names (with blank characters separating the names) would be nearly 1.8 million bytes long. Not surprisingly, my shell balked at the task.

Situations like this remind us that, even though Unix is flexible, powerful, and fun, each of the commands has built in limits. My shell could not allocate adequate space to "expand" the asterisk that I presented in my "rm *" command to a list of all 200,000+ files.

Of course, Unix offers several ways to solve every problem and running out of space to expand a command merely invites one to solve the problem differently. In my case, the easiest solution was to remove the directory along with its contents. The rm -r command, since it doesn't require any argument expansion, is "happy" to comply with such a request. Had I not wanted to remove every file in the directory, I would have gone through a little more trouble. I could have removed subsets of the files, using commands like "rm a*" or "rm *5" until I had removed all of the unwanted files.
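Another standard way out of this corner, for the record, is to let find hand the file names to rm in batches, so that no single oversized argument list is ever built. A sketch, assuming the junk directory is ./oldjunk (a placeholder name):

    find ./oldjunk -type f -print | xargs rm -f

With GNU find and xargs, the null-terminated variant is safer when file names contain spaces or other odd characters:

    find ./oldjunk -type f -print0 | xargs -0 rm -f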

A third approach would have been appropriate for preserving only a small number of the directory's files, especially files that are easily described by substring or date: tar up the interesting files using a wild card, or use a find command to create an include file.

You will not often encounter situations where the shell will be unable to expand your file names into a workable command. Few directories house as many files as the one that I was cleaning up, and the Unix shells allocate enough buffer space for most commands that you might enter. Even so, limits exist and you might happen to bump into one of them every few years.

Moving Around the Console.

So you're new to Linux and wondering how this virtual terminal stuff works. Well you can work in six different terminals at a time. To move around from one to another:

To change to Terminal 1 - Alt + F1
To change to Terminal 2 - Alt + F2
...
To change to Terminal 6 - Alt + F6
That's cool. But I just did locate on something and a lot of stuff scrolled up. How do I scroll up to see what flew by?
Shift +  PgUp - Scroll Up
Shift +  PgDn - Scroll Down

Note: If you switch away from a console and switch back to it,
you will lose what has already scrolled by.

If you have X running and want to change from X to text-based and vice versa:

To change to text based from X - Ctrl + Alt + F(n) where n = 1..6

To change to X from text based - Alt + F7

Something unexpected happened and I want to shut down my X server.
Just press:

Ctrl + Alt + Backspace

LinuxMonth

What do you do when you need to see what a program is doing, but it's not one that you'd normally run from the command line? Perhaps it's one that is called as a network daemon from inetd, is called from inside another shell script or application, or is even called from cron. Is it actually being called? What command line parameters is it being handed? Why is it dying?

Let's assume the app in question is /the/path/to/myapp. Here's what you do. Make sure you have the "strace" program installed. Download "apptrace" from ftp://ftp.stearns.org/pub/apptrace/ and place it in your path, mode 755. Then type:

apptrace /the/path/to/myapp

When that program is called in the future, apptrace will record the last time myapp ran (see the timestamp on myapp-last-run), the command line parameters used (see myapp-parameters), and the strace output from running myapp (see myapp.pid.trace) in either $HOME/apptrace or /tmp/apptrace if $HOME is not set.

Note that if the original application is setuid-root, strace will not honor that flag and it will run with the permissions of the user running it like any other non-setuid-root app. See the man page for strace for more information on why.

When you've found out what you need to know and wish to stop monitoring the application, type:

mv -f /the/path/to/myapp.orig /the/path/to/myapp


Many thanks to David S. Miller , kernel hacker extraordinaire, for the right to publish his idea. His original version was:

It's actually pretty easy if you can get a shell on the machine
before the event, once you know the program in question:

mv /path/to/${PROGRAM} /path/to/${PROGRAM}.ORIG
edit /path/to/${PROGRAM}
#!/bin/sh
strace -f -o /tmp/${PROGRAM}.trace /path/to/${PROGRAM}.ORIG $*

I do it all the time to debug network services started from
inetd for example.
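Pulling Miller's recipe together, a complete wrapper might look like the following sketch (the myapp name and paths are placeholders; apptrace automates essentially this):

# as root, move the real binary aside and drop a tracing stub in its place
mv /usr/local/bin/myapp /usr/local/bin/myapp.ORIG
cat > /usr/local/bin/myapp <<'EOF'
#!/bin/sh
# log each invocation, then run the real program under strace
echo "`date`: myapp $*" >> /tmp/myapp.invocations
exec strace -f -o /tmp/myapp.$$.trace /usr/local/bin/myapp.ORIG "$@"
EOF
chmod 755 /usr/local/bin/myapp

Using "$@" instead of the original $* preserves arguments that contain embedded spaces.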

Ever wonder what ports are open on your Linux machine ?

Did you ever want to know who was connecting to your machine and what services they were connecting to? Netstat does just that.

To take a look at all TCP ports that are open on your system, use the options below. The '-n' option gives you numerical addresses instead of resolving hostnames, which speeds up the output. The '-l' option shows only connections that are in "LISTEN" mode, and '-t' shows only TCP connections.

netstat -nlt

[user@mymachine /home/user]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State     
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN

The above output shows that I have 3 open ports (80, 3306, 22) on my system, waiting for connections on all interfaces. The three ports are 80 => apache, 3306 => mysql, 22 => ssh.

Let's take a look at the active connections to this machine. For this you don't use the '-l' option but instead use the '-a' option. The '-a' stands for, yup, you guessed it, show all.

netstat -nat

[user@mymachine /user]# netstat -nat
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State     
tcp        0      0 206.112.62.102:80       204.210.35.27:3467      ESTABLISHED  
tcp        0      0 206.112.62.102:80       208.229.189.4:2582      FIN_WAIT2  
tcp        0   7605 206.112.62.102:80       208.243.30.195:36957    CLOSING    
tcp        0      0 206.112.62.102:22       10.60.1.18:3150         ESTABLISHED
tcp        0      0 206.112.62.102:22       10.60.1.18:3149         ESTABLISHED
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     

The above output shows I have 3 web requests that are currently being made or are about to finish up. It also shows I have 2 SSH connections established. Now I know which IP addresses are making web requests or have SSH connections open. For more info on the different states, i.e., "FIN_WAIT2" and "CLOSING", please consult your local man pages.

Well that was a quick tip on how to use netstat to see what TCP ports are open on your machine and who is connecting to them. Hope it was helpful. Share the knowledge !
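One related Linux flag worth knowing: '-p' makes netstat also print the PID and name of the program that owns each socket (run it as root to see this for all sockets), so you can tell at a glance which daemon is behind each port:

netstat -ntlp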


RPM - Installing, Querying, Deleting your packages.

RPM (Red Hat Package Manager) is an excellent package manager. RPM, created by Red Hat, can be used for building, installing, querying, updating, verifying, and removing software packages. This brief article will show you some of the uses of the rpm tool.

So you have an rpm package that you wish to install. But you want to find out more information about the package, like who built it, and when was it built. Or you want to find out a short description about the package. The following command will show you such information.

	rpm -qpi packagename.rpm

Now that you know more about the package, you're ready to install it. But before you install it, you want to get a list of files and find out where these files will be installed. The following command will show you exactly that.

	rpm -qpl packagename.rpm

To actually install the package, use:

	rpm -i packagename.rpm

But what if you have an older version of the rpm already installed? Then you want to upgrade the package. The following command will remove any older version of the package and install the newer version.

	rpm -Uvh packagename.rpm

How do I check all the packages installed on my system? The following will list their names and version numbers.

	rpm -qa

and to see all the packages installed, with the most recently installed ones on top.

	rpm -qa --last

And if you want to see what package a file belongs to, if any, you can do the following. This command will show the rpm name or tell you that the file does not belong to any packages.

	rpm -qf file

And if you wanted to uninstall the package, you can do the following.

	rpm -e packagename

and to uninstall it even if other packages depend on it. Note: This is dangerous; it should only be done if you are absolutely sure the dependency does not apply in your case.

	rpm -e packagename --nodeps
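The introduction above also mentions verifying. As a quick example, the following checks an installed package's files against the RPM database and reports anything whose size, checksum, permissions, or ownership has changed; no output means everything verified cleanly.

	rpm -V packagename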

There are a lot more commands to help you manage your packages better, but this should quench the thirst of most users. If you want to learn more about rpms, type man rpm at your prompt or visit www.rpm.org. In particular, see the RPM-HOWTO at www.rpm.org.

How will you spend your lunch hour?
Sshhh, somebody might hear you!
Recovering Deleted Files with "mc"
SSH Techniques
The Open Source Tech Support Partnership
Top Ten Reasons Why You Shouldn't Log in as Root

Linux-etc Quickies

"Some snippets of helpful advice were lying around my hard drive, so I thought it a good time to unload it. There's no theme to any of it, really, but I think that themes are sometimes overrated, don't you?"

"My favorite mail reader, pine 4.21, does not lag behind when it comes to modern features. For example, it supports rule-based filtering just like those graphical clients that get all the press these days. Just head to Main menu -> Setup -> Rules -> Filters -> Add. Voila!"

"Red Hat 6.2 ships with the ability to display TrueType fonts with the XFree86 X server. Oddly, the freetype package doesn't include any TrueType fonts, nor does it provide clear instructions on how to add them to your system."

Linux Today - O'Reilly Network Top 10 Tips for Linux Users

Troubleshooting Tips

From the SGI Admin Guide - last I checked the CPU spends most of its time waiting for something to do

Table 5-3: Indications of an I/O-Bound System

Field                                      Value               sar Option
%busy   (% time disk is busy)              >85                 sar -d
%rcache (reads in buffer cache)            low, <85            sar -b
%wcache (writes in buffer cache)           low, <60%           sar -b
%wio    (idle CPU waiting for disk I/O)    >30 (dev. system)   sar -u
                                           >80 (fileserver)

Table 5-5: Indications of Excessive Swapping/Paging

Field                                                Value   sar Option
bswot/s (transfers from memory to disk swap area)    >200    sar -w
bswin/s (transfers to memory)                        >200    sar -w
%swpocc (time swap queue is occupied)                >10     sar -q
rflt/s  (page reference fault)                       >0      sar -t
freemem (average pages for user processes)           <100    sar -r

Indications of a CPU-Bound System

Field                                                         Value   sar Option
%idle   (% of time CPU has no work to do)                     <5      sar -u
runq-sz (processes in memory waiting for CPU)                 >2      sar -q
%runocc (% run queue occupied and processes not executing)    >90     sar -q

hypermail (/usr/local/src/src/hypermail) - mailing list to web page converter; "grep hypermail /etc/aliases" shows which lists use hypermail

pwck and grpck should be run weekly to make sure the passwd and group files are OK; grpck produces a ton of errors

You can use local man pages (text only - see Ch. 3, User Services): put them in /usr/local/manl (or try /usr/man/local/manl) with suffix .l;
pack long ones: pack program.1; mv program.1.z /usr/man/local/mannl/program.z

Linux Gazette Index

More 2-Cent Tips

Wed, 17 May 2000 08:38:09 +0200
From: Sebastian Schleussner Sebastian.Schleussner@gmx.de

I have been trying to set command line editing (vi mode) as part of
my bash shell environment and have been unsuccessful so far. You might
think this is trivial - well so did I.
I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my
start up scripts. I have tried all possible combinations but it JUST DOES
NOT WORK. I inserted the line in /etc/profile , in my .bash_profile, in
my .bashrc etc but I cannot get it to work. How can I get this done? This
used to be a breeze in the korn shell. Where am I going wrong?

Hi!
I recently learned from the SuSE help that you have to put the line
set keymap vi
into your /etc/inputrc or ~/.inputrc file, in addition to what you did
('set -o vi' in ~/.bashrc or /etc/profile)!
I hope that will do the trick for you.

Cheers,
Sebastian Schleussner


More 2-Cent Tips

allfilesys: it detects the filesystem types of all accessible partitions and checks/mounts them in folders named after the device (hda7, hdb1, hdb3, sd1, ...).

So you will never have to type sequences of fdisk, fsck, mount, df...

You may be interested in checking the site "Traceroute Lists by States. Backbone Maps List" at http://cities.lk.net/trlist.html

You can find there many links to traceroute resources, sorted by the following items:

There is also the List of Backbone Maps, sorted by Geographical Location, plus some other info about backbones.



LG52 2-Cent Tips

LG51 2-Cent Tips

Copy Your Linux Install to a Different Partition or Drive

Jul 9, 2009

If you need to move your Linux installation to a different hard drive or partition (and keep it working) and your distro uses grub this tech tip is what you need.

To start, get a live CD and boot into it. I prefer Ubuntu for things like this. It has Gparted. Now follow the steps outlined below.

Copying

Configuration

Install Grub
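A minimal sketch of one way to perform these three steps, assuming the old root partition is /dev/sda1, the new one is /dev/sdb1, and GRUB legacy is in use (all placeholder assumptions):

    # run as root from the live CD
    mkdir -p /mnt/src /mnt/dst
    mount /dev/sda1 /mnt/src                # existing installation
    mount /dev/sdb1 /mnt/dst                # freshly created destination partition
    rsync -aHx /mnt/src/ /mnt/dst/          # Copying: preserves owners, permissions, links
    vi /mnt/dst/etc/fstab                   # Configuration: point / at the new partition
    grub-install --root-directory=/mnt/dst /dev/sdb   # Install Grub on the new drive's MBR

With GRUB legacy you may also need to adjust the root= lines in /mnt/dst/boot/grub/menu.lst to match the new partition.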

That's it! You should now have a bootable working copy of your source drive on your destination drive! You can use this to move to a different drive, partition, or filesystem.

Related Stories:
Linux - Compare two directories (Feb 18, 2009)
Cloning Linux Systems With CloneZilla Server Edition (CloneZilla SE) (Jan 22, 2009)
Copying a Filesystem between Computers (Oct 28, 2008)
rsnapshot: rsync-Based Filesystem Snapshot (Aug 26, 2008)
K9Copy Helps Make DVD Backups Easy (Aug 23, 2008)

UNIX tips: Productivity tips, by Michael Stutz

Useful command-line secrets for increasing productivity in the office

Level: Intermediate

Michael Stutz (stutz@dsl.org), Author, Consultant

19 Sep 2006
Updated 21 Sep 2006

Using UNIX in a day-to-day office setting doesn't have to be clumsy. Learn some of the many ways, both simple and complex, to use the power of the UNIX shell and available system tools to greatly increase your productivity in the office.

Introduction

The language of the UNIX® command line is notoriously versatile: With a panorama of small tools and utilities and a shell to combine and execute them, you can specify many precise and complex tasks.

But when used in an office setting, these same tools can become a powerful ally toward increasing your productivity. Many techniques unique to UNIX can be applied to the issue of workplace efficiency.

This article gives several suggestions and techniques for bolstering office productivity at the command-line level: how to review your current system habits, how to time your work, secrets for manipulating dates, a quick and simple method of sending yourself a reminder, and a way to automate repetitive interactions.

Review your daily habits

The first step toward increasing your office productivity using the UNIX command line is to take a close look at your current day-to-day habits. The tools and applications you regularly use and the files you access and modify can give you an idea of what routines are taking up a lot of your time -- and what you might be avoiding.

Review the tools you use

You'll want to see what tools and applications you're using regularly. You can easily ascertain your daily work habits on the system with the shell's history built-in, which outputs an enumerated listing of the input lines you've sent to the shell in the current and past sessions. See Listing 1 for a typical example.


Listing 1. Sample output of the shell history built-in
$ history
1 who
2 ls
3 cd /usr/local/proj
4 ls
5 cd websphere
6 ls
7 ls -l
$

The actual history is usually kept in a file so that it can be kept through future sessions; for example, the Korn shell keeps its command history hidden in the .sh_history file in the user's home directory, and the Bash shell uses .bash_history. These files are usually overwritten when they reach a certain length, but many shells have variables to set the maximum length of the history; the Korn and Bash shells have the HISTSIZE and HISTFILESIZE variables, which you can set in your shell startup file.

It can be useful to run the output of history through a short pipeline to get a list of the most popular commands: use awk to strip out the command name minus options and arguments, sort the result, pass the sorted list to uniq -c to get an enumerated list, and finally call sort again to re-sort the list in reverse numeric order (highest count first) by the first column, which is the enumeration itself. Listing 2 shows an example of this in action.


Listing 2. Listing the commands in the shell history by popularity
$ history|awk '{print $2}'|awk 'BEGIN {FS="|"} {print $1}'|sort|uniq -c|sort -rn

      4 ls
      2 cd
      1 who
$

If your history file is large, you can run periodic checks by piping to tail first -- for example, to check the last 1,000 commands, try:
$ history|tail -1000|awk '{print $2}'|awk 'BEGIN {FS="|"} {print $1}'|sort|uniq -c|sort -rn

Review the files you access or modify

Use the same principle to review the files that you've modified or accessed. To do this, use the find utility to locate and review all files you've accessed or changed during a certain time period -- today, yesterday, or at any date or segment of time in the past.

You generally can't find out who last accessed or modified a file, because this information isn't easily available under UNIX, but you can review your personal files by limiting the search to only files contained in your home directory tree. You can also limit the search to only files in the directory of a particular project that you're monitoring or otherwise working on.

The find utility has several flags that aid in locating files by time, as listed in Table 1. Directories aren't regular files but are accessed every time you list them or make them the current working directory, so exclude them in the search using a negation and the -type flag.


Table 1. Selected flags of the find utility

Flag        Description
-daystart   Measure times from the beginning of the current day.
-atime      The time the file was last accessed -- in number of days.
-ctime      The time the file's status last changed -- in number of days.
-mtime      The time the file was last modified -- in number of days.
-amin       The time the file was last accessed -- in number of minutes. (Not available on all implementations.)
-cmin       The time the file's status last changed -- in number of minutes. (Not available on all implementations.)
-mmin       The time the file was last modified -- in number of minutes. (Not available on all implementations.)
-type       The type of file, such as d for directories.
-user X     Files belonging to user X.
-group X    Files belonging to group X.
-newer X    Files that are newer than file X.

Here's how to list all the files in your home directory tree that were modified exactly one hour ago:

$ find ~ -mmin 60 \! -type d

Giving a negative value for a flag means to match that number or sooner. For example, here's how to list all the files in your home directory tree that were modified exactly one hour ago or any time since:
$ find ~ -mmin -60 \! -type d

Not all implementations of find support the min flags. If yours doesn't, you can work around it by using touch to create a dummy file whose timestamp is older than what you're looking for, and then search for files newer than it with the -newer flag:
$ date
Mon Oct 23 09:42:42 EDT 2006
$ touch -t 10230842 temp
$ ls -l temp
-rw-r--r--    1 joe        joe               0 Oct 23 08:42 temp
$ find ~ -newer temp \! -type d

The special -daystart flag, when used in conjunction with any of the day options, measures days from the beginning of the current day instead of from 24 hours previous to when the command is executed. Try listing all of your files, existing anywhere on the system, that have been accessed any time from the beginning of the day today up until right now:
$ find / -user `whoami` -daystart -atime -1 \! -type d

Similarly, you can list all the files in your home directory tree that were modified at any time today:
$ find ~ -daystart -mtime -1 \! -type d

Give different values for the various time flags to change the search times. You can also combine flags. For instance, you can list all the files in your home directory tree that were both accessed and modified between now and seven days ago:
$ find ~ -daystart -atime -7 -mtime -7  \! -type d

You can also find files based on a specific date or a range of time, measured in either days or minutes. The general way to do this is to use touch to make a dummy file or files, as described earlier.

When you want to find files that match a certain range, make two dummy files whose timestamps delineate the range. Then, use the -newer flag with the older file, and use "\! -newer" on the second file.

For example, to find all the files in the /usr/share directory tree that were accessed in August, 2006, try the following:

$ touch -d "Aug 1 2006" file.start
$ touch -d "Sep 1 2006" file.end
$ find /usr/share -daystart -newer file.start \! -daystart -newer file.end

Finally, it's sometimes helpful when listing the contents of a directory to view the files sorted by time. The -t option sorts the listing by time of last modification, most recent first. Adding the -c option makes ls sort by (and, with -l, display) the time of last status change instead of the modification time. In conjunction with the -l (long-listing) option, you can peruse a directory listing by the most recently changed files first:
$ ls -ltc /usr/local/proj/websphere | less

Time your work

Another useful means of increasing office productivity using UNIX is to time commands that you regularly execute. Then, you can evaluate the results and determine whether you're spending too much time waiting for a particular process to finish.

Time command execution

Is the system slowing you down? How long are you waiting at the shell, doing nothing, while a particular command is being executed? How long does it take you to run through your usual morning routine?

You can get concrete answers to these questions when you use the date, sleep, and echo commands to time your work.

To do this, type a long input line that first contains a date statement to output the time and date in the desired format (usually hours and minutes suffice). Then, run the command input line -- this can be several lines strung together with shell directives -- and finally, get the date again on the same input line. If the commands you're testing produce a lot of output, redirect it so that you can read both start and stop dates. Calculate the difference between the two dates:

$ date; system-backup > /dev/null; system-diag > /dev/null;\
> netstat > /dev/null; df > /dev/null; date

Test your typing speed

You can use these same principles to test your typing speed:

$ date;cat|wc -w;date

This command works best if you give a long typing sample that lasts at least a minute, but ideally three minutes or more. Take the number of words you typed (which is output by the middle command) and divide it by the difference in minutes between the two dates to get the average number of words per minute you type.

You can automate this by setting variables for the start and stop dates and for the command that outputs the number of words. But to do this right, you must be careful to avoid a common error in calculation when subtracting times. A GNU extension to the date command, the %s format option, avoids such errors -- it outputs the number of seconds since the UNIX epoch, which is defined as midnight UTC on January 1, 1970. Then, you can calculate the time based on seconds alone.

Assign a variable, SPEED, as the output of an echo command to set up the right equation to pipe to a calculator tool, such as bc. Then, output a new echo statement that outputs a message with the speed:

$ START=`date +%s`; WORDS=`cat|wc -w`; STOP=`date +%s`; SPEED=\
> `echo "scale=8; $WORDS / ( ( $STOP - $START ) / 60 )"|bc`; echo \
> "You have a typing speed of $SPEED words per minute."

You can put this in a script and then change the permissions to make it executable by all users, so that others on the system can use it, too, as in Listing 3.


Listing 3. Example of running the typespeed script

$ typespeed
The quick brown fox jumped over the lazy dog. The quick brown dog--
                              ...
--jumped over the lazy fox.
^D

You have a typing speed of 82.33333333 words per minute.
$
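For reference, here is one way such a typespeed script might look; this is a sketch assuming GNU date (for %s) and bc, not the author's original file:

#!/bin/sh
# typespeed - crude words-per-minute test; type some text, end with Ctrl-D
START=`date +%s`
WORDS=`cat | wc -w`
STOP=`date +%s`
SPEED=`echo "scale=8; $WORDS / ( ( $STOP - $START ) / 60 )" | bc`
echo "You have a typing speed of $SPEED words per minute."

Install it somewhere in the PATH and make it executable (for example, chmod 755 typespeed) so that other users can run it, too.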

Know your dates

The date tool can do much more than just print the current system date. You can use it to get the day of the week on which a given date falls and to get dates relative to the current date.

Get the day of a date

Another GNU extension to the date command, the -d option, comes in handy when you don't have a desk calendar nearby -- and what UNIX person bothers with one? With this powerful option, you can quickly find out what day of the week a particular date falls on by giving the date as a quoted argument:

$ date -d "nov 22"
Wed Nov 22 00:00:00 EST 2006
$

In this example, you see that November 22 of this year is on a Wednesday.

So, when it's suggested that the big meeting be held on November 22, you'll know right away that it falls on a Wednesday -- which is the day you're out in the field office.

Get relative dates

The -d option can also tell you what the date will be relative to the current date -- either a number of days or weeks from now, or before now (ago). Do this by quoting this relative offset as an argument to the -d option.

Suppose, for example, that you need to know the date two weeks hence. If you're at a shell prompt, you can get the answer immediately:

$ date -d '2 weeks'

There are other important ways to use this command. With the next directive, you can get the date of a coming day of the week:
$ date -d 'next monday'

With the ago directive, you can get dates in the past:

$ date -d '30 days ago'

And you can use negative numbers to get dates in reverse:

$ date -d 'dec 14 -2 weeks'

This technique is useful to give yourself a reminder based on a coming date, perhaps in a script or shell startup file, like so:

DAY=`date -d '2 weeks' +"%b %d"`
if test "$DAY" = "Aug 16"; then echo 'Product launch is now two weeks away!'; fi

Give yourself reminders

Use the tools at your disposal to leave reminders for yourself on the system -- they take up less space than notes on paper, and you'll see them from anywhere you happen to be logged in.

Know when it's time to leave

When you're working on the system, it's easy to get distracted. The leave tool, common on the IBM AIX® operating system and Berkeley Software Distribution (BSD) systems, can help.

Give leave the time when you have to leave, using a 24-hour format: HHMM. It runs in the background, and five minutes before that given time, it outputs on your terminal a reminder for you to leave. It does this again one minute before the given time if you're still logged in, and then at the time itself -- and from then on, it keeps sending reminders every minute until you log out (or kill the leave process). See Listing 4 for an example. When you log out, the leave process is killed.


Listing 4. Example of running the leave command
$ leave
When do you have to leave? 1830
Alarm set for Fri Aug  4 18:30. (pid 1735)
$ date +"Time now: %l:%M%p"
Time now: 6:20PM
$
<system bell rings>
You have to leave in 5 minutes.
$ date +"Time now: %l:%M%p"
Time now: 6:25PM
$
<system bell rings>
Just one more minute!
$ date +"Time now: %l:%M%p"
Time now: 6:29PM
$
Time to leave!
$ date +"Time now: %l:%M%p"
Time now: 6:30PM
$
<system bell rings>
Time to leave!
$ date +"Time now: %l:%M%p"
Time now: 6:31PM
$ kill 1735
$ sleep 120; date +"Time now: %l:%M%p"
Time now: 6:33PM
$

You can give relative times. If you want to leave a certain amount of time from now, precede the time argument with a +. So, to be reminded to leave in two hours, type the following:
$ leave +0200

To give a time amount in minutes, make the hours field 0. For example, if you know you have only 10 more minutes before you absolutely have to go, type:

$ leave +0010

You can also specify the time to leave as an argument, which makes leave a useful command to put in scripts -- particularly in shell startup files. For instance, if you're normally scheduled to work until 5 p.m., but on Fridays you have to be out of the building at 4 p.m., you can set a weekly reminder in your shell startup file:

if test "`date +%a`" = "Fri"; then leave 1600; fi

You can put a plain leave statement, with no arguments, in a startup script. Every time you start a login shell, you can enter a time to be reminded when to leave; if you press the Enter key, giving no value, then leave exits without setting a reminder.

Send yourself an e-mail reminder

You can also send yourself a reminder using a text message. Sometimes it's useful to make a reminder that you'll see either later in your current login session or the next time you log in.

At one time, the old elm mail agent came bundled with a tool that enabled you to send memorandums using e-mail; it was basically a script that prompted for the sender, the subject, and the body text. This is easily replicated by the time-honored method of sending mail to yourself with the command-line mailx tool. (On some UNIX systems, mail is used instead of mailx.)

Give as an argument your e-mail address (or your username on the local system, if that's where you read mail); then, you can type the reminder on the Subject line when prompted, if it's short enough, as in Listing 5. If the reminder won't fit on the Subject line, type it in the body of the message. A ^D on a line by itself exits mailx and sends the mail.


Listing 5. Example of sending yourself a reminder with the mailx command
$ mailx joe
Subject: Call VP on Monday
^D
Cc:
Null message body; hope that's ok
$

Automate your repetitive interactions

The Expect language (an extension of Tcl/Tk, but other variations are also available) is used to write scripts that run sessions with interactive programs, as if the script were a user interacting directly with the program.

Expect scripts can save you a great deal of time, particularly when you find yourself engaging in repetitive tasks. Expect can interact with multiple programs including shells and text-based Web browsers, start remote sessions, and run over the network.

For example, if you frequently connect to a system on your local intranet to run a particular program -- the test-servers command, for instance -- you can automate it with an Expect script named servmaint, whose contents appear in Listing 6.


Listing 6. Sample Expect script to automate remote system program execution
#!/usr/bin/expect -f
spawn telnet webserv4
expect "login:"
send "joe\r"
expect "Password:"
send "secret\r"
expect "webserv4>$"
send "test-servers\r"
expect "webserv4>$"
send "bye\r"
expect eof

Now, instead of going through the entire process of running telnet to connect to the remote system, logging in with your username and password, running the command(s) on that system, and then logging out, you just run the servmaint script as given in Listing 6; everything else is done for you. Of course, if you give a password or other proprietary information in such a script, there is a security consideration; at minimum, you should change the file's permissions so that you're the only user (besides the superuser) who can read it.
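For example, assuming the script was saved as servmaint in the current directory:

$ chmod 700 servmaint

This leaves the owner as the only ordinary user able to read or run it.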

Any repetitive task involving system interaction can be programmed in Expect -- it's capable of branching, conditionals, and all other features of a high-level language so that the response and direction of the interaction with the program(s) can be completely automated.


Conclusion

In an office setting, UNIX systems can handle many of the tasks that are normally handled by standalone computers running other operating systems -- and with their rich supply of command-line tools, they're capable of productivity boosters that can't be found anywhere else.

This article introduced several techniques and concepts to increase your office productivity using UNIX command-line tools and applications. You should be able to apply these ideas to your own office situations and, with a little command-line ingenuity, come up with even more ways to save time and be more productive.

Michael Stutz is author of The Linux Cookbook, which he also designed and typeset using only open source software. His research interests include digital publishing and the future of the book. He has used various UNIX operating systems for 20 years. You can reach him at stutz@dsl.org.
 



More 2-Cent Tips

faq_builder.pl script

Sat, 11 Mar 2000 07:08:15 +0100 (CET)
From: Hans Zoebelein <hzo@goldfish.cube.net>

Everybody who is running a software project needs a FAQ to clarify questions about the project and to enlighten newbies on how to run the software. Writing FAQs can be a time-consuming process without much fun.

Now here comes a little Perl script which transforms simple ASCII input into HTML output that is perfect for FAQs (Frequently Asked Questions). I'm using this script on a daily basis; it is really nice and saves a lot of time. Check out http://leb.net/blinux/blinux-faq.html for results.

Attachment faq_builder.txt is the ASCII input to produce faq_builder.html using faq_builder.pl script.

'faq_builder.pl faq_builder.txt > faq_builder.html'

does the trick. faq_builder.html is the description of how to use faq_builder.pl.

faq_builder.pl
faq_builder.html
faq_builder.txt

Fantastic book on linux - available for free both on/offline!

Sat, 18 Mar 2000 16:15:22 GMT
From: Esben Maaløe (Acebone) <acebone@f2s.com>

Hi!

When I browse through the 2 cent tips, I see a lot of general Sysadmin/bash questions that could be answered by a book called "An Introduction to Linux Systems Administration" - written by David Jones and Bruce Jamieson.

You can check it out at www.infocom.cqu.edu.au/Units/aut99/85321

It's available both on-line and as a downloadable PostScript file. Perhaps it's also available in PDF.

It's a great book, and a great read!

Quick tip for mounting FDs, CDs, etc...

Fri, 25 Feb 2000 15:49:17 -0800
From: <fuzzybear@pocketmail.com>

If you can't or don't want to use auto-mounting, and are tired of typing out all those 'mount' and 'umount' commands, here's a script called 'fd' that will do "the right thing at the right time" - and is easily modified for other devices:

#!/bin/bash
# fd - toggle mount/umount of the floppy: if "mount" produces any output
# (an error, typically because the device is already mounted), unmount instead
d="/mnt/fd0"
if [ -n "$(mount $d 2>&1)" ]; then umount $d; fi

It's a fine example of "obfuscated Bash scripting", but it works well - I use it and its relatives 'cdr', 'dvd', and 'fdl' (Linux-ext2 floppy) every day.

Ben Okopnik

2 Cent Tips

Wed, 08 Mar 2000 16:13:59 -0500
From: Bolen Coogler <bcoogler@dscga.com>

How to set vi edit mode in bash for Mandrake 7.0

If, like me, you prefer vi-style command line editing in bash, here's how to get it working in Mandrake 7.0.

When I wiped out Redhat 5.2 on my PC and installed Mandrake 7.0, I found vi command line editing no longer worked, even after issuing the "set -o vi" command. After much hair pulling and gnashing of teeth, I finally found the problem is with the /etc/inputrc file. I still don't know which line in this file caused the problem. If you have this same problem in Mandrake or some other distribution, my suggestion for a fix is:

1. su to root.
2. Save a copy of the original /etc/inputrc file (you may want it back).
3. Replace the contents of /etc/inputrc with the following:

set convert-meta off
set input-meta on
set output-meta on
set keymap vi
set editing-mode vi

The next time you start a terminal session, vi editing will be functional.

--Bolen Coogler


LG52 2-Cent Tips

LG51 2-Cent Tips

Info-search tips for Midnight Commander users

Mon, 31 Jan 2000 14:57:13 -0800
From: Ben Okopnik <fuzzybear@pocketmail.com>

Funny thing; I was just about to post this tip when I read Matt Willis' "HOWTO searching script" in LG45. Still, this script is a good bit more flexible (allows diving into subdirectories, actually displays the HOWTO or the document whether .gz or .html or whatever format, etc.), uses the Bash shell instead of csh (well, _I_ see it as an advantage ...), and reads the entire /usr/doc hierarchy - perfect for those times when the man page isn't quite enough. I find myself using it about as often as I do the 'man' command.

You will need the Midnight Commander on your system to take advantage of this (in my opinion, one of the top three apps ever written for the Linux console). I also find that it is at its best when used under X-windows, as this allows the use of GhostView, xdvi, and all the other nifty tools that aren't available on the console.

Here's the script.

To use it, type (for example)

doc xl

and press Enter. The script will respond with a menu of all the /usr/doc subdirs beginning with 'xl' prefixed by menu numbers; simply select the number for the directory that you want, and the script will switch to that directory and present you with another menu. Whenever your selection is an actual file, MC will open it in the appropriate manner - and when you exit that view of it, you'll be presented with the menu again. To quit the script, press 'Ctrl-C'.

A couple of built-in minor features (read: 'bugs') - if given a nonsense number as a selection, 'doc' will drop you into your home directory. Simply 'Ctrl-C' to get out and try again. Also, for at least one directory in '/usr/doc' (the 'gimp-manual/html') there is simply not enough scroll-back buffer to see all the menu-items (526 of them!). I'm afraid that you'll simply have to switch there and look around; fortunately, MC makes that relatively easy!
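The script itself accompanied the original tip; a minimal bash sketch of the behavior described above (the menu loop and the use of mc's -v viewer option are my assumptions, not Ben Okopnik's original code):

#!/bin/bash
# doc - menu-driven browser for the /usr/doc hierarchy (sketch)
cd /usr/doc || exit 1
PATTERN="$1"
while true; do
    # present a numbered menu of matching entries; Ctrl-C quits
    select ITEM in ${PATTERN}*; do
        if [ -d "$ITEM" ]; then
            cd "$ITEM" || exit 1    # dive into the chosen subdirectory
            PATTERN=""              # later menus list everything here
        elif [ -f "$ITEM" ]; then
            mc -v "$ITEM"           # let Midnight Commander display the file
        fi
        break
    done
done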

Oh, one more MC tip. If you define the 'CDPATH' variable in your .bash_profile and make '/usr/doc' one of the entries in it, you'll be able to switch to any directory in that hierarchy by simply typing 'cd <first_few_letters_of_dir_name>' and pressing the Tab key for completion. Just like using 'doc', in some ways...

Hope this is of help.

dual booting NT and linux

Thu, 03 Feb 2000 22:30:06 +0000
From: Clive Wright <clive_wright@telinco.co.uk>

I am not familiar with Norton Ghost; however I have been successfully dual booting NT 4 and versions of linux (currently Redhat 6.0) for the past year.

First let me refer you to the excellent article on multibooting by Tom de Blende in issue 47 of LG. Note step 17. "The tricky part is configuring Lilo. You must keep Lilo OUT OF THE MBR! The mbr is reserved for NT. If you'd install Lilo in your mbr, NT won't boot anymore".

As your requirements are quite modest they can easily be accomplished without any third party software ie. "Bootpart".

If NT is on a Fat partition then install MSdos and use the NT loader floppy disks to repair the startup environment. If NT is on an NTFS partition then you will need a Fat partition to load MSdos. Either way you should get to a stage where you can use NT's boot manager to select between NT and MSdos.

Boot into dos and from the dos prompt: "copy bootsect.dos *.lux" (this creates a copy named bootsect.lux).

Use attrib to remove attributes from boot.ini "attrib -s -h -r boot.ini" and edit the boot.ini file; after a line similar to C:\bootsect.dos="MS-DOS v6.22" add the line C:\bootsect.lux="Redhat Linux".

Save the edited file and replace the attributes.

At the boot menu you should now have four options: two for NT (normal and vga mode) and one each for msdos and Linux. To get the linux option to work you will have to use redhat's boot disk to boot into Linux and configure Lilo. Log on as root and use your favorite text editor to edit /etc/lilo.conf. Here is a copy of mine:

boot=/c/bootsect.lux
map=/boot/map
install=/boot/boot.b
prompt
timeout=1
image=/boot/vmlinuz-2.2.14
	label=linux
	root=/dev/hda5
	read-only

It can be quite minimal as it only has one operating system to boot; there is no requirement for a prompt and the timeout is reduced to 1 so that it boots almost immediately without further user intervention. If your linux root partition is not /dev/hda5 then the root line will require amendment.

I mount my MSdos C: drive as /c/ under linux. I am sure this will make some unix purists cringe, but I find C: to /c easy to type and easy to remember. If you are happy with that, then all that is required is to create the mount point ("mkdir /c") and mount the C: drive. "mount -t msdos /dev/hda1 /c" will do for now, but you may want to include /dev/hda1 in /etc/fstab so that it will be automatically mounted in the future; useful for exporting files to make them available to NT.

Check that /c/bootsect.lux is visible to Linux "ls /c/bootsect*"

/c/bootsect.dos  /c/bootsect.lux

Then run "lilo"

Added linux *

Following an orderly shutdown and reboot you can now select Redhat Linux at NT's boot prompt and boot into Linux. I hope you find the above useful.

Recommended Links

developerWorks Linux Technical library view

More 2 Cent Tips

Two Cent BASH Shell Script Tips

Lots More 2 Cent Tips...

Some great 2¢ Tips...

Linux Magazine: Tip Pack: KDE (Aug 03, 2000)
O'Reilly Network: 12 Tips on Building Firewalls (Jul 29, 2000)
Linux.com: LILO Security Tips (Apr 20, 2000)
About.com: Small Computer Tips (Aug 16, 1999)
Ext2.org: Misc Kernel Tips #2 (Jul 06, 1999)
Ext2.org: Misc kernel tips (May 29, 1999)
Online book -- 100 Linux Tips and Tricks (May 12, 1999)
PC Week: Tips for those taking the Linux plunge (Apr 01, 1999)
ZDNet AnchorDesk: Tips and Tricks to Get You Started [with Linux] (Jan 21, 1999)
Linux Tips and Tricks (Jan 02, 1999)

Learn


Etc

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusivly for research and educational purposes.   If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner. 

ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less then 90 days. Multiple types of probes increase this period.  


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. The site is perfectly usable without JavaScript.

Copyright of the original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE: This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law, under which such material can be distributed without profit, exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to make a contribution to support the development of this site and speed up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info.

Disclaimer:

The statements, views, and opinions presented on this web page are those of the author (or the referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: October 30, 2017