Softpanorama

May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells

Linux Tips


There are several large collections of Linux tips on the Internet. They are a mixture of obsolete and useful material, so some work is needed to separate the valuable information from the junk. Among them:

For YUM tips one can look at Yum - Linux@Duke Project Wiki

Linux Gazette regularly publishes a tips column. See, for example, More 2 Cent Tips! LG #106

Some references from Linux Today might also be useful.



Old News ;-)

[Nov 12, 2018] Linux Find Out Which Process Is Listening Upon a Port

Jun 25, 2012 | www.cyberciti.biz

How do I find out which running process is associated with each open port? How do I find out what process has TCP port 111 or UDP port 7000 open under Linux?

You can use the following programs to find out about port numbers and their associated processes:

  1. netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
  2. fuser – a command line tool to identify processes using files or sockets.
  3. lsof – a command line tool to list open files under Linux / UNIX and report the processes that opened them.
  4. /proc/$pid/ file system – under Linux, /proc includes a directory for each running process (including kernel processes) at /proc/PID, containing information about that process, notably the name of the process that opened the port.

You must run the above commands as the root user.

netstat example

Type the following command:
# netstat -tulpn
Sample outputs:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1138/mysqld     
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      850/portmap     
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2    
tcp        0      0 0.0.0.0:55091           0.0.0.0:*               LISTEN      910/rpc.statd   
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1467/dnsmasq    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      992/sshd        
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1565/cupsd      
tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      3813/transmission
tcp6       0      0 :::22                   :::*                    LISTEN      992/sshd        
tcp6       0      0 ::1:631                 :::*                    LISTEN      1565/cupsd      
tcp6       0      0 :::7000                 :::*                    LISTEN      3813/transmission
udp        0      0 0.0.0.0:111             0.0.0.0:*                           850/portmap     
udp        0      0 0.0.0.0:662             0.0.0.0:*                           910/rpc.statd   
udp        0      0 192.168.122.1:53        0.0.0.0:*                           1467/dnsmasq    
udp        0      0 0.0.0.0:67              0.0.0.0:*                           1467/dnsmasq    
udp        0      0 0.0.0.0:68              0.0.0.0:*                           3697/dhclient   
udp        0      0 0.0.0.0:7000            0.0.0.0:*                           3813/transmission
udp        0      0 0.0.0.0:54746           0.0.0.0:*                           910/rpc.statd

TCP port 3306 was opened by the mysqld process with PID 1138. You can verify this using /proc, enter:
# ls -l /proc/1138/exe
Sample outputs:

lrwxrwxrwx 1 root root 0 2010-10-29 10:20 /proc/1138/exe -> /usr/sbin/mysqld

You can use the grep command to filter the output:
# netstat -tulpn | grep :80
Sample outputs:

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2
Video demo

https://www.youtube.com/embed/h3fJlmuGyos

fuser command

Find out the processes PID that opened tcp port 7000, enter:
# fuser 7000/tcp
Sample outputs:

7000/tcp:             3813
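
If your copy of fuser supports the -v (verbose) flag, it can print the owning user and command directly, saving the /proc lookup shown next; expect output along these lines:

# fuser -v 7000/tcp
                     USER        PID ACCESS COMMAND
7000/tcp:            vivek      3813 F....  transmission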

Finally, find out process name associated with PID # 3813, enter:
# ls -l /proc/3813/exe
Sample outputs:

lrwxrwxrwx 1 vivek vivek 0 2010-10-29 11:00 /proc/3813/exe -> /usr/bin/transmission

/usr/bin/transmission is a bittorrent client, enter:
# man transmission
OR
# whatis transmission
Sample outputs:

transmission (1)     - a bittorrent client
Task: Find Out Current Working Directory Of a Process

To find out current working directory of a process called bittorrent or pid 3813, enter:
# ls -l /proc/3813/cwd
Sample outputs:

lrwxrwxrwx 1 vivek vivek 0 2010-10-29 12:04 /proc/3813/cwd -> /home/vivek

OR use pwdx command, enter:
# pwdx 3813
Sample outputs:

3813: /home/vivek
Task: Find Out Owner Of a Process

Use the following command to find out the owner of a process PID called 3813:
# ps aux | grep 3813
OR
# ps aux | grep '[3]813'
Sample outputs:

vivek     3813  1.9  0.3 188372 26628 ?        Sl   10:58   2:27 transmission

OR try the following ps command:
# ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
Sample outputs:

3813 vivek    vivek    transmission                   02:44:05 Fri Oct 29 10:58:40 2010

Another option is /proc/$PID/environ, enter:
# cat /proc/3813/environ
OR
# grep --color -w -a USER /proc/3813/environ
Sample outputs (note the --color option):

Fig.01: grep output

lsof Command Example

Type the command as follows:

lsof -i :portNumber 
lsof -i tcp:portNumber 
lsof -i udp:portNumber 
lsof -i :80
lsof -i :80 | grep LISTEN


Sample outputs:

apache2   1607     root    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1616 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1617 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1618 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1619 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1620 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)

Now, to get more information about PID 1607 or 1616 and so on:
# ps aux | grep '[1]616'
Sample outputs:
www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
I recommend the following command to grab info about pid # 1616:
# ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
Sample outputs:

1616 www-data www-data /usr/sbin/apache2 -k start     03:16:22 Fri Oct 29 10:20:17 2010


Help: I Discover an Open Port Which I Don't Recognize At All

The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers:
$ grep port /etc/services
$ grep 443 /etc/services

Sample outputs:

https		443/tcp				# http protocol over TLS/SSL
https		443/udp
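
Note that grep 443 will also match unrelated lines (port 4430, or a 443 appearing in a comment); anchoring the search on the whole port/protocol word is tighter:

$ grep -w 443/tcp /etc/services
https		443/tcp				# http protocol over TLS/SSL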
Check For rootkit

I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in Windows terms "Administrator" access) of a computer system without authorization by the system's owners and legitimate managers. See how to detect / check for rootkits under Linux, for example with the tools shown below.
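
Two widely used rootkit checkers are chkrootkit and rkhunter. Neither ships with most distributions by default, but once installed, a basic scan is as simple as:

# chkrootkit
# rkhunter --check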

Keep an Eye On Your Bandwidth Graphs

Usually, rooted servers are used to send large volumes of spam or malware, or to mount DoS-style attacks on other computers.


See the following man pages for more information:
$ man ps
$ man grep
$ man lsof
$ man netstat
$ man fuser

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and trainer for the Linux operating system and Unix shell scripting.

[Nov 08, 2018] How to find which process is regularly writing to disk?

Notable quotes:
"... tick...tick...tick...trrrrrr ..."
"... /var/log/syslog ..."
Nov 08, 2018 | unix.stackexchange.com

Cedric Martin , Jul 27, 2012 at 4:31

How can I find which process is constantly writing to disk?

I like my workstation to be close to silent and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk was writing or seeking something ( tick...tick...tick...trrrrrr , rinse and repeat every second or so).

In the past (many, many years ago) I had a similar issue and it turned out it was some CUPS log or something, and I simply redirected that one (unimportant) log to a (real) RAM disk.

But here I'm not sure.

I tried the following:

ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp

but nothing is changing there.

Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6Gb/s) and I've already installed and compiled from big sources (Emacs) without issue, so I don't think the system is bad.

(HD is a Seagate Barracuda 500GB)

Mat , Jul 27, 2012 at 6:03

Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

Cedric Martin , Jul 27, 2012 at 7:02

@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

camh , Jul 27, 2012 at 9:48

Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

mnmnc , Jul 27, 2012 at 8:27

Did you try to examine what a program like iotop shows? It will tell you exactly which processes are currently writing to the disk.

example output:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
    6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
    7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
    8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
 1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
   10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]

Cedric Martin , Aug 2, 2012 at 15:56

thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

ndemou , Jun 20, 2016 at 15:32

I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

scai , Jul 27, 2012 at 10:48

You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.
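
On kernels that still support block_dump, a minimal session looks like the sketch below; reading the messages with dmesg instead of syslog avoids the feedback loop pointed out in the next comment:

# echo 1 > /proc/sys/vm/block_dump
# dmesg | tail     # look for lines such as: kjournald(519): WRITE block 190663976 on sda8
# echo 0 > /proc/sys/vm/block_dump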

dan3 , Jul 15, 2013 at 8:32

It is absolutely crazy to leave syslogging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

scai , Jul 16, 2013 at 6:32

You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

dan3 , Jul 16, 2013 at 7:22

I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

scai , Jul 16, 2013 at 10:50

I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

Volker Siegel , Apr 16, 2014 at 22:57

I would assume there is general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

Gilles , Jul 28, 2012 at 1:34

Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem, you can use the audit subsystem (install the auditd package). Put a watch on the sync call and its friends:
auditctl -S sync -S fsync -S fdatasync -a exit,always

Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.
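
As a hedged illustration of that convention, an /etc/syslog.conf fragment might look like this; only the first entry is synced to disk after every message:

auth.*           /var/log/auth.log
mail.info       -/var/log/mail.log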

Gilles , Mar 23 at 18:24

@StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

cas , Jul 27, 2012 at 9:40

It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

for disk in /dev/sd? ; do
  # the glob already expands to full /dev/sdX paths
  /sbin/hdparm -q -S 0 "$disk"
done

Cedric Martin , Aug 2, 2012 at 16:03

that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

cas , Aug 2, 2012 at 21:42

IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

Micheal Johnson , Mar 12, 2016 at 20:48

It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

Micheal Johnson , Mar 12, 2016 at 20:51

Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51


I just found that s.m.a.r.t was causing an external USB disk to spin up again and again on my raspberry pi. Although SMART is generally a good thing, I decided to disable it again and since then it seems that unwanted disk activity has stopped

[Nov 08, 2018] Determining what process is bound to a port

Mar 14, 2011 | unix.stackexchange.com
I know that using the command:
lsof -i TCP

(or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful, say, if I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.

Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

Cakemox , Mar 14, 2011 at 20:48

netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX.) Add -t if you want TCP only.
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named

etc.

xxx , Mar 14, 2011 at 21:01

Cool, thanks. Looks like that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

Rich Homolka , Mar 15, 2011 at 19:56

netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

Jonathan , Aug 26, 2014 at 18:50

As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

sudo , May 25, 2017 at 2:24

What about for SCTP? – sudo May 25 '17 at 2:24

frielp , Mar 15, 2011 at 13:33

On AIX, netstat & rmsock can be used to determine process binding:
[root@aix] netstat -Ana|grep LISTEN|grep 80
f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN

# Port 80 maps to f1000700025de3b0 above, so we type:
[root@aix] rmsock f1000700025de3b0 tcpcb
The socket 0x25de008 is being held by process 499790 (java).

Olivier Dulac , Sep 18, 2013 at 4:05

Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt to remove it) ? – Olivier Dulac Sep 18 '13 at 4:05

Vitor Py , Sep 26, 2013 at 14:18

@OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

Olivier Dulac , Sep 26, 2013 at 16:00

@vitor-braga: Ah thx! I thought it was trying to remove it but just reported which process holds it when it couldn't remove it. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

frielp , Mar 15, 2011 at 13:27

Another tool available on Linux is ss . From the ss man page on Fedora:
NAME
       ss - another utility to investigate sockets
SYNOPSIS
       ss [options] [ FILTER ]
DESCRIPTION
       ss is used to dump socket statistics. It allows showing information 
       similar to netstat. It can display more TCP and state informations  
       than other tools.

Example output below - the final column shows the process binding:

[root@box] ss -ap
State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))
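
On current distributions a commonly used short form is the following: -l selects listening sockets, -t restricts to TCP, -n shows numeric ports, and -p adds the owning process (this last generally requires root):

# ss -ltnp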

Eugen Constantin Dinca , Mar 14, 2011 at 23:47

For Solaris you can use pfiles and then grep by sockname: or port: .

A sample (from here ):

pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'

rickumali , May 8, 2011 at 14:40

I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!

Gilles , Oct 8, 2015 at 21:04

In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45

[Oct 16, 2018] How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command by Prakash Subramanian

Oct 15, 2018 | www.2daygeek.com
It's an important topic for Linux admins (such a wonderful topic), so everyone must be aware of it and practice how to use it efficiently.

In Linux, whenever we install any package which has services or daemons, the corresponding "init & systemd" scripts are added by default, but they are not enabled.

Hence, we need to enable or disable services manually when required. Three major init systems are available in Linux; all are well known and still in use.

What is init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.

It holds a process id (PID) of 1 and runs continuously in the background until the system is shut down.

Init looks at the /etc/inittab file to decide the Linux run level, then it starts all other processes & applications in the background according to the run level.

The BIOS, MBR, GRUB and kernel stages run before the init process as part of the Linux boot process.

Below are the available run levels for Linux (there are seven runlevels, from zero to six); on Red Hat-style systems they are:

  0 - halt
  1 - single-user mode
  2 - multi-user mode, without NFS
  3 - full multi-user mode (text)
  4 - unused / user-definable
  5 - full multi-user mode with X11 (graphical)
  6 - reboot

The three init systems described below are widely used in Linux.

What is System V (Sys V)?

System V (Sys V) is one of the first and the traditional init system for Unix-like operating systems. init is the first process started by the kernel during system boot, and it is the parent process of everything.

Most Linux distributions initially used the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address design limitations in the standard versions, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.

What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.

It was used in Ubuntu from 9.10 through Ubuntu 14.10 and in RHEL 6 based systems; after that, they were replaced with systemd.

What is systemd?

Systemd is a new init system and system manager which has been adopted by all the major Linux distributions in place of the traditional SysV init systems.

systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.

It is the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is a command-line utility and the primary tool to manage systemd daemons/services (start, restart, stop, enable, disable, reload & status).

systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups; you can see the system hierarchy by exploring /sys/fs/cgroup/systemd .

How to Enable or Disable Services on Boot Using the chkconfig Command?

The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current settings.

It also allows us to enable or disable a service at boot. Make sure you have superuser privileges (either root or sudo) to use this command.

All the service scripts are located in /etc/rc.d/init.d .

How to List All Services in Run Levels

The --list parameter displays all the services along with their current status (in which run levels each service is enabled or disabled).

# chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrt-ccpp          0:off    1:off    2:off    3:on    4:off    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
.
.

How to Check the Status of a Specific Service

If you would like to see the status of a particular service across run levels, use the same format and grep for the required service, as shown below.

In this case, we are going to check the status of the auditd service:
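
Based on the --list output shown above, the command and its output would look like:

# chkconfig --list | grep auditd
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off

On systemd-based machines, a rough equivalent is systemctl list-unit-files --type=service | grep auditd .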

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. Read the details. – rm 3 months ago


Novell acquired SUSE in 2003 for $210 million – asoc 4 months ago


"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Oct 14, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Sep 10, 2018] Find Version of Currently Running Ubuntu

Looks like lsb_release is limited to Ubuntu and Debian...
Sep 10, 2018 | www.mysysad.com
lsb_release -a
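
Typical output looks like the following; the values will of course reflect your own installation:

No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.5 LTS
Release:	16.04
Codename:	xenial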

Here is the syntax to determine which kernel version you are currently running:

uname -a
uname -r

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements

Privileged access to the system for install, normal access for build.

Difficulty

MEDIUM

Introduction

One of the core features of any Linux system is that it is built for automation. If a task may need to be executed more than once - even with some part of it changing on the next run - a sysadmin is provided with countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typo errors, or only saving some keystrokes) to complex scripted systems where tasks run from cron at specified times, interacting with each other, working with the result of another script, maybe controlled by a central management system etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on a system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with a minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature and complete the task it was originally written for as well. Now you have two versions of the script: the first is on the first two systems, the second is on the third system.

You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall coding into some version, but which one? And on which systems is it present?

On RPM based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may provide nothing but the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh , to provide a way for all systems to have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the minor and major version of the build machine should be the same as those of the systems the package is to be deployed on, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can just set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash ), so we will build noarch packages, which stands for "not architecture dependent"; we'll also not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm , and on any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment.

Setting up the building environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi . The previously installed rpm-build package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec :



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case the two shell scripts. Let's create the directory for the sources (called as the package name appended with the main version):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to add the appropriate rights to the files in the source - in our case, execution right:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild -bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges):
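
The install command itself appeared only in the original screenshot; on RHEL 7, a typical invocation (assuming the package path produced by the build above) would be:

# yum install rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm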



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. To do so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users that pullnews.sh should print another line on execution; this feature would save the whole company. We need to build another version of the package, as we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change version, only release (and so the Source0 reference will be still valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much in the package itself, so we simply record the new release, as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher release number as the source of the build:
rpmbuild -bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.
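
As with the install, the exact command was shown in a screenshot; a plain rpm upgrade of the new release (assuming the path from the build above) would be:

# rpm -Uvh rpmbuild/RPMS/noarch/admin-scripts-1-1.noarch.rpm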

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives the ability to replace old stuff needed only in previous versions, and can add custom dependencies or provide some tools or services our other packages rely on. With effort, we can pack nearly any of our custom content into rpm packages, and distribute it across our environment, not only with ease, but with consistency.

[Jan 14, 2018] How to remount filesystem in read write mode under Linux

Jan 14, 2018 | kerneltalks.com

Most of the time, on newly created file systems or NFS filesystems, we see an error like the one below:

root@kerneltalks # touch file1
touch: cannot touch 'file1': Read-only file system

This is because the file system is mounted read-only. In such a scenario you have to mount it in read-write mode. Before that, we will see how to check whether a file system is mounted read-only, and then we will get to how to remount it as a read-write filesystem.


How to check if a file system is read-only

To confirm that a file system is mounted read-only, use the command below:

# cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0

Grep for your mount point in cat /proc/mounts and observe the third column, which shows all options used for the mounted file system. Here ro denotes that the file system is mounted read-only.

You can also get these details using the mount -v command:

root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)

In this output, the file system options are listed in parentheses in the last column.


Re-mount file system in read-write mode

To remount a file system in read-write mode, use the command below:

root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)

Observe that after remounting, the option ro changed to rw . The file system is now mounted read-write and you can write files to it.

Note: It is recommended to fsck the file system before remounting it.

You can check the file system by running fsck on its volume.

root@kerneltalks # df -h /datastore
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G  881M  9.2G   9% /

root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks

Sometimes corrections need to be made to the file system, which requires a reboot to make sure no processes are accessing the file system.

[Jan 14, 2018] Linux yes Command Tutorial for Beginners (with Examples)

Jan 14, 2018 | www.howtoforge.com

You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:

yes | rm -ri test

Q3. Is there any use of yes when it's used alone?

Yes, there's at least one use: to tell how well a computer system handles a high amount of load. The reason is that the tool utilizes 100% of the processor on systems that have a single processor. In case you want to apply this test on a system with multiple processors, you need to run a yes process for each processor, as in the sketch below.
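
A sketch of such a load test, assuming the nproc utility is available to count processors; each yes saturates one core until killed:

for i in $(seq "$(nproc)"); do yes > /dev/null & done
# watch the load with top or uptime, then clean up:
killall yes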

[Jan 14, 2018] Working with Vim Editor Advanced concepts

Jan 14, 2018 | linuxtechlab.com

Opening multiple files with VI/VIM editor

To open multiple files, the command is the same as for a single file; we just add the names of the other files as well.

$ vi file1 file2 file3

Now, to move to the next file, we can use

$ :n

or we can also use

$ :e filename

Run external commands inside the editor

We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command Mode if in Insert Mode, then use the BANG, i.e. '!', followed by the command to run. The syntax for running a command is:

$ :! command

An example for this would be

$ :! df -H

Searching for a pattern

To search for a word or pattern in the text file, we use the following two commands in command mode:

$ /search_pattern (searches forward, towards the end of the file)

$ ?search_pattern (searches backward, towards the beginning of the file)

Both of these commands are used for the same purpose, the only difference being the direction in which they search.

Searching & replacing a pattern

We might be required to search & replace a word or a pattern in our text files. Rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a command from command mode to replace the word automatically. The syntax for search & replace is:

$ :%s/pattern_to_be_found/new_pattern/g

(The leading % applies the substitution to the whole file; without it, :s operates only on the current line.)

Suppose we want to find the word "alpha" & replace it with the word "beta"; the command would be

$ :%s/alpha/beta/g

If we want to replace only the first occurrence of the word "alpha" on each line, we drop the trailing g:

$ :%s/alpha/beta/

Using Set commands

We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command; a representative sample of options follows below.
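For illustration, a few commonly used options as they would appear in a ~/.vimrc (my sample, not the article's original list):

    set number       " show line numbers
    set ignorecase   " make searches case-insensitive
    set hlsearch     " highlight search matches
    set autoindent   " carry indentation over to new lines
    set tabstop=4    " display tab characters as 4 spaces wide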

Some other commands to modify vi editors are,

$ :colorscheme <name> is used to change the color scheme for the editor. (for VIM editor only)

$ :syntax on will turn on the color syntax for .xml, .html files etc. (for VIM editor only)

This completes our tutorial; do mention your queries/questions or suggestions in the comment box below.

[Oct 31, 2017] Shell config subfiles by Tom Ryder

Notable quotes:
"... Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference. ..."
"... Thanks to commenter oylenshpeegul for correcting the syntax of the loops. ..."
Jan 30, 2015 | sanctum.geek.nz

Large shell startup scripts ( .bashrc , .profile ) over about fifty lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome to manage over time, and if you keep your dotfiles under version control it's not terribly helpful to see large sets of commits just editing the one file when it could be more instructive if broken up into files by section.

Given that shell configuration is just shell code, we can apply the source builtin (or the . builtin for POSIX sh ) to load several files at the end of a .bashrc , for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us into using those filenames; we still have to edit the ~/.bashrc file if we want to rename them, or remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the .d directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d :

$ ls ~/.bashrc.d
options.bash
aliases.bash
functions.bash

With a slightly more advanced snippet at the end of ~/.bashrc , we can then load every file with the suffix .bash in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.
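As a sketch of those defensive checks (my variant, not the author's posted code):

    # Load any supplementary scripts, defensively
    if [ -d "$HOME"/.bashrc.d ]; then
        for config in "$HOME"/.bashrc.d/*.bash ; do
            [ -r "$config" ] && source "$config"
        done
        unset -v config
    fi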

The same method can be applied with .profile to load all scripts with the suffix .sh in ~/.profile.d , if we want to write in POSIX sh , with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your .bashrc file.

Here's my implementation of the above method, for both .bashrc and .profile :

Thanks to commenter oylenshpeegul for correcting the syntax of the loops.

[Oct 27, 2017] Neat trick of using su command for killing all processes for a particular user

Oct 27, 2017 | unix.stackexchange.com

If you pass -1 as the process ID argument to either the kill shell command or the kill C function , then the signal is sent to all the processes it can reach, which in practice means all the processes of the user running the kill command or syscall.

su -c 'kill -TERM -1' bob

In C (error checking omitted):

if (fork() == 0) {
    setuid(uid);               /* switch to the target user */
    signal(SIGTERM, SIG_DFL);  /* make sure the child itself uses the default handler */
    kill(-1, SIGTERM);         /* signal every process this UID may signal */
}

[Oct 27, 2017] c - How do I kill all a user's processes using their UID - Unix Linux Stack Exchange

Oct 27, 2017 | unix.stackexchange.com

osgx ,Aug 4, 2011 at 10:07

Use pkill -U UID or pkill -u UID , or the username instead of the UID. Sometimes skill -u USERNAME may work; another tool is killall -u USERNAME .

skill was Linux-specific and is now outdated; pkill is more portable (Linux, Solaris, BSD).

pkill allows both numeric and symbolic UIDs, effective and real http://man7.org/linux/man-pages/man1/pkill.1.html

pkill - ... signal processes based on name and other attributes

    -u, --euid euid,...
         Only match processes whose effective user ID is listed.
         Either the numerical or symbolical value may be used.
    -U, --uid uid,...
         Only match processes whose real user ID is listed.  Either the
         numerical or symbolical value may be used.

Man page of skill says is it allowed only to use username, not user id: http://man7.org/linux/man-pages/man1/skill.1.html

skill, snice ... These tools are obsolete and unportable. The command syntax is poorly defined. Consider using the killall, pkill

  -u, --user user
         The next expression is a username.

killall is not marked as outdated in Linux, but it also will not work with a numeric UID, only a username: http://man7.org/linux/man-pages/man1/killall.1.html

killall - kill processes by name

   -u, --user
         Kill only processes the specified user owns.  Command names
         are optional.

Any utility that finds processes in the Linux/Solaris-style /proc (procfs) uses the full list of processes (doing a readdir of /proc ), iterating over the numeric subdirectories of /proc and checking every process found for a match.

To get the list of users, use getpwent (it returns one user per call).

skill (procps & procps-ng) and killall (psmisc) tools both uses getpwnam library call to parse argument of -u option, and only username will be parsed. pkill (procps & procps-ng) uses both atol and getpwnam to parse -u / -U argument and allow both numeric and textual user specifier.

; ,Aug 4, 2011 at 10:11

pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux specifically. – Lars Wirzenius Aug 4 '11 at 10:11

Petesh ,Aug 4, 2011 at 10:58

to get the list of users use the one liner: getent passwd | awk -F: '{print $1}' – Petesh Aug 4 '11 at 10:58

; ,Aug 4, 2011 at 12:07

what about I give a command like: "kill -ju UID" from C system() call? – user489152 Aug 4 '11 at 12:07

osgx ,Aug 4, 2011 at 15:01

is it an embedded linux? you have no skill, pkill and killall? Even busybox embedded shell has pkill and killall. – osgx Aug 4 '11 at 15:01

michalzuber ,Apr 23, 2015 at 7:47

killall -u USERNAME worked like charm – michalzuber Apr 23 '15 at 7:47

[Oct 19, 2017] Bash One-Liners bashoneliners.com

Oct 19, 2017 | www.bashoneliners.com
Kill a process running on port 8080
 $ lsof -i :8080 | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill

-- by Janos on Sept. 1, 2017, 8:31 p.m.

Make a new folder and cd into it.
 $ mkcd(){ NAME=$1; mkdir -p "$NAME"; cd "$NAME"; }

-- by PrasannaNatarajan on Aug. 3, 2017, 6:49 a.m.

Go up to a particular folder
 $ alias ph='cd ${PWD%/public_html*}/public_html'

-- by Jab2870 on July 18, 2017, 6:07 p.m.

Explanation

I work on a lot of websites and often need to go up to the public_html folder.

This command creates an alias so that however many folders deep I am, I will be taken up to the correct folder.

alias ph='....' : This creates a shortcut so that when command ph is typed, the part between the quotes is executed

cd ... : This changes directory to the directory specified

PWD : This is a global bash variable that contains the current directory

${...%/public_html*} : This removes /public_html and anything after it from the specified string

Finally, /public_html at the end is appended onto the string.

So, to sum up, when ph is run, we ask bash to change the directory to the current working directory with anything after public_html removed.

Open another terminal at current location
 $ $TERMINAL & disown

-- by Jab2870 on July 18, 2017, 3:04 p.m.

Explanation

Opens another terminal window at the current location.

Use Case

I often cd into a directory and decide it would be useful to open another terminal in the same folder, maybe for an editor or something. Previously, I would open the terminal and repeat the CD command.

I have aliased this command to open so I just type open and I get a new terminal already in my desired folder.

The & disown part of the command stops the new terminal from being dependent on the first, meaning that you can still use the first, and if you close the first, the second will remain open.

Limitations

It relies on you having the $TERMINAL global variable set. If you don't have this set you could easily change it to something like the following:

gnome-terminal & disown or konsole & disown

Preserve your fingers from cd ..; cd ..; cd..; cd..;
 $ up(){ DEEP=$1; for i in $(seq 1 ${DEEP:-"1"}); do cd ../; done; }

-- by alireza6677 on June 28, 2017, 5:40 p.m.

Generate a sequence of numbers
 $ echo {01..10}

-- by Elkku on March 1, 2015, 12:04 a.m.

Explanation

This example will print:

01 02 03 04 05 06 07 08 09 10

While the original one-liner is indeed IMHO the canonical way to loop over numbers, the brace expansion syntax of Bash 4.x has some kick-ass features such as correct padding of the number with leading zeros.

Limitations

The zero-padding feature works only in Bash >=4.

Tweet

Related one-liners
Generate a sequence of numbers
 $ for ((i=1; i<=10; ++i)); do echo $i; done

-- by Janos on Nov. 4, 2014, 12:29 p.m.

Explanation

This is similar to seq , but portable. seq does not exist on all systems and is no longer recommended. Other variations to emulate various uses of seq :

# seq 1 2 10
for ((i=1; i<=10; i+=2)); do echo $i; done

# seq -w 5 10
for ((i=5; i<=10; ++i)); do printf '%02d\n' $i; done
Find recent logs that contain the string "Exception"
 $ find . -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$

-- by Janos on July 19, 2014, 7:53 a.m.

Explanation

The find :

  • -name '*.log' -- match files ending with .log
  • -mtime -2 -- match files modified within the last 2 days
  • -exec CMD ARGS \; -- for each file found, execute command, where {} in ARGS will be replaced with the file's path

The grep :

  • -c is to print the count of the matches instead of the matches themselves
  • -H is to print the name of the file, as grep normally won't print it when there is only one filename argument
  • The output lines will be in the format path:count . Files that didn't match "Exception" will still be printed, with 0 as count
  • The second grep filters the output of the first, excluding lines that end with :0 (= the files that didn't contain matches)

Extra tips:

  • Change "Exception" to the typical relevant failure indicator of your application
  • Add -i for grep to make the search case insensitive
  • To make the find match strictly only files, add -type f
  • Schedule this as a periodic job, and pipe the output to a mailer, for example | mailx -s 'error counts' yourmail@example.com
Remove offending key from known_hosts file with one swift move
 $ sed -i 18d .ssh/known_hosts

-- by EvaggelosBalaskas on Jan. 16, 2013, 2:29 p.m.

Explanation

Using sed to remove a specific line.

The -i parameter is to edit the file in-place.

Limitations

This works as posted in GNU sed . In BSD sed , the -i flag requires a parameter to use as the suffix of a backup file. You can set it to empty to not use a backup file:
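For example, with BSD sed that would be:

    sed -i '' 18d .ssh/known_hosts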

[Oct 16, 2017] Indenting Here-Documents - bash Cookbook

Oct 16, 2017 | www.safaribooksonline.com

Indenting Here-Documents Problem

The here-document is great, but it's messing up your shell script's formatting. You want to be able to indent for readability. Solution

Use <<- and then you can use tab characters (only!) at the beginning of lines to indent this portion of your shell script.

   $ cat myscript.sh
        ...
             grep $1 <<-'EOF'
                lots of data
                can go here
                it's indented with tabs
                to match the script's indenting
                but the leading tabs are
                discarded when read
                EOF
            ls
        ...
        $
Discussion

The hyphen just after the << is enough to tell bash to ignore the leading tab characters. This is for tab characters only and not arbitrary white space. This is especially important with the EOF or any other marker designation. If you have spaces there, it will not recognize the EOF as your ending marker, and the "here" data will continue through to the end of the file (swallowing the rest of your script). Therefore, you may want to always left-justify the EOF (or other marker) just to be safe, and let the formatting go on this one line.

[Oct 16, 2017] Indenting bourne shell here documents

Oct 16, 2017 | prefetch.net

The Bourne shell provides here documents to allow block of data to be passed to a process through STDIN. The typical format for a here document is something similar to this:

command <<ARBITRARY_TAG
data to pass 1
data to pass 2
ARBITRARY_TAG

This will send the data between the ARBITRARY_TAG statements to the standard input of the process. In order for this to work, you need to make sure that the data is not indented. If you indent it for readability, you will get a syntax error similar to the following:

./test: line 12: syntax error: unexpected end of file

To allow your here documents to be indented, you can append a "-" to the end of the redirection strings like so:

if [ "${STRING}" = "SOMETHING" ]
then
        somecommand <<-EOF
        this is a string1
        this is a string2
        this is a string3
        EOF
fi

You will need to use tabs to indent the data, but that is a small price to pay for added readability. Nice!

[Oct 09, 2017] TMOUT - Auto Logout Linux Shell When There Isn't Any Activity by Aaron Kili

Oct 07, 2017 | www.tecmint.com

To enable automatic user logout, we will be using the TMOUT shell variable, which terminates a user's login shell in case there is no activity for a given number of seconds that you can specify.

To enable this globally (system-wide for all users), set the above variable in the /etc/profile shell initialization file.
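A minimal sketch of such a global setting (the 300-second value and the readonly guard, which keeps users from simply unsetting the variable, are my choices):

    # /etc/profile
    TMOUT=300
    readonly TMOUT
    export TMOUT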

[Jul 29, 2017] shell - How does this bash code detect an interactive session - Stack Overflow

Notable quotes:
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Jul 29, 2017 | stackoverflow.com

user1284631 , asked Jun 5 '13 at 8:44

Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) that put at the top of its .bashrc file this:
[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in interactive session.

Now, that works. However, I am not able to figure how it works. Could you enlighten me?

According to this answer , the $- is the current options set for the shell and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i} .

blue , answered Jun 5 '13 at 8:49

${parameter#word}

${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.

Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html

So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $- , then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.

Example

VAR "baioasd" 
echo ${VAR#*i};

outputs oasd .

In your case

If the shell is interactive, $- will contain the letter 'i', so when you strip the variable $- of the pattern *i you get a string that is different from the original $- ( [[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i', so the pattern *i does not match anything in $- and [[ ${-#*i} != $- ]] yields false, and the return statement is executed.

perreal , answered Jun 5 '13 at 8:53

See this :

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive

Your substitution removes the string up to, and including the i and tests if the substituted version is equal to the original string. They will be different if there is i in the ${-} .

[Jul 25, 2017] Local variables

Notable quotes:
"... completely local and separate ..."
Jul 25, 2017 | wiki.bash-hackers.org

local to a function:

myfunc() {
  local var=VALUE

  # alternative, only when used INSIDE a function
  declare var=VALUE

  ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:

foo=external

printvalue() {
  local foo=internal

  echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo

[Jul 25, 2017] Environment variables

Notable quotes:
"... environment variables ..."
"... including the environment variables ..."
Jul 25, 2017 | wiki.bash-hackers.org

The environment space is not directly related to the topic about scope, but it's worth mentioning.

Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable
Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree !
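A quick demonstration of that copy semantics (a sketch; the child shell changes only its own copy):

    export myvariable="Hello world."
    bash -c 'myvariable="changed in child"'   # modifies the child's copy only
    echo "$myvariable"                        # still prints: Hello world.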


1) under specific circumstances, also by the shell itself

[Jul 25, 2017] Block commenting

Jul 25, 2017 | wiki.bash-hackers.org

: (colon) and input redirection. The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested"
mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document-tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more

[Jul 25, 2017] Doing specific tasks: concepts, methods, ideas

Notable quotes:
"... under construction! ..."
Jul 25, 2017 | wiki.bash-hackers.org

[Feb 14, 2017] Ms Dos style aliases for linux

I think alias ipconfig='ifconfig' is really useful for people who work with Linux from a Windows PC desktop/laptop.
Feb 14, 2017 | bash.cyberciti.biz
# MS-DOS / XP cmd like stuff
alias edit="$VISUAL"
alias copy='cp'
alias cls='clear'
alias del='rm'
alias dir='ls'
alias md='mkdir'
alias move='mv'
alias rd='rmdir'
alias ren='mv'
alias ipconfig='ifconfig'

[Feb 04, 2017] Quickly find differences between two directories

You may be surprised, but the GNU diff used in Linux understands the situation when the two arguments are directories and behaves accordingly.
Feb 04, 2017 | www.cyberciti.biz

The diff command compares files line by line. It can also compare two directories:

# Compare two folders using diff ##
diff /etc /tmp/etc_old  
Rafal Matczak September 29, 2015, 7:36 am
Quickly find differences between two directories
And quicker:
 diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})  
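GNU diff can also recurse into subdirectories; combined with -q it reports only which files differ:

    diff -rq /etc /tmp/etc_old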

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey continues with Linux and Unix shell, I made a few mistakes. I accidentally deleted /tmp folder. To restore it all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
 

[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command's destination directories, so it serves much like a "home for directories". The danger is in creating too complex a CDPATH; often a single directory works best. For example export CDPATH=/srv/www/public_html . Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH , as described in man bash :

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):
CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents 
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...
[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars

If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type

cd d

to change to my Desktop directory.

I put the shopt command and the various export commands in my .bashrc file.

[Feb 04, 2017] Copy file into multiple directories

Feb 04, 2017 | www.cyberciti.biz
Instead of running:
cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy file into multiple dirs:

echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file
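If you do this often, a small wrapper function is handy (my sketch; the name cpm is arbitrary):

    # copy one file into every directory given as an argument
    cpm() {
        local file=$1; shift
        local d
        for d in "$@"; do cp -v -- "$file" "$d"; done
    }
    # usage: cpm /path/to/file /usr/dir1 /var/dir2 /nas/dir3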

[Feb 04, 2017] 20 Unix Command Line Tricks – Part I

Feb 04, 2017 | www.cyberciti.biz
Locking a directory

For privacy of my data I wanted to lock down /downloads on my file server. So I ran:

chmod 0000 /downloads

The root user still has access, but for other users the ls and cd commands will not work. To go back:

chmod 0755 /downloads

Clear gibberish all over the screen

Just type:

reset

Becoming human

Pass the -h or -H (and other options) command line option to GNU or BSD utilities to get output of commands like ls, df, du, in human-understandable formats:

ls -lh   # print sizes in human readable format (e.g., 1K 234M 2G)
df -h
df -k    # show output in bytes, KB, MB, or GB
free -b
free -k
free -m
free -g  # print sizes in human readable format (e.g., 1K 234M 2G)
du -h
# get file system perms in human readable format
stat -c %A /boot
# compare human readable numbers
sort -h file
# display the CPU information in human readable format on a Linux
lscpu
lscpu -e
lscpu -e=cpu,node
# Show the size of each file but in a more human readable way
tree -h
tree -h /boot

Show information about known users in the Linux based system

Just type:

## linux version ##
lslogins

## BSD version ##
logins

Sample outputs:

UID USER      PWD-LOCK PWD-DENY LAST-LOGIN GECOS
  0 root             0        0   22:37:59 root
  1 bin              0        1            bin
  2 daemon           0        1            daemon
  3 adm              0        1            adm
  4 lp               0        1            lp
  5 sync             0        1            sync
  6 shutdown         0        1 2014-Dec17 shutdown
  7 halt             0        1            halt
  8 mail             0        1            mail
 10 uucp             0        1            uucp
 11 operator         0        1            operator
 12 games            0        1            games
 13 gopher           0        1            gopher
 14 ftp              0        1            FTP User
 27 mysql            0        1            MySQL Server
 38 ntp              0        1            
 48 apache           0        1            Apache
 68 haldaemon        0        1            HAL daemon
 69 vcsa             0        1            virtual console memory owner
 72 tcpdump          0        1            
 74 sshd             0        1            Privilege-separated SSH
 81 dbus             0        1            System message bus
 89 postfix          0        1            
 99 nobody           0        1            Nobody
173 abrt             0        1            
497 vnstat           0        1            vnStat user
498 nginx            0        1            nginx user
499 saslauth         0        1            "Saslauthd user"
Confused on a top command output?

Seriously, you need to try out htop instead of top:

sudo htop

Want to run the same command again?

Just type !! . For example:

/myhome/dir/script/name arg1 arg2

# To run the same command again
!!

## To run the last command again as root user
sudo !!

The !! repeats the most recent command. To run the most recent command beginning with "foo":

!foo

# Run the most recent command beginning with "service" as root
sudo !service

Use !$ to run a command with the last argument of the most recent command:

# Edit nginx.conf
sudo vi /etc/nginx/nginx.conf

# Test nginx.conf for errors
/sbin/nginx -t -c /etc/nginx/nginx.conf

# After testing a file with "/sbin/nginx -t -c /etc/nginx/nginx.conf", you
# can edit file again with vi
sudo vi !$

Get a reminder when you have to leave

If you need a reminder to leave your terminal, type the following command:

leave +hhmm

Where +hhmm means hh hours and mm minutes from now; leave will remind you when it is time to go.

Home sweet home

Want to go back to the directory you were just in? Run:
cd -
Need to quickly return to your home directory? Enter:
cd
The variable CDPATH defines the search path for the directory containing directories:

export CDPATH=/var/www:/nas10

Now, instead of typing cd /var/www/html/ I can simply type the following to cd into /var/www/html path:

cd html

Editing a file being viewed with less pager

To edit a file being viewed with the less pager, press v . The file will be opened for editing under $EDITOR:

less *.c
less foo.html
## Press v to edit file ##
## Quit from editor and you would return to the less pager again ##

List all files or directories on your system

To see all of the directories on your system, run:

find / -type d | less

# List all directories in your $HOME
find $HOME -type d -ls | less

To see all of the files, run:

find / -type f | less

# List all files in your $HOME
find $HOME -type f -ls | less

Build directory trees in a single command

You can create directory trees one at a time using mkdir command by passing the -p option:

mkdir -p /jail/{dev,bin,sbin,etc,usr,lib,lib64}
ls -l /jail/

Copy file into multiple directories

Instead of running:

cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy file into multiple dirs:

echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file

Creating a shell function is left as an exercise for the reader

Quickly find differences between two directories

The diff command compares files line by line. It can also compare two directories:

ls -l /tmp/r
ls -l /tmp/s

# Compare two folders using diff ##
diff /tmp/r/ /tmp/s/

[Jan 26, 2012] A last-resort trick to recover your machine from the brink of death


Controlling runaway processes on Linux • Rudd-O.com

Sometimes, buggy memory hogs can choke your machine. Here, two tricks: one to recover from a memory choke, another to prevent memory chokes forever.

Misbehaving application frozen?

Has an application stopped responding on your machine? Well, as long as your machine is still responsive, you can use these tricks to nuke it safely.

On KDE

Hit Ctrl+Alt+Esc. Your mouse cursor will change - from an arrow to a small skull/crossbones combo. Hit the stubborn application with the skull.

It'll die.

On GNOME

Add a new launcher to your panel and set it so it executes the xkill application. Now, when an application starts stupidifying itself, just hit the launcher you created (the cursor will change to a square-type "target"), then hit the application with your mouse cursor.

It'll die.

Application choking your machine?

Of course, if your machine is already too slow to use these tricks, they won't help you much. Here's why, sometimes, your machine ditches itself into a molasses pit, and how to rescue it from certain death.

Memory and pathologies

…almost all applications have this pathological idea (encouraged by the operating system) that memory is a limitless resource…

You see, almost all applications have this pathological idea (encouraged by the operating system) that memory is a limitless resource - and when they go overboard, the operating system just dips into the hard disk to simulate memory.

Sometimes, bugs in an application do cause them to go haywire, requesting memory like there's no end. Once your machine goes down that lane, there isn't a simple way to recover it, short of powering it off forcibly.

A fortress-like operating system like Linux isn't supposed to die, and yet it does. However, the fortress I'm so proud of can, and does, provide you with effective measures against premature deaths of these sort.

A last-resort trick to recover your machine from the brink of death

Have you heard of the magic SysRq key?

No?

Well, it's magic. It's directly shunted to the Linux kernel. You press ALT, press the PrintScreen (SysRq) key, and while holding them both down, press one of the letters (each letter has a different function assigned to it).

It's not normally enabled, but you can enable it by putting

kernel.sysrq = 1

in your machine's /etc/sysctl.conf file. Oh, and then rebooting.
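If you'd rather not wait for a reboot, the same switch can be flipped at runtime (a sketch; effective immediately, but not persistent across reboots):

    # enable the magic SysRq key on the running kernel
    sysctl -w kernel.sysrq=1
    # or, equivalently:
    echo 1 > /proc/sys/kernel/sysrq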

Here's why it's useful.

So, what does SysRq do, really?

Hit Alt+SysRq+K - the windowing system will restart. More effective than Ctrl+Alt+Backspace.

Suppose a GUI application you just opened is starting to swallow massive amounts of RAM. Like, one gigabyte, perhaps? Your machine is locking up, and you feel the mouse start to stutter at first, then freeze completely - while the hard disk light in your computer's front panel is lighting up frantically, gasping for air (aka memory).

You now have three choices:

  1. Sit it out and let the Linux kernel detect this situation and kill the abusive application. This can take way more than 15 minutes.
  2. Press the computer's power off button for 5 seconds. This shuts your machine down uncleanly and leads to data loss.
  3. Hit the magic SysRq combo: Alt+SysRq+K.

Should you choose option 3, the graphical subsystem dies immediately. That's because Alt+SysRq+K kills any application that holds the keyboard open - and, you guessed it, the graphical subsystem is holding it open. This premature death of the GUI causes all GUI applications to die in a cascade, including the abusive application.

Two to ten seconds later, you will be presented with a login prompt.

Sure, you lost changes to all files you haven't saved, and all the tabs in your Web browser… but at least you didn't have to reboot uncleanly, did you?

But, Ctrl+Alt+Backspace?

Once the machine is in a critically heavy memory crunch, Ctrl+Alt+Backspace will take too much time to work, because the windowing system will be pressed for memory to even execute. The magic SysRq key has the luxury of not having that problem ;-) - if Ctrl+Alt+Backspace were an IV drip, SysRq would be like a central line.

Why this key combination exists

The reason this key combo exists is simple. Alt+SysRq+K is called SAK (System Attention Key). It was designed back in the days of, um, yore, to kill all applications snooping on the keyboard - so administrators wishing to log in could safely do so without anyone sniffing their passwords.

As a preventative security measure, it sure works against keyloggers and other malware that may be snooping on your keyboard, may I say. And it most definitely works against your run-of-the-mill temporary memory shortage ;-).

Advantages/disadvantages

Well, the major disadvantages are:

But, on a memory crunch, this beats rebooting hands-down. And that's the biggest advantage.

A definitive cure to runaway applications

Become an administrator (root) and use your favorite text editor to open the file /etc/security/limits.conf:

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit

Add the following line anywhere on the file:

*      soft      as      512000

limits.conf is a little-known godsend, and a definite requirement to keep large computing farms or terminal servers under control.

You will need to restart any sessions (graphical/terminal) for this change to take effect. Additionally, you will need to restart the graphical session manager (GDM or KDM).

Of course, 512000 is just my favorite setting - but that's because I have the privilege of using multi-gigabyte memory sticks on my machine. If you have much less memory than me, you will want to tune this, while keeping in mind that modern applications can and do take more than 300 MB under exceptional circumstances.

Like, for example, Firefox with 50 tabs/windows open. Or Evolution managing 2 GB of e-mail. Hey, both circumstances happen to me, but I guess I'm an oddball.

What this "magic incantation" in limits.conf does

limits.conf is the file that lets you set per-user/process/system resource limits. There are several limits to choose from.

One of them is the address space (as). The address space refers to the maximum amount of RAM (in kilobytes) that a process may request from the operating system. Any requests above the configured limit are simply refused.

Advantages/disadvantages

The fortunate side effect of this recipe is that the majority of applications will disappear and die a horrible death if they request memory indiscriminately. Which beats having to turn the machine off.

The unfortunate side effect of refused requests for more memory is that the majority of applications will disappear and die a horrible death if they request memory indiscriminately. That's because they don't know how to cope with more memory. Until software developers actually implement error handling for lack of memory, this will be a small nuisance - hey, save often and you'll be safe ;-).

Temporarily disabling memory limits for picky applications

Another disadvantage: certain applications don't run when a limit is set. WINE is among those applications. However, you can use a terminal window to disable this limit temporarily (within any applications started from the terminal):

[rudd-o@andrea /media/windows/UnrealTournament]$ ulimit -v unlimited
[rudd-o@andrea /media/windows/UnrealTournament]$ wine System/UnrealTournament.exe

That's it for this tutorial

So, the infrequent dead application, SAKing your computer, or powering it off… what do you prefer? Me… well, once I upgraded my new "desktop" machine (1U Dell PowerEdge SC1425) to 2 GB, I haven't looked back.

But my old machine did thank me a lot for keeping it off memory crunches :-). It's now safe in PVR heaven, time-shifting TV shows for my pleasure.

[Aug 09, 2011] Creating a Linux ramdisk

While performing some testing a few weeks ago, I needed to create a ramdisk on one of my redhat AS 4.0 servers. I knew Solaris supported tmpfs, and after a bit of googling was surprised to find that Linux supports the tmpfs pseudo-file system as well. To create a ramdisk on a Linux host, you first need to find a suitable place to mount the tmpfs file system. For my tests, I used mkdir to create a directory called /var/ramdisk:

$ mkdir /var/ramdisk

Once the mount point is identified, you can use the mount command to mount a tmpfs file system on top of that mount point:

$ mount -t tmpfs none /var/ramdisk -o size=28m

Now each time you access /var/ramdisk, your reads and writes will be coming directly from memory. Nice!
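To recreate the ramdisk at boot, an /etc/fstab entry along these lines should work (a sketch; adjust the size to taste):

    none  /var/ramdisk  tmpfs  size=28m  0 0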

[Dec 17, 2009] Top Ten Things I Miss in Windows

Thoughts on Technology

Klipper/Copy & Paste Manager

I use this one a lot when I am either coding or writing a research paper for school. More often than not I find I have copied something new only to discover I need to paste a link or block of code again from two copies back. Having a tray icon where I can recall the last ten copies or so is mighty useful.

Gnome-Do

Most anyone who uses the computer in their everyday work will tell you that fewer mouse clicks means faster speed and thus (typically) more productivity. Gnome-Do is a program that allows you to cut down on mouse clicks (so long as you know what program you are looking to load). The gist of what it does is this: you assign a series of hot keys to call up the search bar (personally I use control+alt+space) and then you start typing in the name of an application or folder you want to open and it will start searching for it - once the correct thing is displayed all you need to do is tap enter to load it up. The best part is that it remembers which programs you use most often. Meaning that most times you only need to type the first letter or two of a commonly used application for it to find the one you are looking for.

[Aug 7, 2009] atd daemon is not running on Suse 10 SP2 by default, so at commands fail.

It needs to be manually enabled via chkconfig and started with the service command to ensure consistency in behavior with Solaris, AIX and HP-UX.
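Something along these lines on a SysV-init system (a sketch):

    chkconfig atd on     # enable atd in the default runlevels
    service atd start    # start it right away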

[Aug 5, 2009] 10 Essential UNIX-Linux Command Cheat Sheets TECH SOURCE FROM BOHOL

Linux has become so idiot proof nowadays that there is less and less need to use the command line. However, the commands and shell scripts have remained powerful for advanced users to utilize to help them do complicated tasks quickly and efficiently.

To those of you who are aspiring to become a UNIX/Linux guru, you have to know loads of commands and learn how to effectively use them. But there is really no need to memorize everything since there are plenty of cheat sheets available on the web and on books. To spare you from the hassles of searching, I have here a collection of 10 essential UNIX/Linux cheat sheets that can greatly help you on your quest for mastery...

[Aug 4, 2009] Tech Tip View Config Files Without Comments Linux Journal

I've been using this grep invocation for years to trim comments out of config files. Comments are great but can get in your way if you just want to see the currently running configuration. I've found files hundreds of lines long which had fewer than ten active configuration lines; it's really hard to get an overview of what's going on when you have to wade through hundreds of lines of comments.

$ grep ^[^#] /etc/ntp.conf

The regex ^[^#] matches the first character of any line, as long as that character is not a #. Because blank lines don't have a first character, they're not matched either, resulting in a nice compact output of just the active configuration lines.
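Note that ^[^#] still lets indented comments through. A variant that also drops indented comment lines and whitespace-only lines (a sketch using extended regular expressions):

    $ grep -Ev '^[[:space:]]*(#|$)' /etc/ntp.conf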

[Aug 3, 2009] 10 super-cool Linux hacks you did not know about

1. Run top in batch mode

2. Write to more than one file at once with tee

3. Unleash the accounting power with pacct

4. Dump utmp and wtmp logs

5. Monitor CPU and disk usage with iostat

6. Monitor memory usage with vmstat

7. Combine the power of iostat and vmstat with dstat

8. Collect, report or save system activity information with sar

9. Create UDP server-client - version 1

10. Configure UDP server-client - version 2

Linux Commando Remap Caps Lock key for virtual console windows

Remap Caps Lock key for virtual console windows

My last blog entry explains how to use xmodmap to remap the Caps Lock key to the Escape key in X. That takes care of the keyboard mapping when you are in X. What about when you are in a virtual console window? You need to follow the steps below. Make sure that you sudo root before you execute the following commands.
  1. Find out the keycode of the key that you want remapped.

    Execute the showkey command as root in a virtual consolde:

    $ showkey
    kb mode was UNICODE
    
    press any key (program terminates after 10s of last keypress)...
    0x9c
    Hit the Caps Lock key, wait 10 seconds (default timeout), and the showkey command will exit on its own.
    $ showkey
    kb mode was UNICODE
    
    press any key (program terminates after 10s of last keypress)...
    0x9c
    0x3a 
    0xba
    The keycode for the Caps Lock key is 0x3a in hex, or 58 in decimal.

  2. Find out the symbolic name (key symbol) of the key that you want to map to.
    You can list all the supported symbolic names by dumpkeys -l and grep for esc:
    $ dumpkeys -l |grep -i esc 
    0x001b Escape
    0x081b Meta_Escape
  3. Remap the keycode 58 to the Escape key symbol.
    $ (echo `dumpkeys |grep -i keymaps`;  \
       echo keycode 58 = Escape)          \
       | loadkeys -
    Thanks to cjwatson who pointed me to prepending the keymaps statement from dumpkeys. The keymaps statement is a shorthand notation defining what key modifiers you are defining with the key. See man keymaps(5) for more info.
To make the new key mapping permanent, you need to put the loadkeys command in a bootup script.

For my Debian Etch system, I put the
(echo `dumpkeys |grep -i keymaps`; echo keycode 58 = Escape) |loadkeys - command in /etc/rc.local.

Remapping Keys Under Linux

To swap caps lock and control:

# Make the Caps Lock key be a Control key:
xmodmap -e "remove lock = Caps_Lock"
xmodmap -e "add control = Caps_Lock"

# Make the Left Control key be a Caps Lock key:
xmodmap -e "remove control = Control_L"
xmodmap -e "add lock = Control_L"

Questions Answered Below

The instructions in this page apply only to Linux in an X environment (like KDE).

Terminology

How These Relate To One Another

Keycodes, keysyms, and modifiers relate in the following way:

keycode → keysym → modifier (optional)

So for example, on my keyboard:

keycode 38 (the 'a' key) → keysym 0x61 (the symbol 'a')

keycode 50 (the left 'shift' key) → keysym 0xffe1 (the action 'the left shift key is down') → the shift modifier

Note that technically, each keycode can be mapped to more than one keysym. The first mapping applies when no modifier is pressed; the second applies when the shift key is pressed. (I haven't figured out how to use the third and fourth yet.) So for example, the second mapping on my 'a' key is:

keycode 38 (the 'a' key) → keysym 0x41 (the symbol 'A')

In other words, when modifier 'shift' is active, my 'a' key generates an 'A' instead of an 'a'.

Viewing Your Settings

Changing Your Settings

Say you want to map the caps lock key to be the control modifier. You have two sensible choices for how to do this:

Caps Lock Key → Caps Lock action → Control Modifier

Caps Lock Key → Control_L action → Control Modifier

To do the first, you need to change the action → modifier mapping. Do this as follows:

xmodmap -e "remove lock = Caps_Lock"
xmodmap -e "add control = Caps_Lock"

To do the second, you need to change the keycode → action mapping, so you'll need to know the keycode of your caps lock key. To find the keycode for your caps lock key use xev, as described above. Mine is 66. So:

xmodmap -e "keycode 66 = Control_L"

Help!

If you mess things up, the simplest way to fix things is to log out of the window manager and log back in.

For More Information

Notes

By David Vespe, April 2006

Remap Caps Lock

The Caps Lock key on most PC keyboards is in the position where the Control key is on many other keyboards, and vice versa. This can make it difficult for programmers to use the "wrong" kind of keyboard.

This page describes how to RemapCapsLock on different keys in different OperatingSystems.

One really stupid thing about PeeCee keyboards is that manufacturers even realized that putting caps-lock on home row was a bad idea because people kept hitting it with the 'a' key. Did they move it? No, that would be too sensible. They carved a little piece of it off to leave a bigger gap. So now if I re-map a standard PC-10* keyboard so that left-control is in a sensible place, it is still harder to use than it should be. :(

Many people (the majority, clearly) feel that the placement of CTRL below the SHIFT key is a better location for it. However, the backspace key is way out of the way -- it would be better if the CAPSLOCK and backspace keys were swapped.


Unix, Console

If you have loadkeys (as you would under Linux), this should do the trick:
 loadkeys /usr/share/keymaps/i386/qwerty/emacs2.kmap.gz
To reset to the defaults (you may have to switch to another tty and back to undo ctrl-lock):
 loadkeys -d
Unix, X

Under Redhat 8.0, just enable the following line in /etc/X11/XF86Config

        Option      "XkbOptions"        "ctrl:swapcaps"
Replace "swapcaps" with "nocaps" to turn both keys into "Control."

With X, there are at least 2 different ways to remap the keys. One is using xmodmap. For example, man xmodmap shows how to swap the left control key and the CapsLock key:

 ! Swap Caps_Lock and Control_L
 !
 remove Lock = Caps_Lock
 remove Control = Control_L
 keysym Control_L = Caps_Lock
 keysym Caps_Lock = Control_L
 add Lock = Caps_Lock
 add Control = Control_L                 
Many people don't want a CapsLock key at all. They can change the CapsLock key to a Control key by using the following lines in xmodmap:
 clear Lock
 keycode 0x7e = Control_R
 add Control = Control_R 
Maybe you have to change the keycode 0x7e. You can find the keycodes with xev. Furthermore, this only works if you don't have a right control key. I hope somebody has a solution which does not have this restriction.

This solution might be the easiest one. If you do not have a problem owning a dead key in your keyboard you might disable CapsLock at all:

 "remove lock = Caps_Lock"  (or just: "clear lock")
A better solution might be this sequence, which is keycode independent and does not remove existing control keys:
 remove Lock = Caps_Lock
 remove Control = Control_L
 keysym Caps_Lock = Control_L
 add Lock = Caps_Lock
 add Control = Control_L
Now, you can use another solution which uses xkb. For that, you will have to find the sybols directory on your unix system. There, you add a file which might be called 'ctrl' containing the following:
 // eliminate the caps lock key completely (replace with control)
 partial modifier_keys
 xkb_symbols "nocaps" {
     key <CAPS>  {  symbols[Group1]= [ Control_L ] };
     modifier_map  Control { <CAPS>, <LCTL> };
 };
This eliminates the caps lock key if included in a keymap. We can do this by changing the file en_US:
 xkb_symbols "pc101" {
     include "ctrl(nocaps)"
     key <RALT> { [ Mode_switch,  Multi_key ] };
     augment "us(pc101)"
     include "iso9995-3(basic101)"
     modifier_map Mod3 { Mode_switch };
 };

You can then add the keyboard using a line like:
 /usr/X11R6/lib/X11/xkb/xkbcomp -w 1 -R/usr/X11R6/lib/X11/xkb -xkm -m en_US keymap/xfree86 0:0
Now, unfortunately there are probably errors in the text above. Please correct them and make it work for systems other than RedHat Linux.

Red Hat Knowledgebase How can I find information on the maximum amount of memory my system can handle

The dmidecode command can be used to display information from the systems' BIOS that includes the maximum memory that the BIOS will support. This information is displayed by dmidecode as type 16 (Physical Memory Array) which can be filtered with the command dmidecode -t 16.

For instance, the following output shows a system that can support a maximum of 16GB of RAM.

Handle 0x0032, DMI type 16, 15 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: None
	Maximum Capacity: 16 GB
	Error Information Handle: Not Provided
	Number Of Devices: 4

[Jan 27, 2009] Linux Keyboard Shortcuts Safe Way to Exit During System Freezes

Jan 26, 2009 | Linux Today

"Alt + SysR + K
Kill all processes (including X), which are running on the currently active virtual console.

"Alt + SysRq + E
Send the TERM signal to all running processes except init, asking them to exit."

Magic Tricks With the Sysreq Key(Dec 10, 2008)
Sounds of Crashing Hard Drives(Nov 24, 2008)
Fix Unresponsive or Frozen Linux Computers Using Shortcuts(Nov 11, 2008)
Linux Kernel Magic SysRq Keys in openSUSE for Crash Recovery(Sep 29, 2008)
Rebooting the Magic Way(Aug 22, 2008)
Fix a Frozen System with the Magic SysRq Keys(Sep 17, 2007)

Tips & Tricks

partprobe - inform the OS of partition table changes

One of the major benefits of using Red Hat Enterprise Linux is that once the operating system is up and running, it tends to stay that way. This also holds true when it comes to reconfiguring a system; mostly. One Achilles heel for Linux, until the past couple of years, has been that the kernel only reads partition table information at system initialization, necessitating a reboot any time you wished to add new disk partitions to a running system.

The good news, however, is that disk re-partitioning can now also be handled 'on-the-fly' thanks to the 'partprobe' command, which is part of the 'parted' package.

Using 'partprobe' couldn't be simpler. Any time you use 'fdisk', 'parted', or any other partitioning utility to modify the partition table of a drive, run 'partprobe' after you exit the partitioning utility, and 'partprobe' will let the kernel know about the modified partition table. If you have several disk drives and want 'partprobe' to scan one specific drive, run 'partprobe <device_node>'.

Of course, given a particular hardware configuration, shutting down your system to add hardware may still be unavoidable, but it's nice to be given the option of not having to do so, and 'partprobe' fills that niche quite nicely.

partprobe [-d] [-s] [devices...]

DESCRIPTION

This manual page documents briefly the partprobe command.

partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

OPTIONS

This program uses short UNIX style options.
-d
Don't update the kernel.
-s
Show a summary of devices and their partitions.
-h
Show summary of options.
-v
Show version of program.
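Putting this together, a typical re-partitioning session might look like this (a sketch; /dev/sdb stands in for your actual device node):

 fdisk /dev/sdb          # create or modify partitions, write the table, and quit
 partprobe /dev/sdb      # tell the kernel to re-read the modified partition table
 cat /proc/partitions    # confirm that the new partitions are visible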

CENTOS/RHEL 5 CONFIGURATION TIPS

Yum & Repositories
I noticed this issue with both CentOS 4 and 5: Yum will often choose bad mirrors from the mirrorlist file, for example overseas servers, even when an official NZ server exists. In some cases, the servers it has chosen are horribly slow.

You will probably find that you get better download speeds by editing /etc/yum.repos.d/CentOS-Base.repo and commenting out the mirrorlist lines and setting the baseurl line to point to your preferred local mirror.
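For example, the [base] section of CentOS-Base.repo might end up looking like this (a sketch; the baseurl host is a placeholder for your preferred local mirror):

 [base]
 name=CentOS-$releasever - Base
 #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
 baseurl=http://mirror.example.nz/centos/$releasever/os/$basearch/
 gpgcheck=1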

Yum-updatesd
CentOS 5 has a new daemon called yum-updatesd, which replaces the old cron job yum update scripts. This script will check frequently for updates, and can be configured to download and/or install them.

However, this daemon is bad for a server, since it doesn't run at a fixed time - I really don't want my server downloading and updating software during the busiest time of day thank-you-very-much!

So, it's bad for a server. Let's disable it with:

service yum-updatesd stop
chkconfig --level 2345 yum-updatesd off

Plus, I don't like the idea of having a full-blown daemon where a simple cron job will do the trick perfectly well - it seems like overkill (although yum-updatesd does appear to have some useful features, like D-Bus integration for desktop users).

So, I replace it with my favorite cronjob script approach, by running the following (as root of course):

cat << "EOF" > /etc/cron.daily/yumupdate
 #!/bin/sh
 # install any yum updates
/usr/bin/yum -R 10 -e 0 -d 1 -y update yum > /var/log/yum.cron.log 2>&1
/usr/bin/yum -R 120 -e 0 -d 1 -y update  >> /var/log/yum.cron.log 2>&1
if [ -s /var/log/yum.cron.log ]; then
        /bin/cat /var/log/yum.cron.log | mail root -s "Yum update information" 2>&1
fi
EOF

and if you want to clear up the package cache every week:
cat << "EOF" > /etc/cron.weekly/yumclean
 #!/bin/sh
 # remove downloaded packages
/usr/bin/yum -e 0 -d 0 clean packages
EOF


This will install two scripts that run around 4:00am (as set in /etc/crontab), check for updates, and download and install any automatically. If there were any updates, it will send out an email; if there were none, it sends nothing.

(of course, you need sendmail or whatever email server you like configured correctly to get the alerts!)

You can change yum to just download and not install the updates (just RTFM), but I've never had an update break anything - update compatibility and quality are always very high - so I use automatic updates.

CentOS 4 had something very similar to this, with the addition of a bootscript to turn the cronjobs on and off.

* Please check out the update at the bottom of this page for further information on this.


Apache Quirks
If you are using indexing in Apache (indexing is when you can browse folders/files), you may find that the browsing page looks small and nasty.

The fix is to edit /etc/httpd/conf/httpd.conf and change the following line:

IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable
to
IndexOptions FancyIndexing VersionSort NameWidth=*

This should make the index full screen again. I'm not sure if this is an Apache bug, a distro bug, or some other weird issue, because I'm sure HTMLTable isn't supposed to render that small.

(FYI: CentOS 4 did not have the HTMLTable option active)

SSL Certificates
Red Hat has moved things around with SSL certificates a lot. What seems to have happened (I have only had a quick look into this) is that they were going to provide a new tool called "genkey" to generate SSL certificates, but pulled it out before release.

To make things more fun, they also removed the good old Makefile that was in /etc/httpd/conf/ that allowed you to generate SSL certificates & keys.

However, I found the same Makefile again in /etc/pki/tls/certs/
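If it behaves like the old /etc/httpd/conf Makefile, you can generate a self-signed test certificate for Apache straight from that directory; a sketch, assuming the classic 'testcert' target is still present:

 cd /etc/pki/tls/certs
 make testcert    # builds a key, prompts for certificate details, installs a self-signed cert for httpd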


Vi vs. Vim
If you use vi/vim, you should check this posting out.

That's all the issues that I've come across for now - if I find any more things to note, I'll update this page with the information and put a note on my blog.

/etc/sysconfig/network

Note that the NETMASK should be defined in /etc/sysconfig/network-scripts/ifcfg-eth0.


The /etc/sysconfig/network file is used to specify information about the desired network configuration. Values such as NETWORKING, HOSTNAME, and GATEWAY may be used, as in the example below.
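A minimal example of the file (the hostname and gateway are placeholders):

 NETWORKING=yes
 HOSTNAME=host.example.com
 GATEWAY=192.168.1.1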

How to see the parameters of an ext3 filesystem

# tune2fs -l /dev/mapper/vg00-lv06
tune2fs 1.38 (30-Jun-2005)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: c0615eba-5bb6-443d-81c7-7f3c1eb829b2
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal filetype needs_recovery sparse_super
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1310720
Block count: 2621440
Reserved block count: 131072
Free blocks: 2365370
Free inodes: 1309130
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Mon May 21 11:16:17 2007
Last mount time: Tue May 6 17:40:40 2008
Last write time: Tue May 6 17:40:40 2008
Mount count: 3
Maximum mount count: 500
Last checked: Thu Apr 3 11:51:39 2008
Check interval: 5184000 (2 months)
Next check after: Mon Jun 2 11:51:39 2008
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 36a54cdf-3f8e-482b-9e2c-a48b6ac1d27e
Journal backup: inode blocks
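Several of these parameters can be changed with the same tool; for instance, the maximum mount count and check interval shown above could be set like this (a sketch, reusing the device from the example):

 tune2fs -c 500 -i 2m /dev/mapper/vg00-lv06   # -c sets the max mount count, -i the check interval (m = months)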

[Nov 30, 2007] freshmeat.net Project details for Expect-lite

About:
Expect-lite is a wrapper for expect, created to make expect programming even easier. The wrapper permits the creation of expect script command files by using special character(s) at the beginning of each line to indicate the expect-lite action. Basic expect-lite scripts can be created by simply cutting and pasting text from a terminal window into a script, and adding '>' (send) and '<' (expect) characters at the start of the relevant lines.

Release focus: Major feature enhancements

Changes:
The entire command script read subsystem has changed. The previous system read directly from the script file. The new system reads the script file into a buffer, which can be randomly accessed. This permits looping (realistically only repeat loops). Infinite loop protection has been added. Variable increment and decrement have been added to support looping.

Author:
Craig Miller [contact developer]

[Nov 30, 2007] Got more than a gig of RAM and 32-bit Linux? Here's how to use it. By Bruce Byfield

September 21, 2007 | Linux.com
Nowadays, many machines are running with 2-4 gigabytes of RAM, and their owners are discovering a problem: When they run 32-bit GNU/Linux distributions, their extra RAM is not being used. Fortunately, correcting the problem is only a matter of installing or building a kernel with a few specific parameters enabled or disabled.

The problem exists because 32-bit Linux kernels are designed to access only 1GB of RAM by default. The workaround for this limitation is vaguely reminiscent of the virtual memory solution once used by DOS, with a high memory area of virtual memory being constantly mapped to physical addresses. This high memory can be enabled for up to 4GB by one kernel parameter, or up to 64GB on a Pentium Pro or higher processor with another parameter. However, since these parameters have not been needed on most machines until recently, the standard kernels in many distributions have not enabled them.

Increasingly, many distributions are enabling high memory for 4GB. Ubuntu default kernels have been enabling this process at least since version 6.10, and so have Fedora 7's. By contrast, Debian's default 486 kernels do not. Few distros, if any, enable 64GB by default.

To check whether your kernel is configured to use all your RAM, enter the command free -m. This command gives you the total amount of usable RAM on your system, as well as the size of your swap file, in megabytes. If the total memory is 885, then no high memory is enabled on your system (the rest of the first gigabyte is reserved by the kernel for its own purposes). Similarly, if the result shows over 1 gigabyte but less than 4GB when you know you have more, then the 4GB parameter is enabled, but not the 64GB one. In either case, you will need to install a new kernel to take full advantage of your RAM.
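You can also inspect the distribution kernel's configuration directly, since most distros ship the config file under /boot (a sketch; CONFIG_HIGHMEM4G and CONFIG_HIGHMEM64G are the standard 32-bit x86 option names):

 free -m                                  # the 'total' column shows usable RAM in megabytes
 grep HIGHMEM /boot/config-$(uname -r)    # CONFIG_HIGHMEM4G=y or CONFIG_HIGHMEM64G=y reveals which mode is built in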

[Nov 20, 2007] Games with dircolors

eval `dircolors ~/.dir_colors`
alias ls="ls --color=auto"

The command 'dircolors' takes its data from the file ~/.dir_colors and
creates an environment variable LS_COLORS. The command 'ls --color' takes
its colors from the environment variable LS_COLORS.

So, write a suitable ~/.dir_colors file, and execute the command
'dircolors'. To get a starting file for editing, do this:

dircolors -p > ~/.dir_colors

The ~/.dir_colors file so created includes directions on coding the colors
for different kinds of files.

See man dircolors.
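Entries in the generated ~/.dir_colors map file types and extensions to ANSI attribute codes; two illustrative lines (the codes are ordinary ANSI attributes, documented in the generated file itself):

 DIR 01;34    # directories in bold blue
 .tar 01;31   # tarballs in bold red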

[Nov 1, 2007] Changing Gnome behavior to standard UNIX CDE style (applications displayed on a particular desktop are visible only in that desktop's taskbar)

You need to uncheck:

/apps/panel/applets/windows_list_screen/pref/display_in_all_workspaces
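If you prefer the command line, the same gconf key can be flipped with gconftool-2 (a sketch; it assumes the key path quoted above is correct for your Gnome version):

 gconftool-2 --type bool --set /apps/panel/applets/windows_list_screen/pref/display_in_all_workspaces false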

Controlling runaway processes on Linux • Rudd-O.com

... Have you heard of the magic SysRq key?

No?

Well, it's magic. It's directly shunted to the Linux kernel. You press ALT, press the PrintScreen (SysRq) key, and while holding them both down, press one of the letters (each letter has a different function assigned to it).

It's not normally enabled, but you can enable it by putting

kernel.sysrq = 1

in your machine's /etc/sysctl.conf file. Oh, and then rebooting.
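Strictly speaking, you can skip the reboot: the setting can be applied to the running kernel immediately (standard sysctl usage):

 sysctl -w kernel.sysrq=1    # or equivalently: echo 1 > /proc/sys/kernel/sysrq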

Here's why it's useful.

So, what does SysRq do, really?

Hit Alt+SysRq+K - the windowing system will restart. More effective than Ctrl+Alt+Backspace.

Suppose a GUI application you just opened is starting to swallow massive amounts of RAM. Like, one gigabyte, perhaps? Your machine is locking up, and you feel the mouse start to stutter at first, then freeze completely - while the hard disk light on your computer's front panel flashes frantically, gasping for air, er, memory.

You now have three choices:

  1. Sit it out and let the Linux kernel detect this situation and kill the abusive application. This can take way more than 15 minutes.
  2. Press the computer's power off button for 5 seconds. This shuts your machine down uncleanly and leads to data loss.
  3. Hit the magic SysRq combo: Alt+SysRq+K.

Should you choose option 3, the graphical subsystem dies immediately. That's because Alt+SysRq+K kills any application that holds the keyboard open - and, you guessed it, the graphical subsystem is holding it open. This premature death of the GUI causes all GUI applications to die in a cascade, including the abusive application.

Two to ten seconds later, you will be presented with a login prompt.

Sure, you lost changes to all files you haven't saved, and all the tabs in your Web browser… but at least you didn't have to reboot uncleanly, did you?
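Incidentally, if you still have a working shell on the machine (say, over SSH), the same key can be fired from software; note that it acts on the currently active virtual console, not on your SSH session:

 echo k > /proc/sysrq-trigger   # equivalent to pressing Alt+SysRq+K at the console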

But, Ctrl+Alt+Backspace?

Once the machine is in a critically heavy memory crunch, Ctrl+Alt+Backspace will take too much time to work, because the windowing system will be pressed for memory to even execute. The magic SysRq key has the luxury of not having that problem ;-) - if Ctrl+Alt+Backspace were an IV drip, SysRq would be like a central line.

Why this key combination exists

The reason this key combo exists is simple. Alt+SysRq+K is called SAK (System Attention Key). It was designed back in the days of, um, yore, to kill all applications snooping on the keyboard - so administrators wishing to log in could safely do so without anyone sniffing their passwords.

As a preventative security measure, it sure works against keyloggers and other malware that may be snooping on your keyboard. And it most definitely works against your run-of-the-mill temporary memory shortage ;-).

Advantages/disadvantages

Well, the major disadvantages are obvious: every GUI application dies instantly, taking unsaved changes and open browser tabs with it, and you have to log back in and rebuild your desktop session from scratch.

But, on a memory crunch, this beats rebooting hands-down. And that's the biggest advantage.