
Scriptorama: A Slightly Skeptical View on Scripting Languages


This is the central page of the Softpanorama Web site, because I am strongly convinced that the development of scripting languages, not the replication of the efforts of the BSD group undertaken by Stallman and Torvalds, is the central part of open source. See Scripting languages as VHLL for more details.

 
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in.

They're half technology and half religion. And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg.

Paul Graham: Beating the Averages

Libraries are more important than the language.

Donald Knuth


Introduction

A fruitful way to think about language development is to consider it to be a special type of theory building. Peter Naur suggested that programming in general is a theory-building activity in his 1985 paper "Programming as Theory Building". But the idea is especially applicable to compilers and interpreters. What Peter Naur failed to understand was that the design of programming languages has religious overtones and sometimes represents an activity pretty close to the process of creating a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences are the high priests of the church of programming languages. Some, like Niklaus Wirth and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false) prophets :-).

On a deep conceptual level, building a new language is a human way of solving complex problems. That means that compiler construction is probably the most underappreciated paradigm for programming large systems. Much more so than the greatly oversold object-oriented programming, whose benefits are greatly overstated.

For users, programming languages distinctly have religious aspects, so decisions about which language to use are often far from rational and are mainly cultural. Indoctrination at the university plays a very important role: universities were recently instrumental in making Java the new Cobol.

The second important observation about programming languages is that the language per se is just a tiny part of what can be called the language programming environment. The latter includes libraries, IDEs, books, the level of adoption at universities, popular and important applications written in the language, the level of support, and the key players that back the language on major platforms such as Windows and Linux, among other things.

A mediocre language with a good programming environment can give a run for their money to languages of superior design that come naked. This is the story behind the success of Java and PHP. A critical application is also very important, and this is the story of the success of PHP, which is nothing but a bastardized derivative of Perl (with all the most interesting Perl features surgically removed ;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.

Progress in programming languages has been very uneven and contains several setbacks. Currently this progress is mainly limited to the development of so-called scripting languages; the field of traditional high-level languages has been stagnant for decades. From 2000 to 2017 we observed the huge success of JavaScript; Python encroached on Perl territory (including genomics/bioinformatics), and R in turn started squeezing Python in several areas. At the same time Ruby, despite initial success, remained a niche language, while PHP still holds its own in web-site design.

Some observations about scripting language design and  usage

At the same time there are some mysterious, unanswered questions about the factors that help a particular scripting language to increase its user base, or to fail in popularity. Among them:

Nothing succeeds like success

Those are difficult questions to answer without some way of classifying languages into different categories. Several such classifications exist. First of all, as with natural languages, the number of people who speak a given language is a tremendous force that can overcome any real or perceived deficiencies of the language. In programming languages, as in natural languages, nothing succeeds like success.

The second interesting category is the number of applications written in a particular language that became part of Linux or, at least, are included in the standard RHEL/Fedora/CentOS or Debian/Ubuntu repositories.

The third relevant category is the number and quality of books available for a particular language.

Complexity Curse

The history of programming languages raises interesting general questions about the limits of complexity of programming languages. There is strong historical evidence that a language with a simpler, or even simplistic, core (Basic, Pascal) has a better chance of acquiring a high level of popularity.

The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend, on an intuitive level, to avoid more complex, richer languages, preferring, say, Pascal to PL/1 and PHP to Perl. Or at least to avoid them at a particular phase of language development (C++ is not a simpler language than PL/1, but it was widely adopted because of the progress of hardware, the availability of compilers and, not least, because it was associated with OO exactly at the time OO became mainstream fashion).

Complex non-orthogonal languages can succeed only as a result of a long period of language development from a smaller core, which usually adds complexity (just compare Fortran IV with Fortran 90, or PHP 3 with PHP 5). Attempts to ride some fashionable new trend by extending an existing popular language to the new "paradigm" have also proved relatively successful (OO programming in the case of C++, which is a superset of C).

Historically, few complex languages were successful (PL/1, Ada, Perl, C++), and even when they were successful, their success was typically temporary rather than permanent (PL/1, Ada, Perl). As Professor Wilkes noted (iee90):

Things move slowly in the computer language field but, over a sufficiently long period of time, it is possible to discern trends. In the 1970s, there was a vogue among system programmers for BCPL, a typeless language. This has now run its course, and system programmers appreciate some typing support. At the same time, they like a language with low level features that enable them to do things their way, rather than the compiler’s way, when they want to.

They continue to have a strong preference for a lean language. At present they tend to favor C in its various versions. For applications in which flexibility is important, Lisp may be said to have gained strength as a popular programming language.

Further progress is necessary in the direction of achieving modularity. No language has so far emerged which exploits objects in a fully satisfactory manner, although C++ goes a long way. ADA was progressive in this respect, but unfortunately it is in the process of collapsing under its own great weight.

ADA is an example of what can happen when an official attempt is made to orchestrate technical advances. After the experience with PL/1 and ALGOL 68, it should have been clear that the future did not lie with massively large languages.

I would direct the reader’s attention to Modula-3, a modest attempt to build on the appeal and success of Pascal and Modula-2 [12].

The complexity of the compiler/interpreter also matters, as it affects portability: this is one thing that probably doomed PL/1 (and later Ada), although these days a new language typically comes with an open source compiler (or, in the case of scripting languages, an interpreter), so this is less of a problem.

Here is an interesting take on language design from the preface to The D Programming Language book (the D language failed to achieve any significant level of popularity):

Programming language design seeks power in simplicity and, when successful, begets beauty.

Choosing the trade-offs among contradictory requirements is a difficult task that requires good taste from the language designer as much as mastery of theoretical principles and of practical implementation matters. Programming language design is software-engineering-complete.

D is a language that attempts to consistently do the right thing within the constraints it chose: system-level access to computing resources, high performance, and syntactic similarity with C-derived languages. In trying to do the right thing, D sometimes stays with tradition and does what other languages do, and other times it breaks tradition with a fresh, innovative solution. On occasion that meant revisiting the very constraints that D ostensibly embraced. For example, large program fragments or indeed entire programs can be written in a well-defined memory-safe subset of D, which entails giving away a small amount of system-level access for a large gain in program debuggability.

You may be interested in D if the following values are important to you:

The role of fashion

At the initial, most difficult stage of language development, the language should solve an important problem that is inadequately solved by currently popular languages. But at the same time the language has few chances to succeed unless it fits perfectly into the current software fashion. This "fashion factor" is probably as important as several other factors combined, with the notable exception of the "language sponsor" factor. The latter can make or break a language.

As in women's dress, fashion rules in language design, and with time this trend has become more and more pronounced: a new language should represent the current fashionable trend. For example, OO programming was the visiting card into the world of "big, successful languages" since probably the early 90s (C++, Java, Python). Before that, "structured programming" and "verification" (Pascal, Modula) played a similar role.

Programming environment and the role of "powerful sponsor" in language success

PL/1, Java, C#, Ada and Python are languages that had powerful sponsors. Pascal, Basic, Forth and, partially, Perl (O'Reilly was a sponsor for a short period of time) are examples of languages that had no such sponsor during the initial period of development. C and C++ are somewhere in between.

But the language itself is not enough. Any language now needs a "programming environment", which consists of a set of libraries, a debugger and other tools (a make tool, lint, a pretty-printer, etc.). The set of "standard" libraries and the debugger are probably the two most important elements. They cost a lot of time (or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.

While this is not a necessary condition for becoming popular, it really helps: other things being equal, the weight of the language's sponsor does matter. For example Java, being a weak, inconsistent language (C-- with garbage collection and OO), was pushed down the throat on the strength of marketing and the huge amount of money spent on creating the Java programming environment.

The same was partially true for C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting language now than, say, Perl (which is better integrated with Unix and has pretty innovative, for scripting languages, support of pointers and regular expressions), or Ruby (which had support for coroutines from day one, not as a "bolted-on" feature as in Python).

As in political campaigns, negative advertising also matters. For example Perl suffered greatly from the smear campaign comparing programs in it to "white noise", and then from O'Reilly's withdrawal from the role of sponsor of the language (although it continues to milk its Perl book publishing franchise ;-)

People have proved to be pretty gullible, and in this sense language marketing is not that different from the marketing of women's clothing :-)

Language level and success

One very important classification of programming languages is based on the so-called level of the language. Essentially, once there is at least one language that is successful on a given level, the success of other languages on the same level becomes more problematic. The chances for success are higher for languages that have an even slightly higher level than their successful predecessors.

The level of a language can informally be described as the number of statements (or, more correctly, the number of lexical units (tokens)) needed to write a solution to a particular problem in one language versus another. On this basis we can distinguish several levels of programming languages.
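
As an informal illustration of the notion (my own example, not a rigorous measurement), here is a complete word-frequency counter in Perl; the same program in C, with an explicit hash table, tokenizer and memory management, easily needs an order of magnitude more tokens:

#!/usr/bin/perl
# Word-frequency counter: about a dozen lines in a high-level scripting
# language; the equivalent C program is far longer.
use strict;
use warnings;

my %count;
while ( my $line = <> ) {
    $count{ lc $1 }++ while $line =~ /(\w+)/g;   # built-in regex engine and hashes
}
for my $word ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    print "$count{$word}\t$word\n";
}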

 "Nanny languages" vs "Sharp razor" languages

Some people distinguish between "nanny languages" and "sharp razor" languages. The latter do not attempt to protect the user from his errors, while the former usually go too far... The right compromise is extremely difficult to find.

For example, I consider the explicit availability of pointers an important feature that greatly increases the expressive power of a language and far outweighs the risk of errors in the hands of unskilled practitioners. In other words, attempts to make a language "safer" often misfire.

Expressive style of the languages

Another useful typology is based on the expressive style of the language:

Those categories are not pure and somewhat overlap. For example, it is possible to program in an object-oriented style in C, or even in assembler. Some scripting languages like Perl have a built-in regular expression engine that is part of the language, so they have a functional component despite being procedural. Some relatively low-level (Algol-style) languages implement garbage collection; a good example is Java. There are also scripting languages that compile into a common language runtime designed for high-level languages: for example, IronPython compiles into .NET.
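
A small sketch of that overlap (my own example): procedural Perl code can use functional idioms such as map and grep, with regex matching as a built-in operator, all in the same few lines:

#!/usr/bin/perl
# Mixing expressive styles in one language: functional map/grep over a
# list, regex matching as a built-in operator, procedural iteration.
use strict;
use warnings;

my @lines  = ( "GET /index.html", "POST /login", "GET /img/logo.png" );
my @images = grep { m{^GET /img/} } @lines;          # functional filter + regex
my @paths  = map  { ( split ' ', $_ )[1] } @images;  # functional transform
print "$_\n" for @paths;                             # procedural loop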

Weak correlation between quality of design and popularity

The popularity of programming languages is not strongly connected to their quality. Some languages that look like a collection of language designer blunders (PHP, Java) became quite popular: Java became the new Cobol, and PHP dominates dynamic Web site construction. The dominant technology for such Web sites is often called LAMP, which stands for Linux-Apache-MySQL-PHP. Being a highly simplified but badly constructed subset of Perl, a kind of new Basic for dynamic Web site construction, PHP provides a most depressing experience. I was unpleasantly surprised to learn that the Wikipedia engine was rewritten from Perl to PHP some time ago, but this fact neatly illustrates the trend.

So language design quality has little to do with a language's success in the marketplace. Simpler languages have wider appeal, as the success of PHP (which at the beginning came at the expense of Perl) suggests. In addition, much depends on whether the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well as Python (Google).

Progress in programming languages has been very uneven and contains several setbacks, like Java. Currently this progress is usually associated with scripting languages. The history of programming languages raises interesting general questions about the "laws" of programming language design. First let's reproduce several notable quotes:

  1. Knuth law of optimization: "Premature optimization is the root of all evil (or at least most of it) in programming." - Donald Knuth
  2. "Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp." - Phil Greenspun
  3. "The key to performance is elegance, not battalions of special cases." - Jon Bentley and Doug McIlroy
  4. "Some may say Ruby is a bad rip-off of Lisp or Smalltalk, and I admit that. But it is nicer to ordinary people." - Matz, LL2
  5. Most papers in computer science describe how their author learned what someone else already knew. - Peter Landin
  6. "The only way to learn a new programming language is by writing programs in it." - Kernighan and Ritchie
  7. "If I had a nickel for every time I've written "for (i = 0; i < N; i++)" in C, I'd be a millionaire." - Mike Vanier
  8. "Language designers are not intellectuals. They're not as interested in thinking as you might hope. They just want to get a language done and start using it." - Dave Moon
  9. "Don't worry about what anybody else is going to do. The best way to predict the future is to invent it." - Alan Kay
  10. "Programs must be written for people to read, and only incidentally for machines to execute." - Abelson & Sussman, SICP, preface to the first edition

Please note that it is one thing to read a language manual and appreciate how good the concepts are, and quite another to bet your project on a new, unproven language without good debuggers, manuals and, most importantly, libraries. The debugger is very important, but standard libraries are crucial: they are the factor that makes or breaks a new language.

In this sense languages are much like cars. For many people a car is the thing they use to get to work and to the shopping mall, and they are not very interested in whether the engine is inline or V-type, or whether the transmission uses fuzzy logic. What they care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense "worse is better" is very true. I have already mentioned the importance of the debugger. The other important criterion is the quality and availability of libraries. Actually, libraries account for perhaps 80% of the usability of a language; indeed, in a sense libraries are more important than the language...

The popular belief that scripting is an "unsafe", "second-rate" or "prototype-only" solution is completely wrong. If a project has died, it does not matter what the implementation language was; and for any successful project on a tough schedule a scripting language (especially in a dual scripting-language-plus-C combination, for example Tcl+C) is an optimal blend for a large class of tasks. Such an approach helps to separate architectural decisions from implementation details much better than any OO model does.
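
The same dual-language idea can be sketched in Perl terms (Tcl+C, mentioned above, works analogously through Tcl's C API), assuming the CPAN module Inline::C is installed: the hot spot is written in C, while the architecture stays in the scripting language.

#!/usr/bin/perl
# A minimal sketch of the dual-language blend: C for the inner loop,
# Perl for the glue. Assumes the CPAN module Inline::C is installed.
use strict;
use warnings;
use Inline C => <<'END_C';
/* Compute-heavy kernel written in C, callable from Perl. */
long sum_squares(long n) {
    long i, s = 0;
    for (i = 1; i <= n; i++) s += i * i;
    return s;
}
END_C

print sum_squares(1000), "\n";   # the architectural glue stays in Perl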

Moreover, even for tasks that involve a fair amount of computation and data (computationally intensive tasks), such languages as Python and Perl are often (but not always!) competitive with C++, C# and, especially, Java.



Programming Language Development Timeline

Here is a timeline of programming languages, modified from BYTE (for the original see BYTE.com, September 1995, 20th Anniversary issue).

[Timeline, ca. 1946 - 2017: decade-by-decade entries (Forties through the 2010s); the per-year language entries were lost when the original table was converted to text.]

Special note on Scripting languages

Scripting helps to avoid the OO trap that is pushed by
  "a horde of practically illiterate researchers
publishing crap papers in junk conferences."

Despite the fact that scripting languages are a really important computer science phenomenon, they are usually happily ignored in university curricula. Students are instead indoctrinated (or, in less politically correct terms, "brainwashed") in Java and OO programming ;-)

This site tries to give scripting languages proper emphasis and promotes them as an alternative to the mainstream reliance on the "Java as the new Cobol" approach to software development. Please read my introduction to the topic, which was recently converted into the article A Slightly Skeptical View on Scripting Languages.

The tragedy of the scripting language designer is that there is no way to overestimate the level of abuse of any feature of the language. Half of all programmers are, by definition, below average, and it is this half that matters most in an enterprise environment. In a way, the higher the level of the programmer, the less relevant the limitations of the language. That's why statements like "Perl is badly suited for large project development" are plain vanilla silly: with proper discipline it is perfectly suitable, and programmers can be more productive with Perl than with Java. The real question is "What is the quality and size of the team?"

Scripting is part of the Unix cultural tradition, and Unix was the initial development platform for most mainstream scripting languages, with the exception of REXX. But they are portable, and all of them can now be used on Windows and other OSes.


Different scripting languages provide different levels of integration with the underlying OS API (for example, Unix or Windows). For example, IronPython compiles into .NET and provides a pretty high level of integration with Windows. The same is true of Perl and Unix: almost all Unix system calls are available directly from Perl. Moreover, Perl integrates most of the Unix API in a very natural way, making it a perfect replacement for the shell when coding complex scripts. It also has a very good debugger, which is a weak point of shells like bash and ksh93.
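
A small sketch of what that integration looks like (my own example, using only standard Perl built-ins): Unix calls such as stat(2), fork(2) and waitpid(2) are ordinary parts of the language:

#!/usr/bin/perl
# Unix system calls as ordinary Perl built-ins: stat(2), fork(2), waitpid(2).
use strict;
use warnings;

my @st = stat('/etc/passwd') or die "stat: $!";
printf "uid=%d size=%d bytes\n", $st[4], $st[7];   # fields of struct stat

my $pid = fork();                                  # a real fork(2)
die "fork: $!" unless defined $pid;
if ( $pid == 0 ) {
    exec 'uname', '-r' or die "exec: $!";          # execve(2) in the child
}
waitpid( $pid, 0 );                                # wait for the child to exit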

Unix proved that treating everything like a file is a powerful OS paradigm. In a similar way, scripting languages proved that "everything is a string" is also an extremely powerful programming paradigm.
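
A tiny illustration of the "everything is a string" paradigm (my own example): in Perl, scalars convert between strings and numbers on demand, so text captured from the outside world can be computed on directly:

#!/usr/bin/perl
# "Everything is a string": scalars act as numbers or strings on demand.
use strict;
use warnings;

my $free = "1536";            # e.g. text captured from a command's output
my $mb   = $free / 1024;      # used as a number: 1.5
my $msg  = $free . " KB";     # used as a string: "1536 KB"
print "$msg = $mb MB\n";

# Comparisons come in string (eq) and numeric (==) flavors:
print "same\n" if "10" == 10 and "10" eq "10";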

There are also several separate pages devoted to scripting in different applications. The main emphasis is on shells and Perl. Right now I am trying to convert my old Perl lecture notes into an eBook: Nikolai Bezroukov, Introduction to Perl for Unix System Administrators.

Along with pages devoted to major scripting languages, this site has many pages devoted to scripting in particular applications; there are more than a dozen pages of the "Perl/scripting tools for a particular area" type. The most well-developed and up-to-date pages of this set are probably Shells and Perl. The main purpose of this page is to follow the changes in programming practice that can be called the "rise of scripting", as predicted in John Ousterhout's famous article Scripting: Higher Level Programming for the 21st Century in IEEE Computer (1998). In this brilliant paper he wrote:

...Scripting languages such as Perl and Tcl represent a very different style of programming than system programming languages such as C or Java. Scripting languages are designed for "gluing" applications; they use typeless approaches to achieve a higher level of programming and more rapid application development than system programming languages. Increases in computer speed and changes in the application mix are making scripting languages more and more important for applications of the future.

...Scripting languages and system programming languages are complementary, and most major computing platforms since the 1960's have provided both kinds of languages. The languages are typically used together in component frameworks, where components are created with system programming languages and glued together with scripting languages. However, several recent trends, such as faster machines, better scripting languages, the increasing importance of graphical user interfaces and component architectures, and the growth of the Internet, have greatly increased the applicability of scripting languages. These trends will continue over the next decade, with more and more new applications written entirely in scripting languages and system programming languages used primarily for creating components.

My e-book Portraits of Open Source Pioneers contains several chapters on scripting (most are in an early draft stage) that expand on this topic.

The reader should understand that the treatment of scripting languages in the press, and especially the academic press, is far from fair: entrenched academic interests often promote old or commercially supported paradigms until they retire, so a change of paradigm is often possible only with a change of generations. And people tend to live longer these days... Please also be aware that even respectable academic magazines like Communications of the ACM and IEEE Software often promote "cargo cult software engineering" like the Capability Maturity Model (CMM).

Dr. Nikolai Bezroukov



Old News ;-)


[Nov 13, 2018] Resuming rsync partial (-P/--partial) on an interrupted transfer

May 15, 2013 | stackoverflow.com

Glitches , May 15, 2013 at 18:06

I am trying to back up my file server to a remote file server using rsync. Rsync does not successfully resume when a transfer is interrupted. I used the partial option, but rsync doesn't find the file it already started, because it renames it to a temporary file; when resumed, it creates a new file and starts from the beginning.

Here is my command:

rsync -avztP -e "ssh -p 2222" /volume1/ myaccont@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"

When this command is run, a backup file named OldDisk.dmg from my local machine gets created on the remote machine as something like .OldDisk.dmg.SjDndj23 .

Now when the internet connection gets interrupted and I have to resume the transfer, I have to find where rsync left off by finding the temp file like .OldDisk.dmg.SjDndj23 and rename it to OldDisk.dmg so that it sees there already exists a file that it can resume.

How do I fix this so I don't have to manually intervene each time?

Richard Michael , Nov 6, 2013 at 4:26

TL;DR : Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace .

The issue is the rsync server processes (of which there are two, see rsync --server ... in ps output on the receiver) continue running, to wait for the rsync client to send data.

If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate and clean up by moving the temporary file to its "proper" name (e.g., no temporary suffix). You'll then be able to resume.

If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place; but rather, delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., -9 ), but SIGTERM (e.g., pkill -TERM -x rsync - only an example, you should take care to match only the rsync processes concerned with your client).

Fortunately there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well.

For example, if you specify rsync ... --timeout=15 ... , both the client and server rsync processes will cleanly exit if they do not send/receive data in 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.
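
To avoid babysitting the transfer, the timeout can be combined with a retry loop. Below is a minimal sketch in Perl, reusing the command from the question above; the retry limit and the 30-second pause are my own assumptions, not part of the answer:

#!/usr/bin/perl
# Retry wrapper: rerun rsync until it exits cleanly. A sketch, not a
# tested tool; the retry limit and pause are arbitrary assumptions.
use strict;
use warnings;

my @cmd = (
    'rsync', '-avztP', '--timeout=15',   # --timeout lets both ends exit cleanly
    '-e', 'ssh -p 2222',
    '/volume1/', 'myaccont@backup-server-1:/home/myaccount/backup/',
    '--exclude', '@spool', '--exclude', '@tmp',
);

for my $try ( 1 .. 100 ) {               # arbitrary safety limit
    my $status = system(@cmd);           # 0 means rsync succeeded
    last if $status == 0;
    warn "attempt $try failed (exit ", $status >> 8, "), retrying...\n";
    sleep 30;                            # give the connection time to recover
}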

I'm not sure how long the various rsync processes will try to send/receive data before they die (it might vary with the operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (e.g., no network socket) after about 30 seconds; you could experiment or review the source code. That means you could try to "ride out" the bad internet connection for 15-20 seconds.

If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process).

Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), it appears to merge (assemble) all the partial files into the new properly named file. So, imagine a long-running partial copy which dies (and you think you've "lost" all the copied data), and a short-running re-launched rsync (oops!)... you can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume.

Finally, a few short remarks:

JamesTheAwesomeDude , Dec 29, 2013 at 16:50

Just curious: wouldn't SIGINT (aka ^C ) be 'politer' than SIGTERM ? – JamesTheAwesomeDude Dec 29 '13 at 16:50

Richard Michael , Dec 29, 2013 at 22:34

I didn't test how the server-side rsync handles SIGINT, so I'm not sure it will keep the partial file - you could check. Note that this doesn't have much to do with Ctrl-c ; it happens that your terminal sends SIGINT to the foreground process when you press Ctrl-c , but the server-side rsync has no controlling terminal. You must log in to the server and use kill . The client-side rsync will not send a message to the server (for example, after the client receives SIGINT via your terminal Ctrl-c ) - might be interesting though. As for anthropomorphizing, not sure what's "politer". :-) – Richard Michael Dec 29 '13 at 22:34

d-b , Feb 3, 2015 at 8:48

I just tried this timeout argument rsync -av --delete --progress --stats --human-readable --checksum --timeout=60 --partial-dir /tmp/rsync/ rsync://$remote:/ /src/ but then it timed out during the "receiving file list" phase (which in this case takes around 30 minutes). Setting the timeout to half an hour kind of defeats the purpose. Any workaround for this? – d-b Feb 3 '15 at 8:48

Cees Timmerman , Sep 15, 2015 at 17:10

@user23122 --checksum reads all data when preparing the file list, which is great for many small files that change often, but should be done on-demand for large files. – Cees Timmerman Sep 15 '15 at 17:10

[Nov 08, 2018] How to find which process is regularly writing to disk?

Nov 08, 2018 | unix.stackexchange.com

Cedric Martin , Jul 27, 2012 at 4:31

How can I find which process is constantly writing to disk?

I like my workstation to be close to silent, and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk were writing or seeking something ( tick...tick...tick...trrrrrr rinse and repeat every second or so).

I had a similar issue in the past (many, many years ago), and it turned out to be some CUPS log or something, so I simply redirected that (unimportant) logging to a (real) RAM disk.

But here I'm not sure.

I tried the following:

ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp

but nothing is changing there.

Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s), and I've already installed and compiled big sources (Emacs) without issue, so I don't think the system is bad.

(HD is a Seagate Barracude 500GB)

Mat , Jul 27, 2012 at 6:03

Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

Cedric Martin , Jul 27, 2012 at 7:02

@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

camh , Jul 27, 2012 at 9:48

Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

mnmnc , Jul 27, 2012 at 8:27

Did you try to examine what programs like iotop show? It will tell you exactly what kind of process is currently writing to the disk.

example output:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
    6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
    7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
    8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
 1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
   10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]

Cedric Martin , Aug 2, 2012 at 15:56

thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

ndemou , Jun 20, 2016 at 15:32

I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

scai , Jul 27, 2012 at 10:48

You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.

dan3 , Jul 15, 2013 at 8:32

It is absolutely crazy to leave sysloging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

scai , Jul 16, 2013 at 6:32

You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

dan3 , Jul 16, 2013 at 7:22

I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

scai , Jul 16, 2013 at 10:50

I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

Volker Siegel , Apr 16, 2014 at 22:57

I would assume there is rate general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

Gilles , Jul 28, 2012 at 1:34

Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem , you can use the audit subsystem (install the auditd package ). Put a watch on the sync calls and its friends:
auditctl -S sync -S fsync -S fdatasync -a exit,always

Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine-gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.
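
For example, the difference looks like this in a sysklogd configuration (hypothetical entries; only the leading "-" matters):

# /etc/syslog.conf (sysklogd) -- hypothetical entries
# No "-": the file is synced to disk after every message (audible on a quiet box):
mail.info        /var/log/mail.log
# Leading "-": writes are buffered, not synced per message:
news.info       -/var/log/news.log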

Gilles , Mar 23 at 18:24

@StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

cas , Jul 27, 2012 at 9:40

It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

I disable idle-spindown on all my drives with the following bit of shell code. you could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

for disk in /dev/sd? ; do
  /sbin/hdparm -q -S 0 "$disk"    # $disk is already the full /dev path; -S 0 disables idle spindown
done

Cedric Martin , Aug 2, 2012 at 16:03

that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

cas , Aug 2, 2012 at 21:42

IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

Micheal Johnson , Mar 12, 2016 at 20:48

It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

Micheal Johnson , Mar 12, 2016 at 20:51

Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51


I just found that S.M.A.R.T. was causing an external USB disk to spin up again and again on my Raspberry Pi. Although SMART is generally a good thing, I decided to disable it, and since then the unwanted disk activity seems to have stopped.

[Nov 08, 2018] Determining what process is bound to a port

Mar 14, 2011 | unix.stackexchange.com
I know that using the command:
lsof -i TCP

(or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful, say, if I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.

Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

Cakemox , Mar 14, 2011 at 20:48

netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX.) Add -t if you want TCP only.
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named

etc.

xxx , Mar 14, 2011 at 21:01

Cool, thanks. Looks like that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

Rich Homolka , Mar 15, 2011 at 19:56

netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

Jonathan , Aug 26, 2014 at 18:50

As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

sudo , May 25, 2017 at 2:24

What about for SCTP? – sudo May 25 '17 at 2:24

frielp , Mar 15, 2011 at 13:33

On AIX, netstat & rmsock can be used to determine process binding:
[root@aix] netstat -Ana|grep LISTEN|grep 80
f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN

# Port 80 maps to f1000700025de3b0 above, so we type:
[root@aix] rmsock f1000700025de3b0 tcpcb
The socket 0x25de008 is being held by process 499790 (java).

Olivier Dulac , Sep 18, 2013 at 4:05

Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt to remove it) ? – Olivier Dulac Sep 18 '13 at 4:05

Vitor Py , Sep 26, 2013 at 14:18

@OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

Olivier Dulac , Sep 26, 2013 at 16:00

@vitor-braga: Ah thx! I thought it tried to remove the socket but just reported which process holds it when it couldn't. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

frielp , Mar 15, 2011 at 13:27

Another tool available on Linux is ss . From the ss man page on Fedora:
NAME
       ss - another utility to investigate sockets
SYNOPSIS
       ss [options] [ FILTER ]
DESCRIPTION
       ss is used to dump socket statistics. It allows showing information 
       similar to netstat. It can display more TCP and state informations  
       than other tools.

Example output below - the final column shows the process binding:

[root@box] ss -ap
State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))

Eugen Constantin Dinca , Mar 14, 2011 at 23:47

For Solaris you can use pfiles and then grep by sockname: or port: .

A sample (from here ):

pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'

rickumali , May 8, 2011 at 14:40

I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!

Gilles , Oct 8, 2015 at 21:04

In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45

[Nov 08, 2018] How to find which process is regularly writing to disk?

Notable quotes:
"... tick...tick...tick...trrrrrr ..."
"... /var/log/syslog ..."
Jul 27, 2012 | unix.stackexchange.com

Cedric Martin , Jul 27, 2012 at 4:31

How can I find which process is constantly writing to disk?

I like my workstation to be close to silent and I just build a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

And something is getting on my nerve: I can hear some kind of pattern like if the hard disk was writing or seeking someting ( tick...tick...tick...trrrrrr rinse and repeat every second or so).

In the past I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something and I simply redirected that one (not important) logging to a (real) RAM disk.

But here I'm not sure.

I tried the following:

ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp

but nothing is changing there.

Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

hdparm -tT /dev/sda report a correct HD speed (130 GB/s non-cached, sata 6GB) and I've already installed and compiled from big sources (Emacs) without issue so I don't think the system is bad.

(HD is a Seagate Barracude 500GB)

Mat , Jul 27, 2012 at 6:03

Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

Cedric Martin , Jul 27, 2012 at 7:02

@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

camh , Jul 27, 2012 at 9:48

Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

mnmnc , Jul 27, 2012 at 8:27

Did you tried to examin what programs like iotop is showing? It will tell you exacly what kind of process is currently writing to the disk.

example output:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
    6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
    7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
    8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
 1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
   10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]

Cedric Martin , Aug 2, 2012 at 15:56

thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

ndemou , Jun 20, 2016 at 15:32

I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

scai , Jul 27, 2012 at 10:48

You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.

dan3 , Jul 15, 2013 at 8:32

It is absolutely crazy to leave sysloging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

scai , Jul 16, 2013 at 6:32

You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

dan3 , Jul 16, 2013 at 7:22

I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

scai , Jul 16, 2013 at 10:50

I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

Volker Siegel , Apr 16, 2014 at 22:57

I would assume there is rate general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

Gilles , Jul 28, 2012 at 1:34

Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem , you can use the audit subsystem (install the auditd package ). Put a watch on the sync calls and its friends:
auditctl -S sync -S fsync -S fdatasync -a exit,always

Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not be preceded by - , then that log is flushed to disk after each write.

Gilles , Mar 23 at 18:24

@StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

cas , Jul 27, 2012 at 9:40

It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

I disable idle-spindown on all my drives with the following bit of shell code. you could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

for disk in /dev/sd? ; do
  /sbin/hdparm -q -S 0 "/dev/$disk"
done

Cedric Martin , Aug 2, 2012 at 16:03

that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

cas , Aug 2, 2012 at 21:42

IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

Micheal Johnson , Mar 12, 2016 at 20:48

It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

Micheal Johnson , Mar 12, 2016 at 20:51

Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51


I just found that SMART was causing an external USB disk to spin up again and again on my Raspberry Pi. Although SMART is generally a good thing, I decided to disable it again, and since then the unwanted disk activity seems to have stopped.

[Nov 08, 2018] How to split one string into multiple variables in bash shell? [duplicate]

Nov 08, 2018 | stackoverflow.com

Rob I , May 9, 2012 at 19:22

For your second question, see @mkb's comment to my answer below - that's definitely the way to go! – Rob I May 9 '12 at 19:22

Dennis Williamson , Jul 4, 2012 at 16:14

See my edited answer for one way to read individual characters into an array. – Dennis Williamson Jul 4 '12 at 16:14

Nick Weedon , Dec 31, 2015 at 11:04

Here is the same thing in a more concise form: var1=$(cut -f1 -d- <<<$STR) – Nick Weedon Dec 31 '15 at 11:04

Rob I , May 9, 2012 at 17:00

If your solution doesn't have to be general, i.e. only needs to work for strings like your example, you could do:
var1=$(echo $STR | cut -f1 -d-)
var2=$(echo $STR | cut -f2 -d-)

I chose cut here because you could simply extend the code for a few more variables...
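For instance, a minimal sketch of the same pattern extended to a third field (assuming $STR actually contains three - -separated parts):

var3=$(echo "$STR" | cut -f3 -d-)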

crunchybutternut , May 9, 2012 at 17:40

Can you look at my post again and see if you have a solution for the followup question? thanks! – crunchybutternut May 9 '12 at 17:40

mkb , May 9, 2012 at 17:59

You can use cut to cut characters too! cut -c1 for example. – mkb May 9 '12 at 17:59

FSp , Nov 27, 2012 at 10:26

Although this is very simple to read and write, it is a very slow solution because it forces you to read the same data ($STR) twice... if you care about your script's performance, the @anubhava solution is much better – FSp Nov 27 '12 at 10:26

tripleee , Jan 25, 2016 at 6:47

Apart from being an ugly last-resort solution, this has a bug: You should absolutely use double quotes in echo "$STR" unless you specifically want the shell to expand any wildcards in the string as a side effect. See also stackoverflow.com/questions/10067266/ – tripleee Jan 25 '16 at 6:47

Rob I , Feb 10, 2016 at 13:57

You're right about double quotes of course, though I did point out this solution wasn't general. However I think your assessment is a bit unfair - for some people this solution may be more readable (and hence extensible etc) than some others, and doesn't completely rely on arcane bash features that wouldn't translate to other shells. I suspect that's why my solution, though less elegant, continues to get votes periodically... – Rob I Feb 10 '16 at 13:57

Dennis Williamson , May 10, 2012 at 3:14

read with IFS are perfect for this:
$ IFS=- read var1 var2 <<< ABCDE-123456
$ echo "$var1"
ABCDE
$ echo "$var2"
123456

Edit:

Here is how you can read each individual character into array elements:

$ read -a foo <<<"$(echo "ABCDE-123456" | sed 's/./& /g')"

Dump the array:

$ declare -p foo
declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]="-" [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'

If there are spaces in the string:

$ IFS=$'\v' read -a foo <<<"$(echo "ABCDE 123456" | sed 's/./&\v/g')"
$ declare -p foo
declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]=" " [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'

insecure , Apr 30, 2014 at 7:51

Great, the elegant bash-only way, without unnecessary forks. – insecure Apr 30 '14 at 7:51

Martin Serrano , Jan 11 at 4:34

this solution also has the benefit that if the delimiter is not present, var2 will be empty – Martin Serrano Jan 11 at 4:34

mkb , May 9, 2012 at 17:02

If you know it's going to be just two fields, you can skip the extra subprocesses like this:
var1=${STR%-*}
var2=${STR#*-}

What does this do? ${STR%-*} deletes the shortest substring of $STR that matches the pattern -* starting from the end of the string. ${STR#*-} does the same, but with the *- pattern and starting from the beginning of the string. They each have counterparts %% and ## which find the longest anchored pattern match. If anyone has a helpful mnemonic to remember which does which, let me know! I always have to try both to remember.
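A quick sketch with a string that has two hyphens makes the shortest/longest distinction concrete:

STR="A-B-C"
echo "${STR%-*}"    # A-B  (% : shortest suffix match removed)
echo "${STR%%-*}"   # A    (%%: longest suffix match removed)
echo "${STR#*-}"    # B-C  (# : shortest prefix match removed)
echo "${STR##*-}"   # C    (##: longest prefix match removed)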

Jens , Jan 30, 2015 at 15:17

Plus 1 For knowing your POSIX shell features, avoiding expensive forks and pipes, and the absence of bashisms. – Jens Jan 30 '15 at 15:17

Steven Lu , May 1, 2015 at 20:19

Dunno about "absence of bashisms" considering that this is already moderately cryptic .... if your delimiter is a newline instead of a hyphen, then it becomes even more cryptic. On the other hand, it works with newlines , so there's that. – Steven Lu May 1 '15 at 20:19

mkb , Mar 9, 2016 at 17:30

@KErlandsson: done – mkb Mar 9 '16 at 17:30

mombip , Aug 9, 2016 at 15:58

I've finally found documentation for it: Shell-Parameter-Expansion – mombip Aug 9 '16 at 15:58

DS. , Jan 13, 2017 at 19:56

Mnemonic: "#" is to the left of "%" on a standard keyboard, so "#" removes a prefix (on the left), and "%" removes a suffix (on the right). – DS. Jan 13 '17 at 19:56

tripleee , May 9, 2012 at 17:57

Sounds like a job for set with a custom IFS .
IFS=-
set $STR
var1=$1
var2=$2

(You will want to do this in a function with a local IFS so you don't mess up other parts of your script where you require IFS to be what you expect.)
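A minimal sketch of that function wrapper (the name split_pair is made up for illustration; var1 and var2 remain visible to the caller because they are not declared local):

split_pair() {
    local IFS=-      # the IFS change is scoped to this function
    set -- $1        # word-split the argument on "-"
    var1=$1
    var2=$2
}
split_pair "ABCDE-123456"   # var1=ABCDE, var2=123456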

Rob I , May 9, 2012 at 19:20

Nice - I knew about $IFS but hadn't seen how it could be used. – Rob I May 9 '12 at 19:20

Sigg3.net , Jun 19, 2013 at 8:08

I used triplee's example and it worked exactly as advertised! Just change last two lines to myvar1=`echo $1` && myvar2=`echo $2` if you need to store them throughout a script with several "thrown" variables. – Sigg3.net Jun 19 '13 at 8:08

tripleee , Jun 19, 2013 at 13:25

No, don't use a useless echo in backticks . – tripleee Jun 19 '13 at 13:25

Daniel Andersson , Mar 27, 2015 at 6:46

This is a really sweet solution if we need to write something that is not Bash specific. To handle IFS troubles, one can add OLDIFS=$IFS at the beginning before overwriting it, and then add IFS=$OLDIFS just after the set line. – Daniel Andersson Mar 27 '15 at 6:46

tripleee , Mar 27, 2015 at 6:58

FWIW the link above is broken. I was lazy and careless. The canonical location still works: iki.fi/era/unix/award.html#echo – tripleee Mar 27 '15 at 6:58

anubhava , May 9, 2012 at 17:09

Using bash regex capabilities:
re="^([^-]+)-(.*)$"
[[ "ABCDE-123456" =~ $re ]] && var1="${BASH_REMATCH[1]}" && var2="${BASH_REMATCH[2]}"
echo $var1
echo $var2

OUTPUT

ABCDE
123456

Cometsong , Oct 21, 2016 at 13:29

Love pre-defining the re for later use(s)! – Cometsong Oct 21 '16 at 13:29

Archibald , Nov 12, 2012 at 11:03

string="ABCDE-123456"
IFS=- # use "local IFS=-" inside the function
set $string
echo $1 # >>> ABCDE
echo $2 # >>> 123456

tripleee , Mar 27, 2015 at 7:02

Hmmm, isn't this just a restatement of my answer ? – tripleee Mar 27 '15 at 7:02

Archibald , Sep 18, 2015 at 12:36

Actually yes. I just clarified it a bit. – Archibald Sep 18 '15 at 12:36

[Nov 08, 2018] How to split a string in shell and get the last field

Nov 08, 2018 | stackoverflow.com

cd1 , Jul 1, 2010 at 23:29

Suppose I have the string 1:2:3:4:5 and I want to get its last field ( 5 in this case). How do I do that using Bash? I tried cut , but I don't know how to specify the last field with -f .

Stephen , Jul 2, 2010 at 0:05

You can use string operators :
$ foo=1:2:3:4:5
$ echo ${foo##*:}
5

This trims everything from the front until a ':', greedily.

${foo  <-- from variable foo
  ##   <-- greedy front trim
  *    <-- matches anything
  :    <-- until the last ':'
 }
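Comparing the greedy trim with its non-greedy counterpart makes the difference visible (a small demonstration using the same foo ):

$ echo ${foo#*:}    # shortest front trim: drops only "1:"
2:3:4:5
$ echo ${foo##*:}   # longest front trim: drops everything up to the last ':'
5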

eckes , Jan 23, 2013 at 15:23

While this is working for the given problem, the answer of William below ( stackoverflow.com/a/3163857/520162 ) also returns 5 if the string is 1:2:3:4:5: (while using the string operators yields an empty result). This is especially handy when parsing paths that could contain (or not) a finishing / character. – eckes Jan 23 '13 at 15:23

Dobz , Jun 25, 2014 at 11:44

How would you then do the opposite of this? to echo out '1:2:3:4:'? – Dobz Jun 25 '14 at 11:44

Mihai Danila , Jul 9, 2014 at 14:07

And how does one keep the part before the last separator? Apparently by using ${foo%:*} . # - from beginning; % - from end. # , % - shortest match; ## , %% - longest match. – Mihai Danila Jul 9 '14 at 14:07

Putnik , Feb 11, 2016 at 22:33

If i want to get the last element from path, how should I use it? echo ${pwd##*/} does not work. – Putnik Feb 11 '16 at 22:33

Stan Strum , Dec 17, 2017 at 4:22

@Putnik that command sees pwd as a variable. Try dir=$(pwd); echo ${dir##*/} . Works for me! – Stan Strum Dec 17 '17 at 4:22

a3nm , Feb 3, 2012 at 8:39

Another way is to reverse before and after cut :
$ echo ab:cd:ef | rev | cut -d: -f1 | rev
ef

This makes it very easy to get the last but one field, or any range of fields numbered from the end.
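For example, the last-but-one field, or the last two fields in their original order (a quick sketch):

$ echo 1:2:3:4:5 | rev | cut -d: -f2 | rev
4
$ echo 1:2:3:4:5 | rev | cut -d: -f1-2 | rev
4:5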

Dannid , Jan 14, 2013 at 20:50

This answer is nice because it uses 'cut', which the author is (presumably) already familiar. Plus, I like this answer because I am using 'cut' and had this exact question, hence finding this thread via search. – Dannid Jan 14 '13 at 20:50

funroll , Aug 12, 2013 at 19:51

Some cut-and-paste fodder for people using spaces as delimiters: echo "1 2 3 4" | rev | cut -d " " -f1 | rev – funroll Aug 12 '13 at 19:51

EdgeCaseBerg , Sep 8, 2013 at 5:01

the rev | cut -d -f1 | rev is so clever! Thanks! Helped me a bunch (my use case was rev | cut -d ' ' -f 2- | rev ) – EdgeCaseBerg Sep 8 '13 at 5:01

Anarcho-Chossid , Sep 16, 2015 at 15:54

Wow. Beautiful and dark magic. – Anarcho-Chossid Sep 16 '15 at 15:54

shearn89 , Aug 17, 2017 at 9:27

I always forget about rev , was just what I needed! cut -b20- | rev | cut -b10- | rev – shearn89 Aug 17 '17 at 9:27

William Pursell , Jul 2, 2010 at 7:09

It's difficult to get the last field using cut, but here's (one set of) solutions in awk and perl
$ echo 1:2:3:4:5 | awk -F: '{print $NF}'
5
$ echo 1:2:3:4:5 | perl -F: -wane 'print $F[-1]'
5

eckes , Jan 23, 2013 at 15:20

great advantage of this solution over the accepted answer: it also matches paths that contain or do not contain a finishing / character: /a/b/c/d and /a/b/c/d/ yield the same result ( d ) when processing pwd | awk -F/ '{print $NF}' . The accepted answer results in an empty result in the case of /a/b/c/d/ – eckes Jan 23 '13 at 15:20

stamster , May 21 at 11:52

@eckes In case of the AWK solution, on GNU bash, version 4.3.48(1)-release that's not true, as it matters whether you have a trailing slash or not. Simply put, AWK will use / as delimiter, and if your path is /my/path/dir/ it will use the value after the last delimiter, which is simply an empty string. So it's best to avoid a trailing slash if you need to do such a thing like I do. – stamster May 21 at 11:52

Nicholas M T Elliott , Jul 1, 2010 at 23:39

Assuming fairly simple usage (no escaping of the delimiter, for example), you can use grep:
$ echo "1:2:3:4:5" | grep -oE "[^:]+$"
5

Breakdown - find all the characters not the delimiter ([^:]) at the end of the line ($). -o only prints the matching part.

Dennis Williamson , Jul 2, 2010 at 0:05

One way:
var1="1:2:3:4:5"
var2=${var1##*:}

Another, using an array:

var1="1:2:3:4:5"
saveIFS=$IFS
IFS=":"
var2=($var1)
IFS=$saveIFS
var2=${var2[@]: -1}

Yet another with an array:

var1="1:2:3:4:5"
saveIFS=$IFS
IFS=":"
var2=($var1)
IFS=$saveIFS
count=${#var2[@]}
var2=${var2[$count-1]}

Using Bash (version >= 3.2) regular expressions:

var1="1:2:3:4:5"
[[ $var1 =~ :([^:]*)$ ]]
var2=${BASH_REMATCH[1]}

liuyang1 , Mar 24, 2015 at 6:02

Thanks so much for the array style, as I need this feature but don't have the cut and awk utils. – liuyang1 Mar 24 '15 at 6:02

user3133260 , Dec 24, 2013 at 19:04

$ echo "a b c d e" | tr ' ' '\n' | tail -1
e

Simply translate the delimiter into a newline and choose the last entry with tail -1 .

Yajo , Jul 30, 2014 at 10:13

It will fail if the last item contains a \n , but for most cases it is the most readable solution. – Yajo Jul 30 '14 at 10:13

Rafael , Nov 10, 2016 at 10:09

Using sed :
$ echo '1:2:3:4:5' | sed 's/.*://' # => 5

$ echo '' | sed 's/.*://' # => (empty)

$ echo ':' | sed 's/.*://' # => (empty)
$ echo ':b' | sed 's/.*://' # => b
$ echo '::c' | sed 's/.*://' # => c

$ echo 'a' | sed 's/.*://' # => a
$ echo 'a:' | sed 's/.*://' # => (empty)
$ echo 'a:b' | sed 's/.*://' # => b
$ echo 'a::c' | sed 's/.*://' # => c

Ab Irato , Nov 13, 2013 at 16:10

If your last field is a single character, you could do this:
a="1:2:3:4:5"

echo ${a: -1}
echo ${a:(-1)}

Check string manipulation in bash .

gniourf_gniourf , Nov 13, 2013 at 16:15

This doesn't work: it gives the last character of a , not the last field . – gniourf_gniourf Nov 13 '13 at 16:15

Ab Irato , Nov 25, 2013 at 13:25

True, that's the idea, if you know the length of the last field it's good. If not you have to use something else... – Ab Irato Nov 25 '13 at 13:25

sphakka , Jan 25, 2016 at 16:24

Interesting, I didn't know of these particular Bash string manipulations. It also resembles to Python's string/array slicing . – sphakka Jan 25 '16 at 16:24

ghostdog74 , Jul 2, 2010 at 1:16

Using Bash.
$ var1="1:2:3:4:0"
$ IFS=":"
$ set -- $var1
$ eval echo  \$${#}
0

Sopalajo de Arrierez , Dec 24, 2014 at 5:04

I would buy some details about this method, please :-) . – Sopalajo de Arrierez Dec 24 '14 at 5:04

Rafa , Apr 27, 2017 at 22:10

Could have used echo ${!#} instead of eval echo \$${#} . – Rafa Apr 27 '17 at 22:10

Crytis , Dec 7, 2016 at 6:51

echo "a:b:c:d:e"|xargs -d : -n1|tail -1

First use xargs to split the string using ":"; -n1 means every line has only one part. Then print the last part.

BDL , Dec 7, 2016 at 13:47

Although this might solve the problem, one should always add an explanation to it. – BDL Dec 7 '16 at 13:47

Crytis , Jun 7, 2017 at 9:13

already added.. – Crytis Jun 7 '17 at 9:13

021 , Apr 26, 2016 at 11:33

There are many good answers here, but still I want to share this one using basename :
 basename $(echo "a:b:c:d:e" | tr ':' '/')

However it will fail if there are already some '/' in your string. If slash / is your delimiter then you just have to (and should) use basename.

It's not the best answer but it just shows how you can be creative using bash commands.

Nahid Akbar , Jun 22, 2012 at 2:55

for x in `echo $str | tr ";" "\n"`; do echo $x; done

chepner , Jun 22, 2012 at 12:58

This runs into problems if there is whitespace in any of the fields. Also, it does not directly address the question of retrieving the last field. – chepner Jun 22 '12 at 12:58

Christoph Böddeker , Feb 19 at 15:50

For those that comfortable with Python, https://github.com/Russell91/pythonpy is a nice choice to solve this problem.
$ echo "a:b:c:d:e" | py -x 'x.split(":")[-1]'

From the pythonpy help: -x treat each row of stdin as x .

With that tool, it is easy to write python code that gets applied to the input.

baz , Nov 24, 2017 at 19:27

a solution using the read builtin
IFS=':' read -a field <<< "1:2:3:4:5"
echo ${field[4]}

[Nov 08, 2018] How do I split a string on a delimiter in Bash?

Notable quotes:
"... Bash shell script split array ..."
"... associative array ..."
"... pattern substitution ..."
"... Debian GNU/Linux ..."
Nov 08, 2018 | stackoverflow.com

stefanB , May 28, 2009 at 2:03

I have this string stored in a variable:
IN="bla@some.com;john@home.com"

Now I would like to split the strings by ; delimiter so that I have:

ADDR1="bla@some.com"
ADDR2="john@home.com"

I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.


After suggestions from the answers below, I ended up with the following which is what I was after:

#!/usr/bin/env bash

IN="bla@some.com;john@home.com"

mails=$(echo $IN | tr ";" "\n")

for addr in $mails
do
    echo "> [$addr]"
done

Output:

> [bla@some.com]
> [john@home.com]

There was a solution involving setting Internal_field_separator (IFS) to ; . I am not sure what happened with that answer, how do you reset IFS back to default?

RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:

IN="bla@some.com;john@home.com"

OIFS=$IFS
IFS=';'
mails2=$IN
for x in $mails2
do
    echo "> [$x]"
done

IFS=$OIFS

BTW, when I tried

mails2=($IN)

I only got the first string when printing it in loop, without brackets around $IN it works.

Brooks Moses , May 1, 2012 at 1:26

With regards to your "Edit2": You can simply "unset IFS" and it will return to the default state. There's no need to save and restore it explicitly unless you have some reason to expect that it's already been set to a non-default value. Moreover, if you're doing this inside a function (and, if you aren't, why not?), you can set IFS as a local variable and it will return to its previous value once you exit the function. – Brooks Moses May 1 '12 at 1:26

dubiousjim , May 31, 2012 at 5:21

@BrooksMoses: (a) +1 for using local IFS=... where possible; (b) -1 for unset IFS , this doesn't exactly reset IFS to its default value, though I believe an unset IFS behaves the same as the default value of IFS ($' \t\n'), however it seems bad practice to be assuming blindly that your code will never be invoked with IFS set to a custom value; (c) another idea is to invoke a subshell: (IFS=$custom; ...) when the subshell exits IFS will return to whatever it was originally. – dubiousjim May 31 '12 at 5:21

nicooga , Mar 7, 2016 at 15:32

I just want to have a quick look at the paths to decide where to throw an executable, so I resorted to run ruby -e "puts ENV.fetch('PATH').split(':')" . If you want to stay pure bash won't help but using any scripting language that has a built-in split is easier. – nicooga Mar 7 '16 at 15:32

Jeff , Apr 22 at 17:51

This is kind of a drive-by comment, but since the OP used email addresses as the example, has anyone bothered to answer it in a way that is fully RFC 5322 compliant, namely that any quoted string can appear before the @ which means you're going to need regular expressions or some other kind of parser instead of naive use of IFS or other simplistic splitter functions. – Jeff Apr 22 at 17:51

user2037659 , Apr 26 at 20:15

for x in $(IFS=';';echo $IN); do echo "> [$x]"; done – user2037659 Apr 26 at 20:15

Johannes Schaub - litb , May 28, 2009 at 2:23

You can set the internal field separator (IFS) variable, and then let it parse into an array. When this happens in a command, then the assignment to IFS only takes place to that single command's environment (to read ). It then parses the input according to the IFS variable value into an array, which we can then iterate over.
IFS=';' read -ra ADDR <<< "$IN"
for i in "${ADDR[@]}"; do
    # process "$i"
done

It will parse one line of items separated by ; , pushing it into an array. Stuff for processing whole of $IN , each time one line of input separated by ; :

 while IFS=';' read -ra ADDR; do
      for i in "${ADDR[@]}"; do
          # process "$i"
      done
 done <<< "$IN"

Chris Lutz , May 28, 2009 at 2:25

This is probably the best way. How long will IFS persist in its current value, can it mess up my code by being set when it shouldn't be, and how can I reset it when I'm done with it? – Chris Lutz May 28 '09 at 2:25

Johannes Schaub - litb , May 28, 2009 at 3:04

now after the fix applied, only within the duration of the read command :) – Johannes Schaub - litb May 28 '09 at 3:04

lhunath , May 28, 2009 at 6:14

You can read everything at once without using a while loop: read -r -d '' -a addr <<< "$in" # The -d '' is key here, it tells read not to stop at the first newline (which is the default -d) but to continue until EOF or a NULL byte (which only occur in binary data). – lhunath May 28 '09 at 6:14

Charles Duffy , Jul 6, 2013 at 14:39

@LucaBorrione Setting IFS on the same line as the read with no semicolon or other separator, as opposed to in a separate command, scopes it to that command -- so it's always "restored"; you don't need to do anything manually. – Charles Duffy Jul 6 '13 at 14:39

chepner , Oct 2, 2014 at 3:50

@imagineerThis There is a bug involving herestrings and local changes to IFS that requires $IN to be quoted. The bug is fixed in bash 4.3. – chepner Oct 2 '14 at 3:50

palindrom , Mar 10, 2011 at 9:00

Taken from Bash shell script split array :
IN="bla@some.com;john@home.com"
arrIN=(${IN//;/ })

Explanation:

This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).

The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion .

There are some common gotchas:

  1. If the original string has spaces, you will need to use IFS :
    • IFS=':'; arrIN=($IN); unset IFS;
  2. If the original string has spaces and the delimiter is a new line, you can set IFS with:
    • IFS=$'\n'; arrIN=($IN); unset IFS;
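Putting it together, a quick check of the resulting array with declare -p (a sketch using the question's sample string):

IN="bla@some.com;john@home.com"
arrIN=(${IN//;/ })
declare -p arrIN
# declare -a arrIN='([0]="bla@some.com" [1]="john@home.com")'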

Oz123 , Mar 21, 2011 at 18:50

I just want to add: this is the simplest of all, you can access array elements with ${arrIN[1]} (starting from zeros of course) – Oz123 Mar 21 '11 at 18:50

KomodoDave , Jan 5, 2012 at 15:13

Found it: the technique of modifying a variable within a ${} is known as 'parameter expansion'. – KomodoDave Jan 5 '12 at 15:13

qbolec , Feb 25, 2013 at 9:12

Does it work when the original string contains spaces? – qbolec Feb 25 '13 at 9:12

Ethan , Apr 12, 2013 at 22:47

No, I don't think this works when there are also spaces present... it's converting the ',' to ' ' and then building a space-separated array. – Ethan Apr 12 '13 at 22:47

Charles Duffy , Jul 6, 2013 at 14:39

This is a bad approach for other reasons: For instance, if your string contains ;*; , then the * will be expanded to a list of filenames in the current directory. -1 – Charles Duffy Jul 6 '13 at 14:39

Chris Lutz , May 28, 2009 at 2:09

If you don't mind processing them immediately, I like to do this:
for i in $(echo $IN | tr ";" "\n")
do
  # process
done

You could use this kind of loop to initialize an array, but there's probably an easier way to do it. Hope this helps, though.

Chris Lutz , May 28, 2009 at 2:42

You should have kept the IFS answer. It taught me something I didn't know, and it definitely made an array, whereas this just makes a cheap substitute. – Chris Lutz May 28 '09 at 2:42

Johannes Schaub - litb , May 28, 2009 at 2:59

I see. Yeah i find doing these silly experiments, i'm going to learn new things each time i'm trying to answer things. I've edited stuff based on #bash IRC feedback and undeleted :) – Johannes Schaub - litb May 28 '09 at 2:59

lhunath , May 28, 2009 at 6:12

-1, you're obviously not aware of wordsplitting, because it's introducing two bugs in your code. one is when you don't quote $IN and the other is when you pretend a newline is the only delimiter used in wordsplitting. You are iterating over every WORD in IN, not every line, and DEFINITELY not every element delimited by a semicolon, though it may appear to have the side-effect of looking like it works. – lhunath May 28 '09 at 6:12

Johannes Schaub - litb , May 28, 2009 at 17:00

You could change it to echo "$IN" | tr ';' '\n' | while read -r ADDY; do # process "$ADDY"; done to make him lucky, i think :) Note that this will fork, and you can't change outer variables from within the loop (that's why i used the <<< "$IN" syntax) then – Johannes Schaub - litb May 28 '09 at 17:00

mklement0 , Apr 24, 2013 at 14:13

To summarize the debate in the comments: Caveats for general use : the shell applies word splitting and expansions to the string, which may be undesired; just try it with IN="bla@some.com;john@home.com;*;broken apart" . In short: this approach will break, if your tokens contain embedded spaces and/or chars. such as * that happen to make a token match filenames in the current folder. – mklement0 Apr 24 '13 at 14:13

F. Hauri , Apr 13, 2013 at 14:20

Compatible answer

For this SO question, there are already a lot of different ways to do this in bash . But bash has many special features, so-called bashisms , that work well but won't work in any other shell .

In particular, arrays , associative arrays , and pattern substitution are pure bashisms and may not work under other shells .

On my Debian GNU/Linux , there is a standard shell called dash ; I also know many people who like to use ksh .

Finally, in very small situations, there is a special tool called busybox with its own shell interpreter ( ash ).

Requested string

The string sample in SO question is:

IN="bla@some.com;john@home.com"

As this could be useful with whitespaces and as whitespaces could modify the result of the routine, I prefer to use this sample string:

 IN="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
Split string based on delimiter in bash (version >=4.2)

Under pure bash, we may use arrays and IFS :

var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
oIFS="$IFS"
IFS=";"
declare -a fields=($var)
IFS="$oIFS"
unset oIFS

Or, in one command:

IFS=\; read -a fields <<<"$var"

Using this syntax under recent bash doesn't change $IFS for the current session, but only for the current command:

set | grep ^IFS=
IFS=$' \t\n'

Now the string var is split and stored into an array (named fields ):

set | grep ^fields=\\\|^var=
fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")
var='bla@some.com;john@home.com;Full Name <fulnam@other.org>'

We can inspect the variable content with declare -p :

declare -p var fields
declare -- var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
declare -a fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")

read is the quickest way to do the split, because there are no forks and no external resources called.

From there, you could use the syntax you already know for processing each field:

for x in "${fields[@]}";do
    echo "> [$x]"
    done
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]

or drop each field after processing (I like this shifting approach):

while [ "$fields" ] ;do
    echo "> [$fields]"
    fields=("${fields[@]:1}")
    done
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]

or even for simple printout (shorter syntax):

printf "> [%s]\n" "${fields[@]}"
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]
Split string based on delimiter in shell

But if you want to write something usable under many shells, you have to avoid bashisms .

There is a syntax, used in many shells, for splitting a string across first or last occurrence of a substring:

${var#*SubStr}  # will drop begin of string up to first occur of `SubStr`
${var##*SubStr} # will drop begin of string up to last occur of `SubStr`
${var%SubStr*}  # will drop part of string from last occur of `SubStr` to the end
${var%%SubStr*} # will drop part of string from first occur of `SubStr` to the end

(The absence of this from other answers is the main reason for publishing mine ;)

As pointed out by Score_Under :

# and % delete the shortest possible matching string, and

## and %% delete the longest possible.

This little sample script works well under bash , dash , ksh , busybox and was tested under Mac OS's bash too:

var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
while [ "$var" ] ;do
    iter=${var%%;*}
    echo "> [$iter]"
    [ "$var" = "$iter" ] && \
        var='' || \
        var="${var#*;}"
  done
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]

Have fun!

Score_Under , Apr 28, 2015 at 16:58

The # , ## , % , and %% substitutions have what is IMO an easier explanation to remember (for how much they delete): # and % delete the shortest possible matching string, and ## and %% delete the longest possible. – Score_Under Apr 28 '15 at 16:58

sorontar , Oct 26, 2016 at 4:36

The IFS=\; read -a fields <<<"$var" fails on newlines and adds a trailing newline. The other solution removes a trailing empty field. – sorontar Oct 26 '16 at 4:36

Eric Chen , Aug 30, 2017 at 17:50

The shell delimiter is the most elegant answer, period. – Eric Chen Aug 30 '17 at 17:50

sancho.s , Oct 4 at 3:42

Could the last alternative be used with a list of field separators set somewhere else? For instance, I mean to use this as a shell script, and pass a list of field separators as a positional parameter. – sancho.s Oct 4 at 3:42

F. Hauri , Oct 4 at 7:47

Yes, in a loop: for sep in "#" "ł" "@" ; do ... var="${var#*$sep}" ... – F. Hauri Oct 4 at 7:47

DougW , Apr 27, 2015 at 18:20

I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.

In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.

Example:

$ echo "bla@some.com;john@home.com" | cut -d ";" -f 1
bla@some.com
$ echo "bla@some.com;john@home.com" | cut -d ";" -f 2
john@home.com

You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.

This gets more useful when you have a delimited log file with rows like this:

2015-04-27|12345|some action|an attribute|meta data

cut is very handy to be able to cat this file and select a particular field for further processing.
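For example, pulling individual fields out of rows like the one above (a sketch; the file name app.log is made up):

$ cut -d'|' -f3 app.log
some action
$ cut -d'|' -f2,4 app.log
12345|an attribute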

MisterMiyagi , Nov 2, 2016 at 8:42

Kudos for using cut , it's the right tool for the job! Much cleaner than any of those shell hacks. – MisterMiyagi Nov 2 '16 at 8:42

uli42 , Sep 14, 2017 at 8:30

This approach will only work if you know the number of elements in advance; you'd need to program some more logic around it. It also runs an external tool for every element. – uli42 Sep 14 '17 at 8:30

Louis Loudog Trottier , May 10 at 4:20

Exactly what I was looking for, trying to avoid empty strings in a CSV. Now I can point to the exact 'column' value as well. Works with IFS already used in a loop. Better than expected for my situation. – Louis Loudog Trottier May 10 at 4:20

, May 28, 2009 at 10:31

How about this approach:
IN="bla@some.com;john@home.com" 
set -- "$IN" 
IFS=";"; declare -a Array=($*) 
echo "${Array[@]}" 
echo "${Array[0]}" 
echo "${Array[1]}"

Source

Yzmir Ramirez , Sep 5, 2011 at 1:06

+1 ... but I wouldn't name the variable "Array" ... pet peev I guess. Good solution. – Yzmir Ramirez Sep 5 '11 at 1:06

ata , Nov 3, 2011 at 22:33

+1 ... but the "set" and declare -a are unnecessary. You could as well have used just IFS=";" && Array=($IN) – ata Nov 3 '11 at 22:33

Luca Borrione , Sep 3, 2012 at 9:26

+1 Only a side note: shouldn't it be recommendable to keep the old IFS and then restore it? (as shown by stefanB in his edit3) people landing here (sometimes just copying and pasting a solution) might not think about this – Luca Borrione Sep 3 '12 at 9:26

Charles Duffy , Jul 6, 2013 at 14:44

-1: First, @ata is right that most of the commands in this do nothing. Second, it uses word-splitting to form the array, and doesn't do anything to inhibit glob-expansion when doing so (so if you have glob characters in any of the array elements, those elements are replaced with matching filenames). – Charles Duffy Jul 6 '13 at 14:44

John_West , Jan 8, 2016 at 12:29

Suggest to use $'...' : IN=$'bla@some.com;john@home.com;bet <d@\ns* kl.com>' . Then echo "${Array[2]}" will print a string with newline. set -- "$IN" is also neccessary in this case. Yes, to prevent glob expansion, the solution should include set -f . – John_West Jan 8 '16 at 12:29

Steven Lizarazo , Aug 11, 2016 at 20:45

This worked for me:
string="1;2"
echo $string | cut -d';' -f1 # output is 1
echo $string | cut -d';' -f2 # output is 2

Pardeep Sharma , Oct 10, 2017 at 7:29

this is short and sweet :) – Pardeep Sharma Oct 10 '17 at 7:29

space earth , Oct 17, 2017 at 7:23

Thanks...Helped a lot – space earth Oct 17 '17 at 7:23

mojjj , Jan 8 at 8:57

cut works only with a single char as delimiter. – mojjj Jan 8 at 8:57

lothar , May 28, 2009 at 2:12

echo "bla@some.com;john@home.com" | sed -e 's/;/\n/g'
bla@some.com
john@home.com

Luca Borrione , Sep 3, 2012 at 10:08

-1 what if the string contains spaces? for example IN="this is first line; this is second line" arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) will produce an array of 8 elements in this case (an element for each word space separated), rather than 2 (an element for each line semi colon separated) – Luca Borrione Sep 3 '12 at 10:08

lothar , Sep 3, 2012 at 17:33

@Luca No the sed script creates exactly two lines. What creates the multiple entries for you is when you put it into a bash array (which splits on white space by default) – lothar Sep 3 '12 at 17:33

Luca Borrione , Sep 4, 2012 at 7:09

That's exactly the point: the OP needs to store entries into an array to loop over it, as you can see in his edits. I think your (good) answer missed to mention to use arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) to achieve that, and to advice to change IFS to IFS=$'\n' for those who land here in the future and needs to split a string containing spaces. (and to restore it back afterwards). :) – Luca Borrione Sep 4 '12 at 7:09

lothar , Sep 4, 2012 at 16:55

@Luca Good point. However the array assignment was not in the initial question when I wrote up that answer. – lothar Sep 4 '12 at 16:55

Ashok , Sep 8, 2012 at 5:01

This also works:
IN="bla@some.com;john@home.com"
echo ADD1=`echo $IN | cut -d \; -f 1`
echo ADD2=`echo $IN | cut -d \; -f 2`

Be careful, this solution is not always correct. In case you pass "bla@some.com" only, it will assign it to both ADD1 and ADD2.

fersarr , Mar 3, 2016 at 17:17

You can use -s to avoid the mentioned problem: superuser.com/questions/896800/ "-f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified" – fersarr Mar 3 '16 at 17:17

Tony , Jan 14, 2013 at 6:33

I think AWK is the best and most efficient command to resolve your problem. AWK is included by default in almost every Linux distribution.
echo "bla@some.com;john@home.com" | awk -F';' '{print $1,$2}'

will give

bla@some.com john@home.com

Of course you can store each email address by redefining the awk print field.

Jaro , Jan 7, 2014 at 21:30

Or even simpler: echo "bla@some.com;john@home.com" | awk 'BEGIN{RS=";"} {print}' – Jaro Jan 7 '14 at 21:30

Aquarelle , May 6, 2014 at 21:58

@Jaro This worked perfectly for me when I had a string with commas and needed to reformat it into lines. Thanks. – Aquarelle May 6 '14 at 21:58

Eduardo Lucio , Aug 5, 2015 at 12:59

It worked in this scenario -> "echo "$SPLIT_0" | awk -F' inode=' '{print $1}'"! I had problems when trying to use strings (" inode=") instead of characters (";"). $1, $2, $3, $4 are set as positions in an array! If there is a way of setting an array... better! Thanks! – Eduardo Lucio Aug 5 '15 at 12:59

Tony , Aug 6, 2015 at 2:42

@EduardoLucio, what I'm thinking about is maybe you can first replace your delimiter inode= with ; , for example by sed -i 's/inode\=/\;/g' your_file_to_process , then define -F';' when applying awk ; hope that can help you. – Tony Aug 6 '15 at 2:42

nickjb , Jul 5, 2011 at 13:41

A different take on Darron's answer , this is how I do it:
IN="bla@some.com;john@home.com"
read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)

ColinM , Sep 10, 2011 at 0:31

This doesn't work. – ColinM Sep 10 '11 at 0:31

nickjb , Oct 6, 2011 at 15:33

I think it does! Run the commands above and then "echo $ADDR1 ... $ADDR2" and i get "bla@some.com ... john@home.com" output – nickjb Oct 6 '11 at 15:33

Nick , Oct 28, 2011 at 14:36

This worked REALLY well for me... I used it to iterate over an array of strings which contained comma separated DB,SERVER,PORT data to use mysqldump. – Nick Oct 28 '11 at 14:36

dubiousjim , May 31, 2012 at 5:28

Diagnosis: the IFS=";" assignment exists only in the $(...; echo $IN) subshell; this is why some readers (including me) initially think it won't work. I assumed that all of $IN was getting slurped up by ADDR1. But nickjb is correct; it does work. The reason is that echo $IN command parses its arguments using the current value of $IFS, but then echoes them to stdout using a space delimiter, regardless of the setting of $IFS. So the net effect is as though one had called read ADDR1 ADDR2 <<< "bla@some.com john@home.com" (note the input is space-separated not ;-separated). – dubiousjim May 31 '12 at 5:28

sorontar , Oct 26, 2016 at 4:43

This fails on spaces and newlines, and also expand wildcards * in the echo $IN with an unquoted variable expansion. – sorontar Oct 26 '16 at 4:43

gniourf_gniourf , Jun 26, 2014 at 9:11

In Bash, a bullet proof way, that will work even if your variable contains newlines:
IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")

Look:

$ in=$'one;two three;*;there is\na newline\nin this field'
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
a newline
in this field")'

The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in , with no trailing newline thanks to printf . Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:

$ in='one;two;three;'    # there's an empty field
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'

the trailing empty field is preserved.


Update for Bash≥4.4

Since Bash 4.4, the builtin mapfile (aka readarray ) supports the -d option to specify a delimiter. Hence another canonical way is:

mapfile -d ';' -t array < <(printf '%s;' "$in")
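Used on a string with a trailing empty field it behaves like the read version above (a quick check; requires bash ≥ 4.4):

$ in='one;two;three;'
$ mapfile -d ';' -t array < <(printf '%s;' "$in")
$ declare -p array
declare -a array=([0]="one" [1]="two" [2]="three" [3]="")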

John_West , Jan 8, 2016 at 12:10

I found it as the rare solution on that list that works correctly with \n , spaces and * simultaneously. Also, no loops; array variable is accessible in the shell after execution (contrary to the highest upvoted answer). Note, in=$'...' , it does not work with double quotes. I think, it needs more upvotes. – John_West Jan 8 '16 at 12:10

Darron , Sep 13, 2010 at 20:10

How about this one liner, if you're not using arrays:
IFS=';' read ADDR1 ADDR2 <<<$IN

dubiousjim , May 31, 2012 at 5:36

Consider using read -r ... to ensure that, for example, the two characters "\t" in the input end up as the same two characters in your variables (instead of a single tab char). – dubiousjim May 31 '12 at 5:36

Luca Borrione , Sep 3, 2012 at 10:07

-1 This is not working here (ubuntu 12.04). Adding echo "ADDR1 $ADDR1"\n echo "ADDR2 $ADDR2" to your snippet will output ADDR1 bla@some.com john@home.com\nADDR2 (\n is newline) – Luca Borrione Sep 3 '12 at 10:07

chepner , Sep 19, 2015 at 13:59

This is probably due to a bug involving IFS and here strings that was fixed in bash 4.3. Quoting $IN should fix it. (In theory, $IN is not subject to word splitting or globbing after it expands, meaning the quotes should be unnecessary. Even in 4.3, though, there's at least one bug remaining--reported and scheduled to be fixed--so quoting remains a good idea.) – chepner Sep 19 '15 at 13:59

sorontar , Oct 26, 2016 at 4:55

This breaks if $in contain newlines even if $IN is quoted. And adds a trailing newline. – sorontar Oct 26 '16 at 4:55

kenorb , Sep 11, 2015 at 20:54

Here is a clean 3-liner:
in="foo@bar;bizz@buzz;fizz@buzz;buzz@woof"
IFS=';' list=($in)
for item in "${list[@]}"; do echo $item; done

where IFS delimits words based on the separator and () is used to create an array . Then [@] is used to return each item as a separate word.

If you've any code after that, you also need to restore $IFS , e.g. unset IFS .

sorontar , Oct 26, 2016 at 5:03

The use of $in unquoted allows wildcards to be expanded. – sorontar Oct 26 '16 at 5:03

user2720864 , Sep 24 at 13:46

+ for the unset command – user2720864 Sep 24 at 13:46

Emilien Brigand , Aug 1, 2016 at 13:15

Without setting the IFS

If you just have one colon you can do that:

a="foo:bar"
b=${a%:*}
c=${a##*:}

you will get:

b = foo
c = bar

Victor Choy , Sep 16, 2015 at 3:34

There is a simple and smart way like this:
echo "add:sfff" | xargs -d: -i  echo {}

But you must use GNU xargs; BSD xargs doesn't support -d delim. If you use an Apple Mac like me, you can install GNU xargs:

brew install findutils

then

echo "add:sfff" | gxargs -d: -i  echo {}

Halle Knast , May 24, 2017 at 8:42

The following Bash/zsh function splits its first argument on the delimiter given by the second argument:
split() {
    local string="$1"
    local delimiter="$2"
    if [ -n "$string" ]; then
        local part
        while read -d "$delimiter" part; do
            echo $part
        done <<< "$string"
        echo $part
    fi
}

For instance, the command

$ split 'a;b;c' ';'

yields

a
b
c

This output may, for instance, be piped to other commands. Example:

$ split 'a;b;c' ';' | cat -n
1   a
2   b
3   c

Compared to the other solutions given, this one has the following advantages:

If desired, the function may be put into a script as follows:

#!/usr/bin/env bash

split() {
    # ...
}

split "$@"

sandeepkunkunuru , Oct 23, 2017 at 16:10

works and neatly modularized. – sandeepkunkunuru Oct 23 '17 at 16:10

Prospero , Sep 25, 2011 at 1:09

This is the simplest way to do it.
spo='one;two;three'
OIFS=$IFS
IFS=';'
spo_array=($spo)
IFS=$OIFS
echo ${spo_array[*]}

rashok , Oct 25, 2016 at 12:41

IN="bla@some.com;john@home.com"
IFS=';'
read -a IN_arr <<< "${IN}"
for entry in "${IN_arr[@]}"
do
    echo $entry
done

Output

bla@some.com
john@home.com

System : Ubuntu 12.04.1

codeforester , Jan 2, 2017 at 5:37

IFS is not getting set in the specific context of read here and hence it can upset rest of the code, if any. – codeforester Jan 2 '17 at 5:37

shuaihanhungry , Jan 20 at 15:54

you can apply awk to many situations
echo "bla@some.com;john@home.com"|awk -F';' '{printf "%s\n%s\n", $1, $2}'

also you can use this

echo "bla@some.com;john@home.com"|awk -F';' '{print $1,$2}' OFS="\n"

ghost , Apr 24, 2013 at 13:13

If there are no spaces, why not this?
IN="bla@some.com;john@home.com"
arr=(`echo $IN | tr ';' ' '`)

echo ${arr[0]}
echo ${arr[1]}

eukras , Oct 22, 2012 at 7:10

There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:
IN="bla@some.com;john@home.com"
declare -a a="(${IN/;/ })";

Now ${a[0]} , ${a[1]} , etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:

for i in ${a[*]}; do echo $i; done

IMPORTANT NOTE:

This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.

olibre , Oct 7, 2013 at 13:33

Does not work when IN contains more than two e-mail addresses. Please refer to same idea (but fixed) at palindrom's answerolibre Oct 7 '13 at 13:33

sorontar , Oct 26, 2016 at 5:14

Better use ${IN//;/ } (double slash) to make it also work with more than two values. Beware that any wildcard ( *?[ ) will be expanded. And a trailing empty field will be discarded. – sorontar Oct 26 '16 at 5:14

jeberle , Apr 30, 2013 at 3:10

Use the set built-in to load up the $@ array:
IN="bla@some.com;john@home.com"
IFS=';'; set $IN; IFS=$' \t\n'

Then, let the party begin:

echo $#
for a; do echo $a; done
ADDR1=$1 ADDR2=$2

sorontar , Oct 26, 2016 at 5:17

Better use set -- $IN to avoid some issues with "$IN" starting with dash. Still, the unquoted expansion of $IN will expand wildcards ( *?[ ). – sorontar Oct 26 '16 at 5:17

NevilleDNZ , Sep 2, 2013 at 6:30

Two bourne-ish alternatives, neither of which requires bash arrays:

Case 1 : Keep it nice and simple: Use a NewLine as the Record-Separator... eg.

IN="bla@some.com
john@home.com"

while read i; do
  # process "$i" ... eg.
    echo "[email:$i]"
done <<< "$IN"

Note: in this first case no sub-process is forked to assist with list manipulation.

Idea: Maybe it is worth using NL extensively internally , and only converting to a different RS when generating the final result externally .

Case 2 : Using a ";" as a record separator... eg.

NL="
" IRS=";" ORS=";"

conv_IRS() {
  exec tr "$1" "$NL"
}

conv_ORS() {
  exec tr "$NL" "$1"
}

IN="bla@some.com;john@home.com"
IN="$(conv_IRS ";" <<< "$IN")"

while read i; do
  # process "$i" ... eg.
    echo -n "[email:$i]$ORS"
done <<< "$IN"

In both cases a sub-list can be composed within the loop and is persistent after the loop has completed. This is useful when manipulating lists in memory instead of storing lists in files. {p.s. keep calm and carry on B-) }

fedorqui , Jan 8, 2015 at 10:21

Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk :
awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"

This sets the field separator to ; , so that it can loop through the fields with a for loop and print accordingly.

Test
$ IN="bla@some.com;john@home.com"
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
> [bla@some.com]
> [john@home.com]

With another input:

$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c   d;e_;f"
> [a]
> [b]
> [c   d]
> [e_]
> [f]

18446744073709551615 , Feb 20, 2015 at 10:49

In Android shell, most of the proposed methods just do not work:
$ IFS=':' read -ra ADDR <<<"$PATH"                             
/system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory

What does work is:

$ for i in ${PATH//:/ }; do echo $i; done
/sbin
/vendor/bin
/system/sbin
/system/bin
/system/xbin

where // means global replacement.

sorontar , Oct 26, 2016 at 5:08

Fails if any part of $PATH contains spaces (or newlines). Also expands wildcards (asterisk *, question mark ? and braces [ ]). – sorontar Oct 26 '16 at 5:08

Eduardo Lucio , Apr 4, 2016 at 19:54

Okay guys!

Here's my answer!

DELIMITER_VAL='='

read -d '' F_ABOUT_DISTRO_R <<"EOF"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
EOF

SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
while read -r line; do
   SPLIT+=("$line")
done <<< "$SPLIT_NOW"
for i in "${SPLIT[@]}"; do
    echo "$i"
done

Why is this approach "the best" for me?

For two reasons:

  1. You do not need to escape the delimiter;
  2. You will not have problems with blank spaces. The values will be properly separated in the array!

[]'s

gniourf_gniourf , Jan 30, 2017 at 8:26

FYI, /etc/os-release and /etc/lsb-release are meant to be sourced, and not parsed. So your method is really wrong. Moreover, you're not quite answering the question about splitting a string on a delimiter. – gniourf_gniourf Jan 30 '17 at 8:26

Michael Hale , Jun 14, 2012 at 17:38

A one-liner to split a string separated by ';' into an array is:
IN="bla@some.com;john@home.com"
ADDRS=( $(IFS=";" echo "$IN") )
echo ${ADDRS[0]}
echo ${ADDRS[1]}

This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

Luca Borrione , Sep 3, 2012 at 10:04

-1 this doesn't work here (ubuntu 12.04). it prints only the first echo with all $IN value in it, while the second is empty. you can see it if you put echo "0: "${ADDRS[0]}\n echo "1: "${ADDRS[1]} the output is 0: bla@some.com;john@home.com\n 1: (\n is new line) – Luca Borrione Sep 3 '12 at 10:04

Luca Borrione , Sep 3, 2012 at 10:05

please refer to nickjb's answer at for a working alternative to this idea stackoverflow.com/a/6583589/1032370 – Luca Borrione Sep 3 '12 at 10:05

Score_Under , Apr 28, 2015 at 17:09

-1, 1. IFS isn't being set in that subshell (it's being passed to the environment of "echo", which is a builtin, so nothing is happening anyway). 2. $IN is quoted so it isn't subject to IFS splitting. 3. The process substitution is split by whitespace, but this may corrupt the original data. – Score_Under Apr 28 '15 at 17:09

ajaaskel , Oct 10, 2014 at 11:33

IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
set -f
oldifs="$IFS"
IFS=';'; arrayIN=($IN)
IFS="$oldifs"
for i in "${arrayIN[@]}"; do
echo "$i"
done
set +f

Output:

bla@some.com
john@home.com
Charlie Brown <cbrown@acme.com
!"#$%&/()[]{}*? are no problem
simple is beautiful :-)

Explanation: A simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing it. A standard for loop handles the individual items in that array as usual. Notice that the list given for the IN variable must be "hard" quoted, that is, with single ticks.

IFS must be saved and restored, since Bash does not treat an assignment the same way as a command. An alternate workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.
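A minimal sketch of that function-based workaround (the function name split_in is made up; a local IFS means nothing has to be saved or restored by the caller):

split_in() {
    local IFS=';'    # scoped to this function
    arrayIN=($1)     # word-splitting happens here, under the local IFS
}
set -f               # still needed to suppress globbing, as above
split_in "$IN"
set +f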

gniourf_gniourf , Feb 20, 2015 at 16:45

!"#$%&/()[]{}*? are no problem well... not quite: []*? are glob characters. So what about creating this directory and file: `mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' and running your command? simple may be beautiful, but when it's broken, it's broken. – gniourf_gniourf Feb 20 '15 at 16:45

ajaaskel , Feb 25, 2015 at 7:20

@gniourf_gniourf The string is stored in a variable. Please see the original question. – ajaaskel Feb 25 '15 at 7:20

gniourf_gniourf , Feb 25, 2015 at 7:26

@ajaaskel you didn't fully understand my comment. Go in a scratch directory and issue these commands: mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' . They will only create a directory and a file, with weird looking names, I must admit. Then run your commands with the exact IN you gave: IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)' . You'll see that you won't get the output you expect. Because you're using a method subject to pathname expansions to split your string. – gniourf_gniourf Feb 25 '15 at 7:26

gniourf_gniourf , Feb 25, 2015 at 7:29

This is to demonstrate that the characters * , ? , [...] and even, if extglob is set, !(...) , @(...) , ?(...) , +(...) are problems with this method! – gniourf_gniourf Feb 25 '15 at 7:29

ajaaskel , Feb 26, 2015 at 15:26

@gniourf_gniourf Thanks for detailed comments on globbing. I adjusted the code to have globbing off. My point was however just to show that rather simple assignment can do the splitting job. – ajaaskel Feb 26 '15 at 15:26

> , Dec 19, 2013 at 21:39

Maybe not the most elegant solution, but works with * and spaces:
IN="bla@so me.com;*;john@home.com"
for i in `delims=${IN//[^;]}; seq 1 $((${#delims} + 1))`
do
   echo "> [`echo $IN | cut -d';' -f$i`]"
done

Outputs

> [bla@so me.com]
> [*]
> [john@home.com]

Other example (delimiters at beginning and end):

IN=";bla@so me.com;*;john@home.com;"
> []
> [bla@so me.com]
> [*]
> [john@home.com]
> []

Basically it removes every character other than ; , making delims e.g. ;;; . Then it does a for loop from 1 to the number of delimiters, as counted by ${#delims} . The final step is to safely get the $i th part using cut .

[Nov 08, 2018] Utilizing multi core for tar+gzip-bzip compression-decompression

Nov 08, 2018 | stackoverflow.com



user1118764 , Sep 7, 2012 at 6:58

I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

Is there any way I can utilize the unused cores to make it faster?

Warren Severin , Nov 13, 2017 at 4:37

The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

Mark Adler , Sep 7, 2012 at 14:48

You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
tar cf - paths-to-archive | pigz > archive.tar.gz

By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

user788171 , Feb 20, 2013 at 12:43

How do you use pigz to decompress in the same fashion? Or does it only work for compression? – user788171 Feb 20 '13 at 12:43

Mark Adler , Feb 20, 2013 at 16:18

pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression. The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores. – Mark Adler Feb 20 '13 at 16:18
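In practice, decompression is the mirror image of the compression pipe (a sketch):

pigz -dc archive.tar.gz | tar xf -
# or, via tar's --use-compress-program switch:
tar -I pigz -xf archive.tar.gz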

Garrett , Mar 1, 2014 at 7:26

The hyphen here is stdout (see this page ). – Garrett Mar 1 '14 at 7:26

Mark Adler , Jul 2, 2014 at 21:29

Yes. 100% compatible in both directions. – Mark Adler Jul 2 '14 at 21:29

Mark Adler , Apr 23, 2015 at 5:23

There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. – Mark Adler Apr 23 '15 at 5:23

Jen , Jun 14, 2013 at 14:34

You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

For example use:

tar -c --use-compress-program=pigz -f tar.file dir_to_zip

ranman , Nov 13, 2013 at 10:01

This is an awesome little nugget of knowledge and deserves more upvotes. I had no idea this option even existed and I've read the man page a few times over the years. – ranman Nov 13 '13 at 10:01

Valerio Schiavoni , Aug 5, 2014 at 22:38

Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

bovender , Sep 18, 2015 at 10:14

@ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

Valerio Schiavoni , Sep 28, 2015 at 23:41

On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

Offenso , Jan 11, 2017 at 17:26

I prefer tar - dir_to_zip | pv | pigz > tar.file . pv helps me estimate progress; you can skip it. But still, it's easier to write and remember. – Offenso Jan 11 '17 at 17:26

Maxim Suslov , Dec 18, 2014 at 7:31

Common approach

There is an option for the tar program:

-I, --use-compress-program PROG
      filter through PROG (must accept -d)

You can use a multithreaded version of an archiver or compressor utility.

The most popular multithreaded archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

$ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
$ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive

The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

$ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
$ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz

Input and output of the single-threaded and multithreaded versions are compatible: you can compress using the multithreaded version and decompress using the single-threaded version, and vice versa.
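To see that compatibility in action, here is a sketch of a round trip: compress with multithreaded pigz, decompress with ordinary single-threaded gzip (paths_to_archive is a placeholder):

$ tar cf - paths_to_archive | pigz > archive.tar.gz     # compress on all cores
$ gunzip -c archive.tar.gz | tar xf -                   # decompress with stock gzip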

p7zip

For p7zip compression you need a small shell script like the following:

#!/bin/sh
# 7zhelper.sh: adapt 7za to tar's compress-program interface.
# tar invokes the program with -d to decompress (extract) and with
# no arguments to compress (add); -si/-so wire 7za to stdin/stdout.
case $1 in
  -d) 7za -txz -si -so e;;
   *) 7za -txz -si -so a .;;
esac 2>/dev/null

Save it as 7zhelper.sh and make it executable. Here is an example of usage:

$ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
$ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
xz

Regarding multithreaded XZ support: if you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0").

This is a fragment of the man page for the 5.1.0alpha version:

Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

However, this will not work for decompression of files that haven't been compressed with threading enabled. From the man page for version 5.2.2:

Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.
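Putting that together, a sketch of multithreaded xz compression through tar on XZ Utils 5.2.0 or later ( -T 0 means "use as many threads as there are cores"; paths_to_archive is a placeholder):

$ export XZ_DEFAULTS="-T 0"
$ tar -cJf OUTPUT_FILE.tar.xz paths_to_archive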

Recompiling with replacement

If you build tar from source, then you can recompile with these parameters:

--with-gzip=pigz
--with-bzip2=lbzip2
--with-lzip=plzip

After recompiling tar with these options you can check the output of tar's help:

$ tar --help | grep "lbzip2\|plzip\|pigz"
  -j, --bzip2                filter the archive through lbzip2
      --lzip                 filter the archive through plzip
  -z, --gzip, --gunzip, --ungzip   filter the archive through pigz

user1985657 , Apr 28, 2015 at 20:41

This is indeed the best answer. I'll definitely rebuild my tar! – user1985657 Apr 28 '15 at 20:41

user1985657 , Apr 28, 2015 at 20:57

I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

This is a great and elaborate answer. It may be good to mention that multithreaded compression (e.g. with pigz ) is only enabled when it reads from the file. Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

selurvedu , May 26, 2016 at 22:13

Plus 1 for the xz option. It's the simplest, yet effective, approach. – selurvedu May 26 '16 at 22:13

panticz.de , Sep 1, 2014 at 15:02

You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/

einpoklum , Feb 11, 2017 at 15:59

A nice TL;DR for @MaximSuslov's answer . – einpoklum Feb 11 '17 at 15:59


If you want to have more flexibility with filenames and compression options, you can use:
find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec \
tar -P --transform='s@/my/path/@@g' -cf - {} + | \
pigz -9 -p 4 > myarchive.tar.gz
Step 1: find

find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec

This command will look for the files you want to archive, in this case any *.sql and *.log files under /my/path/ . Add as many -o -name "pattern" clauses as you want, but keep them inside the \( ... \) grouping so that -exec applies to every pattern.

-exec will execute the next command using the results of find : tar

Step 2: tar

tar -P --transform='s@/my/path/@@g' -cf - {} +

--transform is a simple string-replacement parameter. It strips the path of the files from the archive, so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory, as you would lose the benefit of find : all files of the directory would be included.

-P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

-cf - tells tar to create an archive and write it to standard output; the actual file name comes from the redirection at the end of the pipeline

{} + passes every file that find found as arguments to a single tar invocation

Step 3: pigz

pigz -9 -p 4

Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded web server, you probably don't want to use all available cores.
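For instance, one way to leave some headroom is to derive the thread count from nproc (GNU coreutils) instead of hard-coding it; a sketch, assuming the machine has more than two cores:

pigz -9 -p "$(( $(nproc) - 2 ))"      # use all but two of the available cores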

Step 4: archive name

> myarchive.tar.gz

Finally, redirect the compressed stream into the archive file.

[Nov 02, 2018] Working with data streams on the Linux command line by David Both The Linux Philosophy for SysAdmins And Everyone Who Wants To Be One by David Both. It is well worth $32.

Notable quotes:
"... This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface." ..."
Oct 30, 2018 | opensource.com
Author's note: Much of the content in this article is excerpted, with some significant edits to fit the Opensource.com article format, from Chapter 3: Data Streams, of my new book, The Linux Philosophy for SysAdmins .

Everything in Linux revolves around streams of data -- particularly text streams. Data streams are the raw materials upon which the GNU Utilities , the Linux core utilities, and many other command-line tools perform their work.

As its name implies, a data stream is a stream of data -- especially text data -- being passed from one file, device, or program to another using STDIO. This chapter introduces the use of pipes to connect streams of data from one utility program to another using STDIO. You will learn that the function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file.

The Linux Terminal

I use the term "transform" in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the sysadmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file.

The standard term, "filters," implies something with which I don't agree. By definition, a filter is a device or a tool that removes something, such as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don't want to breathe.

Although they do sometimes filter out unwanted data from a stream, I much prefer the term "transformers" because these utilities do so much more. They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and so much more. Feel free to use whichever term you prefer, but I prefer transformers. I expect that I am alone in this.

Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the sysadmin to perform some operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device connected to a computer.

The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. Many of the core utilities are transformer programs and use STDIO.

In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a sysadmin, your job is intimately associated with manipulating the creation and flow of these data streams. In this post, we will explore data streams -- what they are, how to create them, and a little bit about how to use them.

Text streams -- a universal interface

The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux.

" This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."

-- Doug McIlroy, Basics of the Unix Philosophy

STDIO

STDIO was developed by Ken Thompson as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device.

There are three STDIO data streams, each of which is automatically opened as a file at the startup of a program -- well, those programs that use STDIO. Each STDIO data stream is associated with a file handle, which is just a set of metadata that describes the attributes of the file. File handles 0, 1, and 2 are explicitly defined by convention and long practice as STDIN, STDOUT, and STDERR, respectively.

STDIN, File handle 0 , is standard input which is usually input from the keyboard. STDIN can be redirected from any file, including device files, instead of the keyboard. It is not common to need to redirect STDIN, but it can be done.

STDOUT, File handle 1 , is standard output which sends the data stream to the display by default. It is common to redirect STDOUT to a file or to pipe it to another program for further processing.

STDERR, File handle 2 . The data stream for STDERR is also usually sent to the display.

If STDOUT is redirected to a file, STDERR continues to be displayed on the screen. This ensures that when the data stream itself is not displayed on the terminal, STDERR is, so the user will see any errors resulting from execution of the program. STDERR can also be redirected to the same file as STDOUT, or passed on to the next transformer program in a pipeline.
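In bash, the usual redirection idioms for these cases look like the following (a quick sketch; somecommand and output.txt are placeholder names):

somecommand > output.txt            # STDOUT to a file, STDERR still on the terminal
somecommand > output.txt 2>&1       # STDERR to the same file as STDOUT
somecommand |& less                 # pipe both STDOUT and STDERR to the next program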

STDIO is implemented in the standard C library and exposed through the stdio.h header file, which can be included in the source code of programs so that it is compiled into the resulting executable.

Simple streams

You can perform the following experiments safely in the /tmp directory of your Linux host. As the root user, make /tmp the PWD, create a test directory, and then make the new directory the PWD.

# cd /tmp ; mkdir test ; cd test

Enter and run the following command line program to create some files with content on the drive. We use the dmesg command simply to provide data for the files to contain. The contents don't matter as much as just the fact that each file has some content.

# for I in 0 1 2 3 4 5 6 7 8 9 ; do dmesg > file$I.txt ; done

Verify that there are now at least 10 files in /tmp/test with the names file0.txt through file9.txt .

# ll
total 1320
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file0.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file1.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file2.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file3.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file4.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file5.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file6.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file7.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file8.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file9.txt

We have generated data streams using the dmesg command, which was redirected to a series of files. Most of the core utilities use STDIO as their output stream and those that generate data streams, rather than acting to transform the data stream in some way, can be used to create the data streams that we will use for our experiments. Data streams can be as short as one line or even a single character, and as long as needed.
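Even a single echo generates a stream that can be redirected the same way (a trivial illustration):

# a one-line data stream redirected to a file
echo "hello world" > file10.txt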

Exploring the hard drive

It is now time to do a little exploring. In this experiment, we will look at some of the filesystem structures.

Let's start with something simple. You should be at least somewhat familiar with the dd command. Officially known as "disk dump," many sysadmins call it "disk destroyer" for good reason. Many of us have inadvertently destroyed the contents of an entire hard drive or partition using the dd command. That is why we will hang out in the /tmp/test directory to perform some of these experiments.

Despite its reputation, dd can be quite useful in exploring various types of storage media, hard drives, and partitions. We will also use it as a tool to explore other aspects of Linux.

Log into a terminal session as root if you are not already. We first need to determine the device special file for your hard drive using the lsblk command.

[root@studentvm1 test]# lsblk -i
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
|-sda1 8:1 0 1G 0 part /boot
`-sda2 8:2 0 59G 0 part
|-fedora_studentvm1-pool00_tmeta 253:0 0 4M 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-pool00_tdata 253:1 0 2G 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-swap 253:4 0 10G 0 lvm [SWAP]
|-fedora_studentvm1-usr 253:5 0 15G 0 lvm /usr
|-fedora_studentvm1-home 253:7 0 2G 0 lvm /home
|-fedora_studentvm1-var 253:8 0 10G 0 lvm /var
`-fedora_studentvm1-tmp 253:9 0 5G 0 lvm /tmp
sr0 11:0 1 1024M 0 rom

We can see from this that there is only one hard drive on this host, that the device special file associated with it is /dev/sda , and that it has two partitions. The /dev/sda1 partition is the boot partition, and the /dev/sda2 partition contains a volume group on which the rest of the host's logical volumes have been created.

As root in the terminal session, use the dd command to view the boot record of the hard drive, assuming it is assigned to the /dev/sda device. The bs= argument is not what you might think; it simply specifies the block size, and the count= argument specifies the number of blocks to dump to STDIO. The if= argument specifies the source of the data stream, in this case, the /dev/sda device. Notice that we are not looking at the first block of the partition, we are looking at the very first block of the hard drive.

[root@studentvm1 test]# dd if=/dev/sda bs=512 count=1
�c�#�м���؎���|�#�#���!#��8#u
��#���u��#�#�#�|���t#�L#�#�|���#�����?t��pt#���y|1��؎м ��d|<�t#��R�|1��D#@�D��D#�##f�#\|f�f�#`|f�\
�D#p�B�#r�p�#�K`#�#��1��������#a`���#f��u#����f1�f�TCPAf�#f�#a�&Z|�#}�#�.}�4�3}�.�#��GRUB GeomHard DiskRead Error
�#��#�<u��ܻޮ�###��� ������ �_U�1+0 records in
1+0 records out
512 bytes copied, 4.3856e-05 s, 11.7 MB/s

This prints the text of the boot record, which is the first block on the disk -- any disk. In this case, there is information about the filesystem and, although it is unreadable because it is stored in binary format, the partition table. If this were a bootable device, stage 1 of GRUB or some other boot loader would be located in this sector. The last three lines contain data about the number of records and bytes processed.

Starting with the beginning of /dev/sda1 , let's look at a few blocks of data at a time to find what we want. The command is similar to the previous one, except that we have specified a few more blocks of data to view. You may have to specify fewer blocks if your terminal is not large enough to display all of the data at one time, or you can pipe the data through the less utility and use that to page through the data -- either way works. Remember, we are doing all of this as root user because non-root users do not have the required permissions.

Enter the same command as you did in the previous experiment, but increase the block count to be displayed to 100, as shown below, in order to show more data.

[root@studentvm1 test]# dd if=/dev/sda1 bs=512 count=100
##33��#:�##�� :o�[:o�[#��S�###�q[#
#<�#{5OZh�GJ͞#t�Ұ##boot/bootysimage/booC�dp��G'�*)�#A�##@
#�q[
�## ## ###�#���To=###<#8���#'#�###�#�����#�' �����#Xi �#��` qT���
<���
� r���� ]�#�#�##�##�##�#�##�##�##�#�##�##�#��#�#�##�#�##�##�#��#�#����# � �# �# �#

�#
�#
�#

�#
�#
�#

�#
�#
�#100+0 records in
100+0 records out
51200 bytes (51 kB, 50 KiB) copied, 0.00117615 s, 43.5 MB/s

Now try this command. I won't reproduce the entire data stream here because it would take up huge amounts of space. Use Ctrl-C to break out and stop the stream of data.

[root@studentvm1 test]# dd if=/dev/sda

This command produces a stream of data that is the complete content of the hard drive, /dev/sda , including the boot record, the partition table, and all of the partitions and their content. This data could be redirected to a file for use as a complete backup from which a bare metal recovery can be performed. It could also be sent directly to another hard drive to clone the first. But do not perform this particular experiment.

[root@studentvm1 test]# dd if=/dev/sda of=/dev/sdx
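For the record, redirecting that same stream to a file is the classic way to take a full image backup (a sketch only; the target path is hypothetical, and status=progress requires GNU coreutils 8.24 or later):

[root@studentvm1 test]# dd if=/dev/sda of=/mnt/backup/sda.img bs=4M status=progress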

You can see that the dd command can be very useful for exploring the structures of various types of filesystems, locating data on a defective storage device, and much more. It also produces a stream of data on which we can use the transformer utilities in order to modify or view.

The real point here is that dd , like so many Linux commands, produces a stream of data as its output. That data stream can be searched and manipulated in many ways using other tools. It can even be used for ghost-like backups or disk duplication.

Randomness

It turns out that randomness is a desirable thing in computers -- who knew? There are a number of reasons that sysadmins might want to generate a stream of random data. A stream of random data is sometimes useful to overwrite the contents of a complete partition, such as /dev/sda1 , or even the entire hard drive, as in /dev/sda .

Perform this experiment as a non-root user. Enter this command to print an unending stream of random data to STDIO.

[student@studentvm1 ~]$ cat /dev/urandom

Use Ctrl-C to break out and stop the stream of data. You may need to use Ctrl-C multiple times.

Random data is also used as the input seed to programs that generate random passwords and random data and numbers for use in scientific and statistical calculations. I will cover randomness and other interesting data sources in a bit more detail in Chapter 24: Everything is a file.

Pipe dreams

Pipes are critical to our ability to do the amazing things on the command line, so much so that I think it is important to recognize that they were invented by Douglas McIlroy during the early days of Unix (thanks, Doug!). The Princeton University website has a fragment of an interview with McIlroy in which he discusses the creation of the pipe and the beginnings of the Unix philosophy.

Notice the use of pipes in the simple command-line program shown next, which lists each logged-in user a single time, no matter how many logins they have active. Perform this experiment as the student user. Enter the command shown below:

[student@studentvm1 ~]$ w | tail -n +3 | awk '{print $1}' | sort | uniq
root
student
[student@studentvm1 ~]$

The results from this command produce two lines of data that show that the users root and student are both logged in. It does not show how many times each user is logged in. Your results will almost certainly differ from mine.

Pipes -- represented by the vertical bar ( | ) -- are the syntactical glue, the operator, that connects these command-line utilities together. Pipes allow the Standard Output of one command to be "piped," i.e., streamed directly to the Standard Input of the next command.

The |& operator can be used to pipe the STDERR along with STDOUT to STDIN of the next command. This is not always desirable, but it does offer flexibility in the ability to record the STDERR data stream for the purposes of problem determination.

A string of programs connected with pipes is called a pipeline, and the programs that use STDIO are referred to officially as filters, but I prefer the term "transformers."

Think about how this program would have to work if we could not pipe the data stream from one command to the next. The first command would perform its task on the data and then the output from that command would need to be saved in a file. The next command would have to read the stream of data from the intermediate file and perform its modification of the data stream, sending its own output to a new, temporary data file. The third command would have to take its data from the second temporary data file and perform its own manipulation of the data stream and then store the resulting data stream in yet another temporary file. At each step, the data file names would have to be transferred from one command to the next in some way.
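To make that concrete, here is what the one-liner from the earlier experiment would look like without pipes, parking every intermediate result in a file (a sketch; the /tmp/stepN.txt names are hypothetical):

w > /tmp/step1.txt
tail -n +3 /tmp/step1.txt > /tmp/step2.txt
awk '{print $1}' /tmp/step2.txt > /tmp/step3.txt
sort /tmp/step3.txt > /tmp/step4.txt
uniq /tmp/step4.txt
rm /tmp/step?.txt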

I cannot even stand to think about that because it is so complex. Remember: Simplicity rocks!

Building pipelines

When I am doing something new, solving a new problem, I usually do not just type in a complete Bash command pipeline from scratch off the top of my head. I usually start with just one or two commands in the pipeline and build from there by adding more commands to further process the data stream. This allows me to view the state of the data stream after each of the commands in the pipeline and make corrections as they are needed.

It is possible to build up very complex pipelines that can transform the data stream using many different utilities that work with STDIO.

Redirection

Redirection is the capability to redirect the STDOUT data stream of a program to a file instead of to the default target of the display. The "greater than" ( > ) character, aka "gt", is the syntactical symbol for redirection of STDOUT.

Redirecting the STDOUT of a command can be used to create a file containing the results from that command.

[student@studentvm1 ~]$ df -h > diskusage.txt

There is no output to the terminal from this command unless there is an error. This is because the STDOUT data stream is redirected to the file and STDERR is still directed to the STDOUT device, which is the display. You can view the contents of the file you just created using this next command:

[student@studentvm1 test]# cat diskusage.txt
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 1.2M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/fedora_studentvm1-root 2.0G 50M 1.8G 3% /
/dev/mapper/fedora_studentvm1-usr 15G 4.5G 9.5G 33% /usr
/dev/mapper/fedora_studentvm1-var 9.8G 1.1G 8.2G 12% /var
/dev/mapper/fedora_studentvm1-tmp 4.9G 21M 4.6G 1% /tmp
/dev/mapper/fedora_studentvm1-home 2.0G 7.2M 1.8G 1% /home
/dev/sda1 976M 221M 689M 25% /boot
tmpfs 395M 0 395M 0% /run/user/0
tmpfs 395M 12K 395M 1% /run/user/1000

When using the > symbol to redirect the data stream, the specified file is created if it does not already exist. If it does exist, the contents are overwritten by the data stream from the command. You can use double greater-than symbols, >>, to append the new data stream to any existing content in the file.

[student@studentvm1 ~]$ df -h >> diskusage.txt

You can use cat and/or less to view the diskusage.txt file in order to verify that the new data was appended to the end of the file.

The < (less than) symbol redirects data to the STDIN of the program. You might want to use this method to input data from a file to STDIN of a command that does not take a filename as an argument but that does use STDIN. Although input sources can be redirected to STDIN, such as a file that is used as input to grep, it is generally not necessary as grep also takes a filename as an argument to specify the input source. Most other commands also take a filename as an argument for their input source.
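A minimal illustration of both forms (using /etc/passwd as a handy input file):

grep root < /etc/passwd     # redirect a file to grep's STDIN...
grep root /etc/passwd       # ...or, more commonly, name the file as an argument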

Just grep'ing around

The grep command is used to select lines that match a specified pattern from a stream of data. grep is one of the most commonly used transformer utilities and can be used in some very creative and interesting ways. The grep command is one of the few that can correctly be called a filter because it does filter out all the lines of the data stream that you do not want; it leaves only the lines that you do want in the remaining data stream.

If the PWD is not the /tmp/test directory, make it so. Let's first create a stream of random data to store in a file. In this case, we want somewhat less random data, limited to printable characters. A good password-generator program can do this. The following program (you may have to install pwgen if it is not already present) creates a file that contains 50,000 passwords that are 80 characters long, using every printable character. Try it without redirecting to the random.txt file first to see what that looks like, and then do it once redirecting the output data stream to the file.

$ pwgen -sy 80 50000 > random.txt

Considering that there are so many passwords, it is very likely that some character strings in them are the same. First, cat the random.txt file, then use the grep command to locate some short, randomly selected strings from the last ten passwords on the screen. I saw the word "see" in one of those ten passwords, so my command looked like this: grep see random.txt , and you can try that, but you should also pick some strings of your own to check. Short strings of two to four characters work best.

$ grep see random.txt
R=p)'s/~0}wr~2(OqaL.S7DNyxlmO69`"12u]h@rp[D2%3}1b87+>Vk,;4a0hX]d7see;1%9|wMp6Yl.
bSM_mt_hPy|YZ1<TY/Hu5{g#mQ<u_(@8B5Vt?w%i-&C>NU@[;zV2-see)>(BSK~n5mmb9~h)yx{a&$_e
cjR1QWZwEgl48[3i-(^x9D=v)seeYT2R#M:>wDh?Tn$]HZU7}j!7bIiIr^cI.DI)W0D"'vZU@.Kxd1E1
z=tXcjVv^G\nW`,y=bED]d|7%s6iYT^a^Bvsee:v\UmWT02|P|nq%A*;+Ng[$S%*s)-ls"dUfo|0P5+n

Summary

It is the use of pipes and redirection that allows many of the amazing and powerful tasks that can be performed with data streams on the Linux command line. It is pipes that transport STDIO data streams from one program or file to another. The ability to pipe streams of data through one or more transformer programs supports powerful and flexible manipulation of data in those streams.

Each of the programs in the pipelines demonstrated in the experiments is small, and each does one thing well. They are also transformers; that is, they take Standard Input, process it in some way, and then send the result to Standard Output. Implementation of these programs as transformers to send processed data streams from their own Standard Output to the Standard Input of the other programs is complementary to, and necessary for, the implementation of pipes as a Linux tool.

STDIO is nothing more than streams of data. This data can be almost anything from the output of a command to list the files in a directory, or an unending stream of data from a special device like /dev/urandom , or even a stream that contains all of the raw data from a hard drive or a partition.

Any device on a Linux computer can be treated like a data stream. You can use ordinary tools like dd and cat to dump data from a device into a STDIO data stream that can be processed using other ordinary Linux tools.


David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for...

[Oct 29, 2018] Getting all the matches with 'grep -f' option

Perverted example, but interesting question.
Oct 29, 2018 | stackoverflow.com

Arturo ,Mar 24, 2017 at 8:59

I would like to find all the matches of the text I have in one file ('file1.txt') that are found in another file ('file2.txt'), using the grep option -f, which tells grep to read the expressions to be found from a file.

'file1.txt'

a

a

'file2.txt'

a

When I run the command:

grep -f file1.txt file2.txt -w

I get the output 'a' only once; instead I would like to get it twice, because it occurs twice in my 'file1.txt' file. Is there a way to tell grep (or any other unix/linux tool) to output a match for each line it reads? Thanks in advance. Arturo

RomanPerekhrest ,Mar 24, 2017 at 9:02

the matches of the text - some exact text? should it compare line to line? – RomanPerekhrest Mar 24 '17 at 9:02

Arturo ,Mar 24, 2017 at 9:04

Yes it contains exact match. I added the -w options, following your input. Yes, it is a comparison line by line. – Arturo Mar 24 '17 at 9:04

Remko ,Mar 24, 2017 at 9:19

Grep works as designed, giving only one output line. You could use another approach:
while IFS= read -r pattern; do
    grep -e "$pattern" file2.txt   # quote the pattern so spaces and globs survive
done < file1.txt

This would use every line in file1.txt as a pattern for the grep, thus resulting in the output you're looking for.
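A quick reproduction of the question's example using that loop (a sketch that recreates the two sample files; each line of file1.txt fires its own grep, so the repeated pattern matches twice):

$ printf 'a\na\n' > file1.txt
$ printf 'a\n' > file2.txt
$ while IFS= read -r pattern; do grep -w -e "$pattern" file2.txt; done < file1.txt
a
a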

Arturo ,Mar 24, 2017 at 9:30

That did the trick!. Thank you. And it is even much faster than my previous grep command. – Arturo Mar 24 '17 at 9:30

ar7 ,Mar 24, 2017 at 9:12

When you use
grep -f pattern.txt file.txt

It means match the pattern found in pattern.txt in the file file.txt .

It is giving you only one output because that is all there is in the second file.

Try interchanging the files,

grep -f file2.txt file1.txt -w

Does this answer your question?

Arturo ,Mar 24, 2017 at 9:17

I understand that, but still I would like to find a way to print a match each time a pattern (even a repeated one) from 'pattern.txt' is found in 'file.txt'. Even a tool or a script rather then 'grep -f' would suffice. – Arturo Mar 24 '17 at 9:17

[Oct 23, 2018] To switch from vertical split to horizontal split fast in Vim

Nov 24, 2013 | stackoverflow.com

ДМИТРИЙ МАЛИКОВ, Nov 24, 2013 at 7:55

How can you switch your current windows from horizontal split to vertical split and vice versa in Vim?

I did that a moment ago by accident but I cannot find the key again.

Mark Rushakoff

Vim mailing list says (re-formatted for better readability):

  • To change two vertically split windows to horizontal split: Ctrl - W t Ctrl - W K
  • Horizontally to vertically: Ctrl - W t Ctrl - W H

Explanations:

  • Ctrl - W t -- makes the first (topleft) window current
  • Ctrl - W K -- moves the current window to full-width at the very top
  • Ctrl - W H -- moves the current window to full-height at far left

Note that the t is lowercase, and the K and H are uppercase.

Also, with only two windows, it seems like you can drop the Ctrl - W t part because if you're already in one of only two windows, what's the point of making it current?

Too much php Aug 13 '09 at 2:17

So if you have two windows split horizontally, and you are in the lower window, you just use ^WL

Alex Hart Dec 7 '12 at 14:10

There are a ton of interesting ^w commands (b, w, etc)

holms Feb 28 '13 at 9:07

somehow doesn't work for me.. =/ –

Lambart Mar 26 at 19:34

Just toggle your NERDTree panel closed before 'rotating' the splits, then toggle it back open. :NERDTreeToggle (I have it mapped to a function key for convenience).

xxx Feb 19 '13 at 20:26

^w followed by capital H , J , K or L will move the current window to the far left , bottom , top or right respectively like normal cursor navigation.

The lower case equivalents move focus instead of moving the window.

respectTheCode, Jul 21 '13 at 9:55

Wow, cool! Thanks! :-) – infous Feb 6 at 8:46

It's much better since users use hjkl to move between buffers. – Afshin Mehrabani

In VIM, take a look at the following to see different alternatives for what you might have done:

:help opening-window

For instance:

Ctrl - W s
Ctrl - W o
Ctrl - W v
Ctrl - W o
Ctrl - W s

Anon, Apr 29 at 21:45

The command ^W-o is great! I did not know it. – Masi Aug 13 '09 at 2:20

The following ex commands will (re-)split any number of windows:

If there are hidden buffers, issuing these commands will also make the hidden buffers visible.

Mark Oct 22 at 19:31

When you have two or more windows open horizontally or vertically and want to switch them all to the other orientation, you can use the following:

[Oct 22, 2018] linux - If I rm -rf a symlink will the data the link points to get erased, too?

Notable quotes:
"... Put it in another words, those symlink-files will be deleted. The files they "point"/"link" to will not be touch. ..."
Oct 22, 2018 | unix.stackexchange.com

user4951 ,Jan 25, 2013 at 2:40

This is the contents of the /home3 directory on my system:
./   backup/    hearsttr@  lost+found/  randomvi@  sexsmovi@
../  freemark@  investgr@  nudenude@    romanced@  wallpape@

I want to clean this up but I am worried because of the symlinks, which point to another drive.

If I say rm -rf /home3 will it delete the other drive?

John Sui

rm -rf /home3 will delete all files and directories within home3, and home3 itself, including the symlink files, but it will not "follow" (dereference) those symlinks.

Put another way, the symlink files themselves will be deleted. The files they "point"/"link" to will not be touched.

[Oct 22, 2018] Does rm -rf follow symbolic links?

Jan 25, 2012 | superuser.com
I have a directory like this:
$ ls -l
total 899166
drwxr-xr-x 12 me scicomp       324 Jan 24 13:47 data
-rw-r--r--  1 me scicomp     84188 Jan 24 13:47 lod-thin-1.000000-0.010000-0.030000.rda
drwxr-xr-x  2 me scicomp       808 Jan 24 13:47 log
lrwxrwxrwx  1 me scicomp        17 Jan 25 09:41 msg -> /home/me/msg

And I want to remove it using rm -r .

However I'm scared rm -r will follow the symlink and delete everything in that directory (which is very bad).

I can't find anything about this in the man pages. What would be the exact behavior of running rm -rf from a directory above this one?

LordDoskias , Jan 25, 2012 at 16:43

How hard is it to create a dummy dir with a symlink pointing to a dummy file and execute the scenario? Then you will know for sure how it works! –

hakre ,Feb 4, 2015 at 13:09

X-Ref: If I rm -rf a symlink will the data the link points to get erased, too? ; Deleting a folder that contains symlinks – hakre Feb 4 '15 at 13:09

Susam Pal ,Jan 25, 2012 at 16:47

Example 1: Deleting a directory containing a soft link to another directory.
susam@nifty:~/so$ mkdir foo bar
susam@nifty:~/so$ touch bar/a.txt
susam@nifty:~/so$ ln -s /home/susam/so/bar/ foo/baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── foo
    └── baz -> /home/susam/so/bar/

3 directories, 1 file
susam@nifty:~/so$ rm -r foo
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

So, we see that the target of the soft-link survives.

Example 2: Deleting a soft link to a directory

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

Only, the soft link is deleted. The target of the soft-link survives.

Example 3: Attempting to delete the target of a soft-link

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz/
rm: cannot remove 'baz/': Not a directory
susam@nifty:~/so$ tree
.
├── bar
└── baz -> /home/susam/so/bar

2 directories, 0 files

The file in the target of the symbolic link does not survive: with the trailing slash, rm -r baz/ recursed into the target directory and removed a.txt, even though it could not remove baz itself.

The above experiments were done on a Debian GNU/Linux 9.0 (stretch) system.

Wyrmwood ,Oct 30, 2014 at 20:36

rm -rf baz/* will remove the contents – Wyrmwood Oct 30 '14 at 20:36

Buttle Butkus ,Jan 12, 2016 at 0:35

Yes, if you do rm -rf [symlink], then the contents of the original directory will be obliterated! Be very careful. – Buttle Butkus Jan 12 '16 at 0:35

frnknstn ,Sep 11, 2017 at 10:22

Your example 3 is incorrect! On each system I have tried, the file a.txt will be removed in that scenario. – frnknstn Sep 11 '17 at 10:22

Susam Pal ,Sep 11, 2017 at 15:20

@frnknstn You are right. I see the same behaviour you mention on my latest Debian system. I don't remember on which version of Debian I performed the earlier experiments. In my earlier experiments on an older version of Debian, either a.txt must have survived in the third example or I must have made an error in my experiment. I have updated the answer with the current behaviour I observe on Debian 9 and this behaviour is consistent with what you mention. – Susam Pal Sep 11 '17 at 15:20

Ken Simon ,Jan 25, 2012 at 16:43

Your /home/me/msg directory will be safe if you rm -rf the directory from which you ran ls. Only the symlink itself will be removed, not the directory it points to.

The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself.

> ,Jan 25, 2012 at 16:54

"The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself." - I don't find this to be true. See the third example in my response below. – Susam Pal Jan 25 '12 at 16:54

Andrew Crabb ,Nov 26, 2013 at 21:52

I get the same result as @Susam ('rm -r symlink/' does not delete the target of symlink), which I am pleased about as it would be a very easy mistake to make. – Andrew Crabb Nov 26 '13 at 21:52


rm should remove files and directories. If the file is a symbolic link, the link is removed, not the target. rm does not interpret (follow) a symbolic link. Consider, for example, the behavior when deleting a 'broken' link: rm exits with 0, not non-zero, because removing the link itself succeeds.

[Oct 22, 2018] Move selection to a separate file

Oct 22, 2018 | superuser.com

greg0ire ,Jan 23, 2013 at 13:29

With vim, how can I move a piece of text to a new file? For the moment, I do this: select the text, write it to the new file with :w new_file, select it again, and delete it.

Is there a more efficient way to do this?

Before

a.txt

sometext
some other text
some other other text
end
After

a.txt

sometext
end

b.txt

some other text
some other other text

Ingo Karkat, Jan 23, 2013 at 15:20

How about these custom commands:
:command! -bang -range -nargs=1 -complete=file MoveWrite  <line1>,<line2>write<bang> <args> | <line1>,<line2>delete _
:command! -bang -range -nargs=1 -complete=file MoveAppend <line1>,<line2>write<bang> >> <args> | <line1>,<line2>delete _

greg0ire ,Jan 23, 2013 at 15:27

This is very ugly, but hey, it seems to do in one step exactly what I asked for (I tried). +1, and accepted. I was looking for a native way to do this quickly but since there does not seem to be one, yours will do just fine. Thanks! – greg0ire Jan 23 '13 at 15:27

Ingo Karkat ,Jan 23, 2013 at 16:15

Beauty is in the eye of the beholder. I find this pretty elegant; you only need to type it once (into your .vimrc). – Ingo Karkat Jan 23 '13 at 16:15

greg0ire ,Jan 23, 2013 at 16:21

You're right, "very ugly" shoud have been "very unfamiliar". Your command is very handy, and I think I definitely going to carve it in my .vimrc – greg0ire Jan 23 '13 at 16:21

embedded.kyle ,Jan 23, 2013 at 14:08

By "move a piece of text to a new file" I assume you mean cut that piece of text from the current file and create a new file containing only that text.

Various examples:

The above only copies the text and creates a new file containing that text. You will then need to delete afterward.

This can be done using the same range and the d command:

Or by using dd for the single line case.

If you instead select the text using visual mode, and then hit : while the text is selected, you will see the following on the command line:

:'<,'>

Which indicates the selected text. You can then expand the command to:

:'<,'>w >> old_file

Which will append the text to an existing file. Then delete as above.


One liner:

:2,3 d | new +put! "

The breakdown: :2,3 d cuts lines 2 through 3 into the unnamed register; new opens a new buffer in a split window; and +put! " pastes the contents of the unnamed register into that new buffer.

greg0ire ,Jan 23, 2013 at 14:09

Your assumption is right. This looks good, I'm going to test. Could you explain 2. a bit more? I'm not very familiar with ranges. EDIT: If I try this on the second line, it writes the first line to the other file, not the second line. – greg0ire Jan 23 '13 at 14:09

embedded.kyle ,Jan 23, 2013 at 14:16

@greg0ire I got that a bit backward, I'll edit to better explain – embedded.kyle Jan 23 '13 at 14:16

greg0ire ,Jan 23, 2013 at 14:18

I added an example to make my question clearer. – greg0ire Jan 23 '13 at 14:18

embedded.kyle ,Jan 23, 2013 at 14:22

@greg0ire I corrected my answer. It's still two steps. The first copies and writes. The second deletes. – embedded.kyle Jan 23 '13 at 14:22

greg0ire ,Jan 23, 2013 at 14:41

Ok, if I understand well, the trick is to use ranges to select and write in the same command. That's very similar to what I did. +1 for the detailed explanation, but I don't think this is more efficient, since the trick with hitting ':' is what I do for the moment. – greg0ire Jan 23 '13 at 14:41

Xyon ,Jan 23, 2013 at 13:32

Select the text in visual mode, then press y to "yank" it into the buffer (copy) or d to "delete" it into the buffer (cut).

Then you can :split <new file name> to split your vim window up, and press p to paste in the yanked text. Write the file as normal.

To close the split again, run :q in the split you want to close.

greg0ire ,Jan 23, 2013 at 13:42

I have 4 steps for the moment: select, write, select, delete. With your method, I have 6 steps: select, delete, split, paste, write, close. I asked for something more efficient :P – greg0ire Jan 23 '13 at 13:42

Xyon ,Jan 23, 2013 at 13:44

Well, if you pass the split :x instead, you can combine writing and closing into one and make it five steps. :P – Xyon Jan 23 '13 at 13:44

greg0ire ,Jan 23, 2013 at 13:46

That's better, but 5 still > 4 :P – greg0ire Jan 23 '13 at 13:46


Based on @embedded.kyle's answer and this Q&A , I ended up with this one liner to append a selection to a file and delete from current file. After selecting some lines with Shift+V , hit : and run:
'<,'>w >> test | normal gvd

The first part appends selected lines. The second command enters normal mode and runs gvd to select the last selection and then deletes.
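For non-interactive use, the same range-write-then-delete idea can be driven from the shell with ex (vim's ex mode). A sketch, assuming the a.txt and b.txt names from the question; the ! in w! >> lets the write create b.txt if it does not yet exist:

# move lines 2-3 of a.txt into b.txt without opening an editor
ex -s a.txt <<'EOF'
2,3w! >> b.txt
2,3d
x
EOF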

[Oct 21, 2018] Common visual block selection scenarios

Notable quotes:
"... column oriented ..."
Oct 21, 2018 | stackoverflow.com
You are talking about text selecting and copying; I think that you should have a look at the Vim Visual Mode .

In the visual mode, you are able to select text using Vim commands, then you can do whatever you want with the selection.

Consider the following common scenarios:

You need to select to the next matching parenthesis. You could do: v% (start visual mode, then jump to the matching parenthesis).

You want to select text between quotes: vi" (or vi' for single quotes).

You want to select a curly brace block (very common on C-style languages): vi{ (also written viB ).

You want to select the entire file: ggVG .

Visual block selection is another really useful feature; it allows you to select a rectangular area of text. You just have to press Ctrl - V to start it, then select the text block you want and perform any type of operation such as yank, delete, paste, edit, etc. It's great for editing column-oriented text.

[Oct 21, 2018] Moving lines between split windows in vim

Notable quotes:
"... "send the line I am on (or the test I selected) to the other window" ..."
Oct 21, 2018 | superuser.com

brad ,Nov 24, 2015 at 12:28

I have two files, say a.txt and b.txt , in the same session of vim and I split the screen so I have file a.txt in the upper window and b.txt in the lower window.

I want to move lines here and there from a.txt to b.txt : I select a line with Shift + v , then I move to b.txt in the lower window with Ctrl + w , paste with p , get back to a.txt with Ctrl + w and I can repeat the operation when I get to another line I want to move.

My question: is there a quicker way to tell vim "send the line I am on (or the text I selected) to the other window"?

Chong ,Nov 24, 2015 at 12:33

Use q macro? q[some_letter] [whatever operations] q , then call the macro with [times to be called]@qChong Nov 24 '15 at 12:33

Anthony Geoghegan ,Nov 24, 2015 at 13:00

I presume that you're deleting the line that you've selected in a.txt . If not, you'd be pasting something else into b.txt . If so, there's no need to select the line first. – Anthony Geoghegan Nov 24 '15 at 13:00

Anthony Geoghegan ,Nov 24, 2015 at 13:17

This sounds like a good use case for a macro. Macros are commands that can be recorded and stored in a Vim register. Each register is identified by a letter from a to z.

Recording

From Recording keys for repeated jobs - Vim Tips

To start recording, press q in Normal mode followed by a letter (a to z). That starts recording keystrokes to the specified register. Vim displays "recording" in the status line. Type any Normal mode commands, or enter Insert mode and type text. To stop recording, again press q while in Normal mode.

For this particular macro, I chose the m (for move) register to store it.

I pressed qm to record the following commands: dd (cut the current line), Ctrl + W j (move to the lower window), p (paste the line), and Ctrl + W k (move back to the upper window).

When I typed q to finish recording the macro, the contents of the m register were:

dd^Wjp^Wk
Usage

To run the macro, type @m in the source window; prepend a count to run it several times in a row (e.g. 5@m ), and use @@ to repeat the last macro.

brad ,Nov 24, 2015 at 14:26

I asked to see if there is a command unknown to me that does the job: it seems there is none. In absence of such a command, this can be a good solution. – brad Nov 24 '15 at 14:26

romainl ,Nov 26, 2015 at 9:54

@brad, you can find all the commands available to you in the documentation. If it's not there, it doesn't exist; no need to ask random strangers. – romainl Nov 26 '15 at 9:54

brad ,Nov 26, 2015 at 10:17

@romainl, yes, I know this, but vim documentation is really huge and, although it doesn't scare me, there is always the possibility of missing something. Moreover, it could also be that you can obtain the effect using a combination of two commands, and in that case it would hardly be documented. – brad Nov 26 '15 at 10:17

[Oct 21, 2018] How to move around buffers in vim?

Oct 21, 2018 | stackoverflow.com

user3721893 ,Jul 23, 2014 at 5:43

I normally work with more than 5 files at a time. I use buffers to open different files. I use commands such as :buf file1, :buf file2 etc. Is there a faster way to move to different files?

eckes ,Jul 23, 2014 at 5:49

What I use:

And have a short look at :he buffer

And the wiki entry on Easier Buffer Switching on the Vim Wiki: http://vim.wikia.com/wiki/Easier_buffer_switching

SO already has a question regarding yours: How do you prefer to switch between buffers in Vim?

romainl ,Jul 23, 2014 at 6:13

A few mappings can make your life a lot easier.

This one lists your buffers and prompts you for a number:

nnoremap gb :buffers<CR>:buffer<Space>

This one lists your buffers in the "wildmenu". Depends on the 'wildcharm' option as well as 'wildmenu' and 'wildmode' :

nnoremap <leader>b :buffer <C-z>

These ones allow you to cycle between all your buffers without much thinking:

nnoremap <PageUp>   :bprevious<CR>
nnoremap <PageDown> :bnext<CR>

Also, don't forget <C-^> which allows you to alternate between two buffers.

mikew ,Jul 23, 2014 at 6:38

Once the buffers are already open, you can just type :b partial_filename to switch

So if :ls shows that I have my ~/.vimrc open, then I can just type :b vimr or :b rc to switch to that buffer

Brady Trainor ,Jul 25, 2014 at 22:13

Below I describe some excerpts from sections of my .vimrc . It includes mapping the leader key, setting wilds tab completion, and finally my buffer nav key choices (all mostly inspired by folks on the interweb, including romainl). Edit: Then I ramble on about my shortcuts for windows and tabs.
" easier default keys {{{1

let mapleader=','
nnoremap <leader>2 :@"<CR>

The leader key is a prefix key for mostly user-defined key commands (some plugins also use it). The default is \ , but many people suggest the easier to reach , .

The second line there is a command to @ execute from the " clipboard, in case you'd like to quickly try out various key bindings (without relying on :so % ). (My mnemonic is that Shift - 2 is @ .)

" wilds {{{1

set wildmenu wildmode=list:full
set wildcharm=<C-z>
set wildignore+=*~ wildignorecase

For built-in completion, wildmenu is probably the part that shows up yellow on your Vim when using tab completion on command-line. wildmode is set to a comma-separated list, each coming up in turn on each tab completion (that is, my list is simply one element, list:full ). list shows rows and columns of candidates. full 's meaning includes maintaining existence of the wildmenu . wildcharm is the way to include Tab presses in your macros. The *~ is for my use in :edit and :find commands.

" nav keys {{{1
" windows, buffers and tabs {{{2
" buffers {{{3

nnoremap <leader>bb :b <C-z><S-Tab>
nnoremap <leader>bh :ls!<CR>:b<Space>
nnoremap <leader>bw :ls!<CR>:bw<Space>
nnoremap <leader>bt :TSelectBuffer<CR>
nnoremap <leader>be :BufExplorer<CR>
nnoremap <leader>bs :BufExplorerHorizontalSplit<CR>
nnoremap <leader>bv :BufExplorerVerticalSplit<CR>
nnoremap <leader>3 :e#<CR>
nmap <C-n> :bn<cr>
nmap <C-p> :bp<cr>

The ,3 is for switching between the "two" last buffers (easier to reach than the built-in Ctrl - 6 ). The mnemonic is that Shift - 3 is # , and # is the register symbol for the alternate (last) buffer. (See :marks .)

,bh is to select from hidden buffers ( ! ).

,bw is to bwipeout buffers by number or name. For instance, you can wipeout several while looking at the list, with ,bw 1 3 4 8 10 <CR> . Note that wipeout is more destructive than :bdelete . They have their pros and cons. For instance, :bdelete leaves the buffer in the hidden list, while :bwipeout removes global marks (see :help marks , and the description of uppercase marks).

I haven't settled on these keybindings, I would sort of prefer that my ,bb was simply ,b (simply defining while leaving the others defined makes Vim pause to see if you'll enter more).

Those shortcuts for :BufExplorer are actually the defaults for that plugin, but I have it written out so I can change them if I want to start using ,b without a hang.

You didn't ask for this:

If you still find Vim buffers a little awkward to use, try to combine the functionality with tabs and windows (until you get more comfortable?).

" windows {{{3

" window nav
nnoremap <leader>w <C-w>
nnoremap <M-h> <C-w>h
nnoremap <M-j> <C-w>j
nnoremap <M-k> <C-w>k
nnoremap <M-l> <C-w>l
" resize window
nnoremap <C-h> <C-w><
nnoremap <C-j> <C-w>+
nnoremap <C-k> <C-w>-
nnoremap <C-l> <C-w>>

Notice how nice ,w is for a prefix. Also, I reserve Ctrl key for resizing, because Alt ( M- ) is hard to realize in all environments, and I don't have a better way to resize. I'm fine using ,w to switch windows.

" tabs {{{3

nnoremap <leader>t :tab
nnoremap <M-n> :tabn<cr>
nnoremap <M-p> :tabp<cr>
nnoremap <C-Tab> :tabn<cr>
nnoremap <C-S-Tab> :tabp<cr>
nnoremap tn :tabe<CR>
nnoremap te :tabe<Space><C-z><S-Tab>
nnoremap tf :tabf<Space>
nnoremap tc :tabc<CR>
nnoremap to :tabo<CR>
nnoremap tm :tabm<CR>
nnoremap ts :tabs<CR>

nnoremap th :tabr<CR>
nnoremap tj :tabn<CR>
nnoremap tk :tabp<CR>
nnoremap tl :tabl<CR>

" or, it may make more sense to use
" nnoremap th :tabp<CR>
" nnoremap tj :tabl<CR>
" nnoremap tk :tabr<CR>
" nnoremap tl :tabn<CR>

In summary of my window and tabs keys, I can navigate both of them with Alt , which is actually pretty easy to reach. In other words:

" (modifier) key choice explanation {{{3
"
"       KEYS        CTRL                  ALT            
"       hjkl        resize windows        switch windows        
"       np          switch buffer         switch tab      
"
" (resize windows is hard to do otherwise, so we use ctrl which works across
" more environments. i can use ',w' for windowcmds o.w.. alt is comfortable
" enough for fast and gui nav in tabs and windows. we use np for navs that 
" are more linear, hjkl for navs that are more planar.) 
"

This way, if the Alt is working, you can actually hold it down while you find your "open" buffer pretty quickly, amongst the tabs and windows.


There are many ways to solve this. The best is the one that WORKS for YOU. You have lots of fuzzy-match plugins that help you navigate. The 2 things that impress me most are

1) CtrlP or Unite's fuzzy buffer search

2) LustyExplorer and/or LustyJuggler

And the simplest :

:map <F5> :ls<CR>:e #

Pressing F5 lists all buffers and leaves :e # on the command line; just type a buffer number and press Enter to switch.

[Oct 21, 2018] What is vim recording and how can it be disabled?

Oct 21, 2018 | stackoverflow.com

vehomzzz, Oct 6, 2009 at 20:03

I keep seeing the recording message at the bottom of my gvim 7.2 window.

What is it and how do I turn it off?

Joey Adams, Aug 17, 2010 at 16:26

To turn off vim recording for good, add map q <Nop> to your .vimrc file. – Joey Adams Aug 17 '10 at 16:26

0xc0de, Aug 12, 2016 at 9:04

I can't believe you want to turn recording off! I would show a really annoying popup 'Are you sure?' if one asks to turn it off (or probably would like to give options like the Windows 10 update gives). – 0xc0de Aug 12 '16 at 9:04

yogsototh, Oct 6, 2009 at 20:08

You start recording by q<letter> and you can end it by typing q again.

Recording is a really useful feature of Vim.

It records everything you type. You can then replay it simply by typing @<letter> . Record search, movement, replacement...

One of the best feature of Vim IMHO.

Cascabel, Oct 6, 2009 at 20:13

As seen other places, it's q followed by a register. A really cool (and possibly non-intuitive) part of this is that these are the same registers used by things like delete, yank, and put. This means that you can yank text from the editor into a register, then execute it as a command. – Cascabel Oct 6 '09 at 20:13

Tolga E, Aug 17, 2013 at 3:07

One more thing to note is that you can type a number before the @ to replay the recording that many times; e.g., 100@<letter> will play your actions 100 times. – Tolga E Aug 17 '13 at 3:07

anisoptera, Dec 4, 2014 at 9:43

You could add it afterward, by editing the register with put/yank. But I don't know why you'd want to turn recording on or off as part of a macro. ('q' doesn't affect anything when typed in insert mode.) – anisoptera Dec 4 '14 at 9:43

L0j1k, Jul 16, 2015 at 21:08

Vim is so freakin' cool, man. – L0j1k Jul 16 '15 at 21:08

Cascabel, Jul 29, 2015 at 14:52

@Wade " - it's called the default register. – Cascabel Jul 29 '15 at 14:52

ephemient, Oct 6, 2009 at 20:17

Type :h recording to learn more.
                           *q* *recording*
q{0-9a-zA-Z"}           Record typed characters into register {0-9a-zA-Z"}
                        (uppercase to append).  The 'q' command is disabled
                        while executing a register, and it doesn't work inside
                        a mapping.  {Vi: no recording}

q                       Stops recording.  (Implementation note: The 'q' that
                        stops recording is not stored in the register, unless
                        it was the result of a mapping)  {Vi: no recording}


                                                        *@*
@{0-9a-z".=*}           Execute the contents of register {0-9a-z".=*} [count]
                        times.  Note that register '%' (name of the current
                        file) and '#' (name of the alternate file) cannot be
                        used.  For "@=" you are prompted to enter an
                        expression.  The result of the expression is then
                        executed.  See also |@:|.  {Vi: only named registers}

Tim Henigan, Oct 6, 2009 at 20:07

It sounds like you have macro recording turned on. To shut it off, press q .

Refer to " :help recording " for further information.


mitchus, Feb 13, 2015 at 14:16

Typing q starts macro recording, and the recording stops when the user hits q again.

As Joey Adams mentioned, to disable recording, add the following line to .vimrc in your home directory:

map q <Nop>

n611x007, Oct 4, 2015 at 7:16

only answer about "how to turn off" part of the question. Well, it makes recording inaccessible, effectively turning it off - at least noone expects vi to have a separate thread for this code, I guess, including me. – n611x007 Oct 4 '15 at 7:16

JeffH, Oct 6, 2009 at 20:10

As others have said, it's macro recording, and you turn it off with q. Here's a nice article about how-to and why it's useful.

John Millikin, Oct 6, 2009 at 20:06

It means you're in "record macro" mode. This mode is entered by typing q followed by a register name, and can be exited by typing q again.

ephemient, Oct 6, 2009 at 20:08

It's actually entered by typing q followed by any register name, which is 0-9, a-z, A-Z, and ". – ephemient Oct 6 '09 at 20:08

Cascabel, Oct 6, 2009 at 20:08

Actually, it's q{0-9a-zA-Z"} - you can record a macro into any register (named by digit, letter, "). In case you actually want to use it... you execute the contents of a register with @<register>. See :help q and :help @ if you're interested in using it. – Cascabel Oct 6 '09 at 20:08

[Oct 21, 2018] vim - how to move a block or column of text - Stack Overflow

Oct 21, 2018 | stackoverflow.com



David.Chu.ca ,Mar 6, 2009 at 20:47

I have the following text as a simple case:

...
abc xxx 123 456
wer xxx 345 678676
...

what I need to move a block of text xxx to another location:

...
abc 123 xxx 456
wer 345 xxx 678676
...

I think I can use visual mode to select a column of text; what are the other commands to move the block to another location?

Paul ,Mar 6, 2009 at 20:52

You should use blockwise visual mode ( Ctrl + v ). Then d to delete block, p or P to paste block.
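
As a sketch, one keystroke sequence that performs exactly the rearrangement from the question (standard Vim, no plugins; assuming the cursor starts anywhere on the abc line):

0w          move to the first x of xxx
<C-v>j3l    blockwise-select the xxx column plus its trailing space, on both lines
d           delete the block into the unnamed register
w           move to the start of 456
P           paste the block back in before the cursor, on both lines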

Klinger ,Mar 6, 2009 at 20:53

Try the link .


Marking text (visual mode)

Visual commands

Cut and Paste

Kemin Zhou ,Nov 6, 2015 at 23:59

One of the few useful commands I learned at the beginning of learning Vim is :1,3 mo 5 . This means: move text lines 1 through 3 to below line 5.

Júda Ronén ,Jan 18, 2017 at 21:20

And you can select the lines in visual mode, then press : to get :'<,'> (equivalent to the :1,3 part in your answer), and add mo N . If you want to move a single line, just :mo N . If you are really lazy, you can omit the space (e.g. :mo5 ). Use marks with mo '{a-zA-Z} . – Júda Ronén Jan 18 '17 at 21:20

Miles ,Jun 29, 2017 at 23:44

just m also works – Miles Jun 29 '17 at 23:44
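
A few more variations on :move (abbreviated :m or :mo ), all standard ex syntax:

:1,3m5      move lines 1-3 to below line 5
:m0         move the current line to the top of the file
:m$         move the current line to the end of the file
:'<,'>m'a   move the visually selected lines to below the line holding mark a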

John Ellinwood ,Mar 6, 2009 at 20:52

  1. In VIM, press Ctrl + V to go in Visual Block mode
  2. Select the required columns with your arrow keys and press x to cut them in the buffer.
  3. Move cursor to row 1 column 9 and press P (that's capital P) in normal mode.
  4. Press Ctrl + Shift + b to get in and out of it.

SergioAraujo ,Jan 4 at 21:49

Using an external command, awk, to filter the whole buffer (note: no file name after the command, so awk reads the buffer itself rather than a file on disk):

:%!awk '{print $1,$3,$2,$4}'

With pure vim

:%s,\v(\w+) (\w+) (\w+) (\w+),\1 \3 \2 \4,g

Another vim solution using global command

:g/./normal wdwwP
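
That last command unpacks as follows (using the question's sample text):

g/./     on every non-empty line...
normal   ...run the rest as normal-mode keystrokes
w        move to the second word (xxx)
dw       delete it together with its trailing space
w        skip over the following word (123)
P        paste the deleted text back in before the cursor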

[Oct 21, 2018] Camtasia Studio 8

Notable quotes:
"... What did you use to make the gif? ..."
"... That was done with Camtasia Studio 8. Very easy actually. ..."
"... @Zoltán you can use LiceCap, which is small size – ..."
"... GifCam is a simple to use tool, too: blog.bahraniapps.com/gifcam ..."
Oct 21, 2018 | stackoverflow.com



Adam ,Feb 7, 2014 at 22:20

Doesn't get any simpler than this! From normal mode:

yy

then move to the line you want to paste at and

p

Zoltán ,Jul 2, 2014 at 7:42

What did you use to make the gif? – Zoltán Jul 2 '14 at 7:42

Adam ,Sep 19, 2014 at 20:15

That was done with Camtasia Studio 8. Very easy actually. – Adam Sep 19 '14 at 20:15

onmyway133 ,Feb 23, 2016 at 15:29

@Zoltán you can use LiceCap, which is small size – onmyway133 Feb 23 '16 at 15:29

Jared ,Jun 2, 2016 at 13:44

GifCam is a simple-to-use tool, too: blog.bahraniapps.com/gifcam – Jared Jun 2 '16 at 13:44

[Oct 21, 2018] Favorite (G)Vim plugins/scripts?

Dec 27, 2009 | stackoverflow.com
What are your favorite (G)Vim plugins/scripts?

community wiki 2 revs ,Jun 24, 2009 at 13:35

Nerdtree

The NERD tree allows you to explore your filesystem and to open files and directories. It presents the filesystem to you in the form of a tree which you manipulate with the keyboard and/or mouse. It also allows you to perform simple filesystem operations.

The tree can be toggled easily with :NERDTreeToggle which can be mapped to a more suitable key. The keyboard shortcuts in the NERD tree are also easy and intuitive.

Edit: Added synopsis

SpoonMeiser ,Sep 17, 2008 at 19:32

For those of us not wanting to follow every link to find out about each plugin, care to furnish us with a brief synopsis? – SpoonMeiser Sep 17 '08 at 19:32

AbdullahDiaa ,Sep 10, 2012 at 19:51

and NERDTree with NERDTreeTabs are awesome combination github.com/jistr/vim-nerdtree-tabs – AbdullahDiaa Sep 10 '12 at 19:51

community wiki 2 revs ,May 27, 2010 at 0:08

Tim Pope has some kickass plugins. I love his surround plugin.

Taurus Olson ,Feb 21, 2010 at 18:01

Surround is a great plugin for sure. – Taurus Olson Feb 21 '10 at 18:01

Benjamin Oakes ,May 27, 2010 at 0:11

Link to all his vim contributions: vim.org/account/profile.php?user_id=9012 – Benjamin Oakes May 27 '10 at 0:11

community wiki SergioAraujo, Mar 15, 2011 at 15:35

The Pathogen plugin, and more things commented on by Steve Losh

Patrizio Rullo ,Sep 26, 2011 at 12:11

Pathogen is the FIRST plugin you have to install on every Vim installation! It resolves the plugin management problems every Vim developer has. – Patrizio Rullo Sep 26 '11 at 12:11

Profpatsch ,Apr 12, 2013 at 8:53

I would recommend switching to Vundle . It's better by a long shot and truly automates. You can give vim-addon-manager a try, too. – Profpatsch Apr 12 '13 at 8:53

community wiki JPaget, Sep 15, 2008 at 20:47

Taglist , a source code browser plugin for Vim, is currently the top rated plugin at the Vim website and is my favorite plugin.

mindthief ,Jun 27, 2012 at 20:53

A more recent alternative to this is Tagbar , which appears to have some improvements over Taglist. This blog post offers a comparison between the two plugins. – mindthief Jun 27 '12 at 20:53

community wiki 1passenger, Nov 17, 2009 at 9:15

I love snipMate . It's similar to snippetsEmu, but has a much better syntax to read (like Textmate).

community wiki cschol, Aug 22, 2008 at 4:19

A very nice grep replacement for GVim is Ack . A search plugin written in Perl that beats Vim's internal grep implementation and externally invoked greps, too. By default it also skips version-control directories (e.g. .svn) in the project tree. This blog shows a way to integrate Ack with vim.

FUD, Aug 27, 2013 at 15:50

github.com/mileszs/ack.vim – FUD Aug 27 '13 at 15:50

community wiki Dominic Dos Santos ,Sep 12, 2008 at 12:44

A.vim is a great little plugin. It allows you to quickly switch between header and source files with a single command. The default is :A , but I remapped it to F2 to reduce keystrokes.

community wiki 2 revs, Aug 25, 2008 at 15:06

I really like the SuperTab plugin, it allows you to use the tab key to do all your insert completions.

community wiki Greg Hewgill, Aug 25, 2008 at 19:23

I have recently started using a plugin that highlights differences in your buffer from a previous version in your RCS system (Subversion, git, whatever). You just need to press a key to toggle the diff display on/off. You can find it here: http://github.com/ghewgill/vim-scmdiff . Patches welcome!

Nathan Fellman, Sep 15, 2008 at 18:51

Do you know if this supports bitkeeper? I looked on the website but couldn't even see whom to ask. – Nathan Fellman Sep 15 '08 at 18:51

Greg Hewgill, Sep 16, 2008 at 9:26

It doesn't explicitly support bitkeeper at the moment, but as long as bitkeeper has a "diff" command that outputs a normal patch file, it should be easy enough to add. – Greg Hewgill Sep 16 '08 at 9:26

Yogesh Arora, Mar 10, 2010 at 0:47

does it support clearcase – Yogesh Arora Mar 10 '10 at 0:47

Greg Hewgill, Mar 10, 2010 at 1:39

@Yogesh: No, it doesn't support ClearCase at this time. However, if you can add ClearCase support, a patch would certainly be accepted. – Greg Hewgill Mar 10 '10 at 1:39

Olical ,Jan 23, 2013 at 11:05

This version can be loaded via pathogen in a git submodule: github.com/tomasv/vim-scmdiff – Olical Jan 23 '13 at 11:05

community wiki 4 revs, May 23, 2017 at 11:45

  1. Elegant (mini) buffer explorer - This is the multiple file/buffer manager I use. Takes very little screen space. It looks just like most IDEs where you have a top tab-bar with the files you've opened. I've tested some other similar plugins before, and this is my pick.
  2. TagList - Small file explorer, without the "extra" stuff the other file explorers have. Just lets you browse directories and open files with the "enter" key. Note that this has already been noted by previous commenters to your questions.
  3. SuperTab - Already noted by WMR in this post, looks very promising. It's an auto-completion replacement key for Ctrl-P.
  4. Desert256 color Scheme - Readable, dark one.
  5. Moria color scheme - Another good, dark one. Note that it's gVim only.
  6. Enhanced Python syntax - If you're using Python, this is an enhanced syntax version. Works better than the original. I'm not sure, but this might be already included in the newest version. Nonetheless, it's worth adding to your syntax folder if you need it.
  7. Enhanced JavaScript syntax - Same as the above.
  8. Comments - Great little plugin to [un]comment chunks of text. Language recognition included ("#", "//", "/* .. */", etc.).

community wiki Konrad Rudolph, Aug 25, 2008 at 14:19

Not a plugin, but I advise any Mac user to switch to the MacVim distribution which is vastly superior to the official port.

As for plugins, I used VIM-LaTeX for my thesis and was very satisfied with the usability boost. I also like the Taglist plugin which makes use of the ctags library.

community wiki Yariv ,Nov 25, 2010 at 19:58

clang complete - the best c++ code completion I have seen so far. By using an actual compiler (that would be clang) the plugin is able to complete complex expressions including STL and smart pointers.

community wiki Greg Bowyer, Jul 30, 2009 at 19:51

No one has said matchit yet? Makes HTML / XML soup much nicer: http://www.vim.org/scripts/script.php?script_id=39

community wiki 2 revs, 2 users 91% ,Nov 24, 2011 at 5:18

Tomas Restrepo posted on some great Vim scripts/plugins . He has also pointed out some nice color themes on his blog, too. Check out his Vim category .

community wiki HaskellElephant ,Mar 29, 2011 at 17:59,

With version 7.3, undo branches were added to vim. A very powerful feature, but hard to use, until Steve Losh made Gundo , which makes this feature usable with an ASCII representation of the tree and a diff of each change. A must for using undo branches.

community wiki, Auguste ,Apr 20, 2009 at 8:05

Matrix Mode .

community wiki wilhelmtell ,Dec 10, 2010 at 19:11

My latest favourite is Command-T . Granted, to install it you need to have Ruby support and you'll need to compile a C extension for Vim. But oy-yoy-yoy does this plugin make a difference in opening files in Vim!

Victor Farazdagi, Apr 19, 2011 at 19:16

Definitely! Let not the ruby + c compiling stop you, you will be amazed on how well this plugin enhances your toolset. I have been ignoring this plugin for too long, installed it today and already find myself using NERDTree lesser and lesser. – Victor Farazdagi Apr 19 '11 at 19:16

datentyp ,Jan 11, 2012 at 12:54

With ctrlp now there is something as awesome as Command-T written in pure Vimscript! It's available at github.com/kien/ctrlp.vim – datentyp Jan 11 '12 at 12:54

FUD ,Dec 26, 2012 at 4:48

just my 2 cents.. being a naive user of both plugins, with the first few characters of a file name I saw much better results with the Command-T plugin and a lot of false positives with ctrlp. – FUD Dec 26 '12 at 4:48

community wiki f3lix, Mar 15, 2011 at 12:55

Conque Shell : Run interactive commands inside a Vim buffer

Conque is a Vim plugin which allows you to run interactive programs, such as bash on linux or powershell.exe on Windows, inside a Vim buffer. In other words it is a terminal emulator which uses a Vim buffer to display the program output.

http://code.google.com/p/conque/

http://www.vim.org/scripts/script.php?script_id=2771

community wiki 2 revs ,Nov 20, 2009 at 14:51

The vcscommand plugin provides global ex commands for manipulating version-controlled source files, and it supports CVS, SVN and some other repositories.

You can do almost all repository related tasks from with in vim:
* Taking the diff of current buffer with repository copy
* Adding new files
* Reverting the current buffer to the repository copy by nullifying the local changes....

community wiki Sirupsen ,Nov 20, 2009 at 15:00

Just gonna name a few I didn't see here, but which I still find extremely helpful:

community wiki thestoneage ,Dec 22, 2011 at 16:25

One plugin that is missing in the answers is NERDCommenter , which lets you do almost anything with comments. For example {add, toggle, remove} comments. And more. See this blog entry for some examples.

community wiki james ,Feb 19, 2010 at 7:17

I like taglist and fuzzyfinder; those are very cool plugins

community wiki JAVH ,Aug 15, 2010 at 11:54

TaskList

This script is based on the eclipse Task List. It will search the file for FIXME, TODO, and XXX (or a custom list) and put them in a handy list for you to browse which at the same time will update the location in the document so you can see exactly where the tag is located. Something like an interactive 'cw'

community wiki Peter Hoffmann ,Aug 29, 2008 at 4:07

I really love the snippetsEmu Plugin. It emulates some of the behaviour of Snippets from the OS X editor TextMate, in particular the variable bouncing and replacement behaviour.

community wiki Anon ,Sep 11, 2008 at 10:20

Zenburn color scheme and good fonts - Droid Sans Mono ( http://en.wikipedia.org/wiki/Droid_(font) ) on Linux, Consolas on Windows.

Gary Willoughby ,Jul 7, 2011 at 21:21

Take a look at DejaVu Sans Mono too dejavu-fonts.org/wiki/Main_Page – Gary Willoughby Jul 7 '11 at 21:21

Santosh Kumar ,Mar 28, 2013 at 4:48

Droid Sans Mono makes capital M and 0 appear the same. – Santosh Kumar Mar 28 '13 at 4:48

community wiki julienXX ,Jun 22, 2010 at 12:05

If you're on a Mac, you got to use peepopen , fuzzyfinder on steroids.

Khaja Minhajuddin ,Apr 5, 2012 at 9:24

Command+T is a free alternative to this: github.com/wincent/Command-T – Khaja Minhajuddin Apr 5 '12 at 9:24

community wiki Peter Stuifzand ,Aug 25, 2008 at 19:16

I use the following two plugins all the time:

Csaba_H ,Jun 24, 2009 at 13:47

vimoutliner is really good for managing small pieces of information (from tasks/todo-s to links) – Csaba_H Jun 24 '09 at 13:47

ThiefMaster ♦ ,Nov 25, 2010 at 20:35

Adding some links/descriptions would be nice – ThiefMaster ♦ Nov 25 '10 at 20:35

community wiki chiggsy ,Aug 26, 2009 at 18:22

For vim I like a little help with completions. Vim has tons of completion modes, but really, I just want vim to complete anything it can, whenever it can.

I hate typing ending quotes, but fortunately this plugin obviates the need for such misery.

Those two are my heavy hitters.

This one may step up to roam my code like an unquiet shade, but I've yet to try it.

community wiki Brett Stahlman, Dec 11, 2009 at 13:28

Txtfmt (The Vim Highlighter) Screenshots

The Txtfmt plugin gives you a sort of "rich text" highlighting capability, similar to what is provided by RTF editors and word processors. You can use it to add colors (foreground and background) and formatting attributes (all combinations of bold, underline, italic, etc...) to your plain text documents in Vim.

The advantage of this plugin over something like Latex is that with Txtfmt, your highlighting changes are visible "in real time", and as with a word processor, the highlighting is WYSIWYG. Txtfmt embeds special tokens directly in the file to accomplish the highlighting, so the highlighting is unaffected when you move the file around, even from one computer to another. The special tokens are hidden by the syntax; each appears as a single space. For those who have applied Vince Negri's conceal/ownsyntax patch, the tokens can even be made "zero-width".

community wiki 2 revs, Dec 10, 2010 at 4:37

tcomment

"I map the "Command + /" keys so i can just comment stuff out while in insert mode imap :i

[Oct 21, 2018] What is your most productive shortcut with Vim?

Notable quotes:
"... less productive ..."
"... column oriented ..."
Feb 17, 2013 | stackoverflow.com
I've heard a lot about Vim, both pros and cons. It really seems you should be (as a developer) faster with Vim than with any other editor. I'm using Vim to do some basic stuff and I'm at best 10 times less productive with Vim.

The only two things you should care about when you talk about speed (you may not care enough about them, but you should) are:

  1. Using alternatively left and right hands is the fastest way to use the keyboard.
  2. Never touching the mouse is the second way to be as fast as possible. It takes ages for you to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you often have to look at the keyboard to be sure you returned your hand properly to the right place)

Here are two examples demonstrating why I'm far less productive with Vim.

Copy/Cut & paste. I do it all the time. With all the contemporary editors you press Shift with the left hand, and you move the cursor with your right hand to select text. Then Ctrl + C copies, you move the cursor and Ctrl + V pastes.

With Vim it's horrible:

Another example? Search & replace.

And everything with Vim is like that: it seems I don't know how to handle it the right way.

NB : I've already read the Vim cheat sheet :)

My question is: What is the way you use Vim that makes you more productive than with a contemporary editor?

community wiki 18 revs, 16 users 64%, Dec 22, 2011 at 11:43

Your problem with Vim is that you don't grok vi .

You mention cutting with yy and complain that you almost never want to cut whole lines. In fact programmers, editing source code, very often want to work on whole lines, ranges of lines and blocks of code. However, yy is only one of many way to yank text into the anonymous copy buffer (or "register" as it's called in vi ).

The "Zen" of vi is that you're speaking a language. The initial y is a verb. The statement yy is a synonym for y_ . The y is doubled up to make it easier to type, since it is such a common operation.

This can also be expressed as dd P (delete the current line and paste a copy back into place; leaving a copy in the anonymous register as a side effect). The y and d "verbs" take any movement as their "subject." Thus yW is "yank from here (the cursor) to the end of the current/next (big) word" and y'a is "yank from here to the line containing the mark named ' a '."

If you only understand basic up, down, left, and right cursor movements then vi will be no more productive than a copy of "notepad" for you. (Okay, you'll still have syntax highlighting and the ability to handle files larger than a piddling ~45KB or so; but work with me here).

vi has 26 "marks" and 26 "registers." A mark is set to any cursor location using the m command. Each mark is designated by a single lower case letter. Thus ma sets the ' a ' mark to the current location, and mz sets the ' z ' mark. You can move to the line containing a mark using the ' (single quote) command. Thus 'a moves to the beginning of the line containing the ' a ' mark. You can move to the precise location of any mark using the ` (backquote) command. Thus `z will move directly to the exact location of the ' z ' mark.

Because these are "movements" they can also be used as subjects for other "statements."

So, one way to cut an arbitrary selection of text would be to drop a mark (I usually use ' a ' as my "first" mark, ' z ' as my next mark, ' b ' as another, and ' e ' as yet another (I don't recall ever having interactively used more than four marks in 15 years of using vi ; one creates one's own conventions regarding how marks and registers are used by macros that don't disturb one's interactive context). Then we go to the other end of our desired text; we can start at either end, it doesn't matter. Then we can simply use d`a to cut or y`a to copy. Thus the whole process has a 5 keystrokes overhead (six if we started in "insert" mode and needed to Esc out command mode). Once we've cut or copied then pasting in a copy is a single keystroke: p .
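
Spelled out as keystrokes, that mark-based cut looks like this (any motion, including a search, can stand in for the middle step; foo is just a placeholder):

ma          drop mark a at one end of the text
/foo<CR>    move to the other end, here by searching for foo
d`a         cut everything between the exact mark position and the cursor
p           paste it back after the cursor, wherever you like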

I say that this is one way to cut or copy text. However, it is only one of many. Frequently we can more succinctly describe the range of text without moving our cursor around and dropping a mark. For example if I'm in a paragraph of text I can use { and } movements to the beginning or end of the paragraph respectively. So, to move a paragraph of text I cut it using { d} (3 keystrokes). (If I happen to already be on the first or last line of the paragraph I can then simply use d} or d{ respectively.

The notion of "paragraph" defaults to something which is usually intuitively reasonable. Thus it often works for code as well as prose.

Frequently we know some pattern (regular expression) that marks one end or the other of the text in which we're interested. Searching forwards or backwards are movements in vi . Thus they can also be used as "subjects" in our "statements." So I can use d/foo to cut from the current line to the next line containing the string "foo" and y?bar to copy from the current line to the most recent (previous) line containing "bar." If I don't want whole lines I can still use the search movements (as statements of their own), drop my mark(s) and use the `x commands as described previously.

In addition to "verbs" and "subjects" vi also has "objects" (in the grammatical sense of the term). So far I've only described the use of the anonymous register. However, I can use any of the 26 "named" registers by prefixing the "object" reference with " (the double quote modifier). Thus if I use "add I'm cutting the current line into the ' a ' register and if I use "by/foo then I'm yanking a copy of the text from here to the next line containing "foo" into the ' b ' register. To paste from a register I simply prefix the paste with the same modifier sequence: "ap pastes a copy of the ' a ' register's contents into the text after the cursor and "bP pastes a copy from ' b ' to before the current line.

This notion of "prefixes" also adds the analogs of grammatical "adjectives" and "adverbs' to our text manipulation "language." Most commands (verbs) and movement (verbs or objects, depending on context) can also take numeric prefixes. Thus 3J means "join the next three lines" and d5} means "delete from the current line through the end of the fifth paragraph down from here."

This is all intermediate level vi . None of it is Vim specific and there are far more advanced tricks in vi if you're ready to learn them. If you were to master just these intermediate concepts then you'd probably find that you rarely need to write any macros because the text manipulation language is sufficiently concise and expressive to do most things easily enough using the editor's "native" language.


A sampling of more advanced tricks:

There are a number of : commands, most notably the :% s/foo/bar/g global substitution technique. (That's not advanced but other : commands can be). The whole : set of commands was historically inherited by vi 's previous incarnations as the ed (line editor) and later the ex (extended line editor) utilities. In fact vi is so named because it's the visual interface to ex .

: commands normally operate over lines of text. ed and ex were written in an era when terminal screens were uncommon and many terminals were "teletype" (TTY) devices. So it was common to work from printed copies of the text, using commands through an extremely terse interface (common connection speeds were 110 baud, or, roughly, 11 characters per second -- which is slower than a fast typist; lags were common on multi-user interactive sessions; additionally there was often some motivation to conserve paper).

So the syntax of most : commands includes an address or range of addresses (line number) followed by a command. Naturally one could use literal line numbers: :127,215 s/foo/bar to change the first occurrence of "foo" into "bar" on each line between 127 and 215. One could also use some abbreviations such as . or $ for current and last lines respectively. One could also use relative prefixes + and - to refer to offsets after or before the current line, respectively. Thus: :.,$j meaning "from the current line to the last line, join them all into one line". :% is synonymous with :1,$ (all the lines).

The :... g and :... v commands bear some explanation as they are incredibly powerful. :... g is a prefix for "globally" applying a subsequent command to all lines which match a pattern (regular expression) while :... v applies such a command to all lines which do NOT match the given pattern ("v" from "conVerse"). As with other ex commands these can be prefixed by addressing/range references. Thus :.,+21g/foo/d means "delete any lines containing the string "foo" from the current one through the next 21 lines" while :.,$v/bar/d means "from here to the end of the file, delete any lines which DON'T contain the string "bar."

It's interesting that the common Unix command grep was actually inspired by this ex command (and is named after the way in which it was documented). The ex command :g/re/p (grep) was the way they documented how to "globally" "print" lines containing a "regular expression" (re). When ed and ex were used, the :p command was one of the first that anyone learned and often the first one used when editing any file. It was how you printed the current contents (usually just one page full at a time using :.,+25p or some such).

Note that :% g/.../d (or its reVerse/conVerse counterpart, :% v/.../d ) are the most common usage patterns. However there are a couple of other ex commands which are worth remembering:

We can use m to move lines around, and j to join lines. For example if you have a list and you want to separate all the stuff matching (or conversely NOT matching some pattern) without deleting them, then you can use something like: :% g/foo/m$ ... and all the "foo" lines will have been moved to the end of the file. (Note the other tip about using the end of your file as a scratch space). This will have preserved the relative order of all the "foo" lines while having extracted them from the rest of the list. (This would be equivalent to doing something like: 1G!GGmap!Ggrep foo<ENTER>1G:1,'a g/foo'/d (copy the file to its own tail, filter the tail through grep, and delete all the stuff from the head).

To join lines usually I can find a pattern for all the lines which need to be joined to their predecessor (all the lines which start with "^ " rather than "^ * " in some bullet list, for example). For that case I'd use: :% g/^ /-1j (for every matching line, go up one line and join them). (BTW: for bullet lists trying to search for the bullet lines and join to the next doesn't work for a couple reasons ... it can join one bullet line to another, and it won't join any bullet line to all of its continuations; it'll only work pairwise on the matches).

Almost needless to mention you can use our old friend s (substitute) with the g and v (global/converse-global) commands. Usually you don't need to do so. However, consider some case where you want to perform a substitution only on lines matching some other pattern. Often you can use a complicated pattern with captures and use back references to preserve the portions of the lines that you DON'T want to change. However, it will often be easier to separate the match from the substitution: :% g/foo/s/bar/zzz/g -- for every line containing "foo" substitute all "bar" with "zzz." (Something like :% s/\(.*foo.*\)bar\(.*\)/\1zzz\2/g would only work for the cases those instances of "bar" which were PRECEDED by "foo" on the same line; it's ungainly enough already, and would have to be mangled further to catch all the cases where "bar" preceded "foo")

The point is that there are more than just p, s, and d lines in the ex command set.

The : addresses can also refer to marks. Thus you can use: :'a,'bg/foo/j to join any line containing the string foo to its subsequent line, if it lies between the lines between the ' a ' and ' b ' marks. (Yes, all of the preceding ex command examples can be limited to subsets of the file's lines by prefixing with these sorts of addressing expressions).

That's pretty obscure (I've only used something like that a few times in the last 15 years). However, I'll freely admit that I've often done things iteratively and interactively that could probably have been done more efficiently if I'd taken the time to think out the correct incantation.

Another very useful vi or ex command is :r to read in the contents of another file. Thus: :r foo inserts the contents of the file named "foo" at the current line.

More powerful is the :r! command. This reads the results of a command. It's the same as suspending the vi session, running a command, redirecting its output to a temporary file, resuming your vi session, and reading in the contents from the temp. file.

Even more powerful are the ! (bang) and :... ! ( ex bang) commands. These also execute external commands and read the results into the current text. However, they also filter selections of our text through the command! Thus we can sort all the lines in our file using 1G!Gsort ( G is the vi "goto" command; it defaults to going to the last line of the file, but can be prefixed by a line number, such as 1, the first line). This is equivalent to the ex variant :1,$!sort . Writers often use ! with the Unix fmt or fold utilities for reformatting or "word wrapping" selections of text. A very common macro is {!}fmt (reformat the current paragraph). Programmers sometimes use it to run their code, or just portions of it, through indent or other code reformatting tools.

Using the :r! and ! commands means that any external utility or filter can be treated as an extension of our editor. I have occasionally used these with scripts that pulled data from a database, or with wget or lynx commands that pulled data off a website, or ssh commands that pulled data from remote systems.

Another useful ex command is :so (short for :source ). This reads the contents of a file as a series of commands. When you start vi it normally, implicitly, performs a :source on ~/.exinitrc file (and Vim usually does this on ~/.vimrc, naturally enough). The use of this is that you can change your editor profile on the fly by simply sourcing in a new set of macros, abbreviations, and editor settings. If you're sneaky you can even use this as a trick for storing sequences of ex editing commands to apply to files on demand.

For example I have a seven line file (36 characters) which runs a file through wc, and inserts a C-style comment at the top of the file containing that word count data. I can apply that "macro" to a file by using a command like: vim +'so mymacro.ex' ./mytarget

(The + command line option to vi and Vim is normally used to start the editing session at a given line number. However it's a little known fact that one can follow the + by any valid ex command/expression, such as a "source" command as I've done here; for a simple example I have scripts which invoke: vi +'/foo/d|wq!' ~/.ssh/known_hosts to remove an entry from my SSH known hosts file non-interactively while I'm re-imaging a set of servers).

Usually it's far easier to write such "macros" using Perl, AWK, sed (which is, in fact, like grep a utility inspired by the ed command).

The @ command is probably the most obscure vi command. In occasionally teaching advanced systems administration courses for close to a decade I've met very few people who've ever used it. @ executes the contents of a register as if it were a vi or ex command.
Example: I often use: :r!locate ... to find some file on my system and read its name into my document. From there I delete any extraneous hits, leaving only the full path to the file I'm interested in. Rather than laboriously Tab -ing through each component of the path (or worse, if I happen to be stuck on a machine without Tab completion support in its copy of vi ) I just use:

  1. 0i:r (to turn the current line into a valid :r command),
  2. "cdd (to delete the line into the "c" register) and
  3. @c execute that command.

That's only 10 keystrokes (and the expression "cdd @c is effectively a finger macro for me, so I can type it almost as quickly as any common six letter word).


A sobering thought

I've only scratched the surface of vi 's power and none of what I've described here is even part of the "improvements" for which vim is named! All of what I've described here should work on any old copy of vi from 20 or 30 years ago.

There are people who have used considerably more of vi 's power than I ever will.

Jim Dennis, Feb 12, 2010 at 4:08

@Wahnfieden -- grok is exactly what I meant: en.wikipedia.org/wiki/Grok (It's apparently even in the OED --- the closest we anglophones have to a canonical lexicon). To "grok" an editor is to find yourself using its commands fluently ... as if they were your natural language. – Jim Dennis Feb 12 '10 at 4:08

knittl, Feb 27, 2010 at 13:15

wow, a very well written answer! i couldn't agree more, although i use the @ command a lot (in combination with q : record macro) – knittl Feb 27 '10 at 13:15

Brandon Rhodes, Mar 29, 2010 at 15:26

Superb answer that utterly redeems a really horrible question. I am going to upvote this question, that normally I would downvote, just so that this answer becomes easier to find. (And I'm an Emacs guy! But this way I'll have somewhere to point new folks who want a good explanation of what vi power users find fun about vi. Then I'll tell them about Emacs and they can decide.) – Brandon Rhodes Mar 29 '10 at 15:26

Marko, Apr 1, 2010 at 14:47

Can you make a website and put this tutorial there, so it doesn't get buried here on stackoverflow. I have yet to read a better introduction to vi than this. – Marko Apr 1 '10 at 14:47

CMS, Aug 2, 2009 at 8:27

You are talking about text selecting and copying; I think that you should take a look at the Vim Visual Mode .

In the visual mode, you are able to select text using Vim commands, then you can do whatever you want with the selection.

Consider the following common scenarios:

You need to select to the next matching parenthesis. You could do: v% if the cursor is on the starting/ending parenthesis, or vib if the cursor is inside the parenthesis block.

You want to select text between quotes: vi" for double quotes, vi' for single quotes.

You want to select a curly brace block (very common on C-style languages): viB (or vi{ ).

You want to select the entire file: ggVG

Visual block selection is another really useful feature, it allows you to select a rectangular area of text, you just have to press Ctrl - V to start it, and then select the text block you want and perform any type of operation such as yank, delete, paste, edit, etc. It's great to edit column oriented text.

finnw, Aug 2, 2009 at 8:49

Every editor has something like this, it's not specific to vim. – finnw Aug 2 '09 at 8:49

guns, Aug 2, 2009 at 9:54

Yes, but it was a specific complaint of the poster. Visual mode is Vim's best method of direct text-selection and manipulation. And since vim's buffer traversal methods are superb, I find text selection in vim fairly pleasurable. – guns Aug 2 '09 at 9:54

Hamish Downer, Mar 16, 2010 at 13:34

I think it is also worth mentioning Ctrl-V to select a block - ie an arbitrary rectangle of text. When you need it it's a lifesaver. – Hamish Downer Mar 16 '10 at 13:34

CMS, Apr 2, 2010 at 2:07

@viksit: I'm using Camtasia, but there are plenty of alternatives: codinghorror.com/blog/2006/11/screencasting-for-windows.html – CMS Apr 2 '10 at 2:07

Nathan Long, Mar 1, 2011 at 19:05

Also, if you've got a visual selection and want to adjust it, o will hop to the other end. So you can move both the beginning and the end of the selection as much as you like. – Nathan Long Mar 1 '11 at 19:05

community wiki 12 revs, 3 users 99%, Oct 29, 2012 at 18:51

Some productivity tips:

Smart movements

Quick editing commands

Combining commands

Most commands accept an amount and direction, for example:
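
A few standard count-plus-motion combinations that illustrate the pattern (not necessarily the original poster's own list):

3x       delete three characters
2dd      delete two lines
d$       delete to the end of the line
5j       move five lines down
c2w      change the next two words
y}       yank to the end of the paragraph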

Useful programmer commands

Macro recording

By using very specific commands and movements, VIM can replay those exact actions for the next lines. (e.g. A for append-to-end, b / e to move the cursor to the beginning or end of a word respectively)

Example of well built settings

# reset to vim-defaults
if &compatible          # only if not set before:
  set nocompatible      # use vim-defaults instead of vi-defaults (easier, more user friendly)
endif

# display settings
set background=dark     # enable for dark terminals
set nowrap              # dont wrap lines
set scrolloff=2         # 2 lines above/below cursor when scrolling
set number              # show line numbers
set showmatch           # show matching bracket (briefly jump)
set showmode            # show mode in status bar (insert/replace/...)
set showcmd             # show typed command in status bar
set ruler               # show cursor position in status bar
set title               # show file in titlebar
set wildmenu            # completion with menu
set wildignore=*.o,*.obj,*.bak,*.exe,*.py[co],*.swp,*~,*.pyc,.svn
set laststatus=2        # use 2 lines for the status bar
set matchtime=2         # show matching bracket for 0.2 seconds
set matchpairs+=<:>     # specially for html

# editor settings
set esckeys             # map missed escape sequences (enables keypad keys)
set ignorecase          # case insensitive searching
set smartcase           # but become case sensitive if you type uppercase characters
set smartindent         # smart auto indenting
set smarttab            # smart tab handling for indenting
set magic               # change the way backslashes are used in search patterns
set bs=indent,eol,start # Allow backspacing over everything in insert mode

set tabstop=4           # number of spaces a tab counts for
set shiftwidth=4        # spaces for autoindents
#set expandtab           # turn a tabs into spaces

set fileformat=unix     # file mode is unix
#set fileformats=unix,dos    # only detect unix file format, displays that ^M with dos files

# system settings
set lazyredraw          # no redraws in macros
set confirm             # get a dialog when :q, :w, or :wq fails
set nobackup            # no backup~ files.
set viminfo='20,\"500   # remember copy registers after quitting in the .viminfo file -- 20 jump links, regs up to 500 lines'
set hidden              # remember undo after quitting
set history=50          # keep 50 lines of command history
set mouse=v             # use mouse in visual mode (not normal,insert,command,help mode


# color settings (if terminal/gui supports it)
if &t_Co > 2 || has("gui_running")
  syntax on          # enable colors
  set hlsearch       # highlight search (very useful!)
  set incsearch      # search incremently (search while typing)
endif

# paste mode toggle (needed when using autoindent/smartindent)
map <F10> :set paste<CR>
map <F11> :set nopaste<CR>
imap <F10> <C-O>:set paste<CR>
imap <F11> <nop>
set pastetoggle=<F11>

# Use of the filetype plugins, auto completion and indentation support
filetype plugin indent on

# file type specific settings
if has("autocmd")
  # For debugging
  #set verbose=9

  # if bash is sh.
  let bash_is_sh=1

  # change to directory of current file automatically
  autocmd BufEnter * lcd %:p:h

  # Put these in an autocmd group, so that we can delete them easily.
  augroup mysettings
    au FileType xslt,xml,css,html,xhtml,javascript,sh,config,c,cpp,docbook set smartindent shiftwidth=2 softtabstop=2 expandtab
    au FileType tex set wrap shiftwidth=2 softtabstop=2 expandtab

    # Confirm to PEP8
    au FileType python set tabstop=4 softtabstop=4 expandtab shiftwidth=4 cinwords=if,elif,else,for,while,try,except,finally,def,class
  augroup END

  augroup perl
    # reset (disable previous 'augroup perl' settings)
    au!  

    au BufReadPre,BufNewFile
    \ *.pl,*.pm
    \ set formatoptions=croq smartindent shiftwidth=2 softtabstop=2 cindent cinkeys='0{,0},!^F,o,O,e' " tags=./tags,tags,~/devel/tags,~/devel/C
    # formatoption:
    #   t - wrap text using textwidth
    #   c - wrap comments using textwidth (and auto insert comment leader)
    #   r - auto insert comment leader when pressing <return> in insert mode
    #   o - auto insert comment leader when pressing 'o' or 'O'.
    #   q - allow formatting of comments with "gq"
    #   a - auto formatting for paragraphs
    #   n - auto wrap numbered lists
    #   
  augroup END


  # Always jump to the last known cursor position. 
  # Don't do it when the position is invalid or when inside
  # an event handler (happens when dropping a file on gvim). 
  autocmd BufReadPost * 
    \ if line("'\"") > 0 && line("'\"") <= line("$") | 
    \   exe "normal g`\"" | 
    \ endif 

endif # has("autocmd")

The settings can be stored in ~/.vimrc, or system-wide in /etc/vimrc.local and then be read from the /etc/vimrc file using:

source /etc/vimrc.local

(you'll have to replace the # comment character with " to make it work in VIM, I wanted to give proper syntax highlighting here).

The commands I've listed here are pretty basic, and the main ones I use so far. They already make me quite more productive, without having to know all the fancy stuff.

naught101, Apr 28, 2012 at 2:09

Better than '. is g;, which jumps back through the changelist . Goes to the last edited position, instead of last edited line – naught101 Apr 28 '12 at 2:09

community wiki 5 revs, 4 users 53%, Apr 12, 2012 at 7:46

The Control + R mechanism is very useful :-) In either insert mode or command mode (i.e. on the : line when typing commands), continue with a numbered or named register:
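
Some registers worth reaching for here (all documented under :help i_CTRL-R ):

<C-r>"    the unnamed register (most recent yank or delete)
<C-r>0    the yank register
<C-r>%    the current file name
<C-r>#    the alternate file name
<C-r>/    the last search pattern
<C-r>:    the last command line
<C-r>.    the last inserted text
<C-r>=    prompt for an expression and insert its result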

See :help i_CTRL-R and :help c_CTRL-R for more details, and snoop around nearby for more CTRL-R goodness.

vdboor, Jun 3, 2010 at 9:08

FYI, this refers to Ctrl+R in insert mode . In normal mode, Ctrl+R is redo. – vdboor Jun 3 '10 at 9:08

Aryeh Leib Taurog, Feb 26, 2012 at 19:06

+1 for current/alternate file name. Control-A also works in insert mode for last inserted text, and Control-@ to both insert last inserted text and immediately switch to normal mode. – Aryeh Leib Taurog Feb 26 '12 at 19:06

community wiki Benson, Apr 1, 2010 at 3:44

Vim Plugins

There are a lot of good answers here, and one amazing one about the zen of vi. One thing I don't see mentioned is that vim is extremely extensible via plugins. There are scripts and plugins to make it do all kinds of crazy things the original author never considered. Here are a few examples of incredibly handy vim plugins:

rails.vim

Rails.vim is a plugin written by tpope. It's an incredible tool for people doing rails development. It does magical context-sensitive things that allow you to easily jump from a method in a controller to the associated view, over to a model, and down to unit tests for that model. It has saved me dozens if not hundreds of hours as a rails developer.

gist.vim

This plugin allows you to select a region of text in visual mode and type a quick command to post it to gist.github.com . This allows for easy pastebin access, which is incredibly handy if you're collaborating with someone over IRC or IM.

space.vim

This plugin provides special functionality to the spacebar. It turns the spacebar into something analogous to the period, but instead of repeating actions it repeats motions. This can be very handy for moving quickly through a file in a way you define on the fly.

surround.vim

This plugin gives you the ability to work with text that is delimited in some fashion. It gives you objects which denote things inside of parens, things inside of quotes, etc. It can come in handy for manipulating delimited text.

supertab.vim

This script brings fancy tab completion functionality to vim. The autocomplete stuff is already there in the core of vim, but this brings it to a quick tab rather than multiple different multikey shortcuts. Very handy, and incredibly fun to use. While it's not VS's intellisense, it's a great step and brings a great deal of the functionality you'd like to expect from a tab completion tool.

syntastic.vim

This tool brings external syntax checking commands into vim. I haven't used it personally, but I've heard great things about it and the concept is hard to beat. Checking syntax without having to do it manually is a great time saver and can help you catch syntactic bugs as you introduce them rather than when you finally stop to test.

fugitive.vim

Direct access to git from inside of vim. Again, I haven't used this plugin, but I can see the utility. Unfortunately I'm in a culture where svn is considered "new", so I won't likely see git at work for quite some time.

nerdtree.vim

A tree browser for vim. I started using this recently, and it's really handy. It lets you put a treeview in a vertical split and open files easily. This is great for a project with a lot of source files you frequently jump between.

FuzzyFinderTextmate.vim

This is an unmaintained plugin, but still incredibly useful. It provides the ability to open files using a "fuzzy" descriptive syntax. It means that in a sparse tree of files you need only type enough characters to disambiguate the files you're interested in from the rest of the cruft.

Conclusion

There are a lot of incredible tools available for vim. I'm sure I've only scratched the surface here, and it's well worth searching for tools applicable to your domain. The combination of traditional vi's powerful toolset, vim's improvements on it, and plugins which extend vim even further makes it one of the most powerful ways to edit text ever conceived. Vim is easily as powerful as emacs, eclipse, visual studio, and textmate.

Thanks

Thanks to duwanis for his vim configs from which I have learned much and borrowed most of the plugins listed here.

Tom Morris, Apr 1, 2010 at 8:50

The magical tests-to-class navigation in rails.vim is one of the more general things I wish Vim had that TextMate absolutely nails across all languages: if I am working on Person.scala and I do Cmd+T, usually the first thing in the list is PersonTest.scala. – Tom Morris Apr 1 '10 at 8:50

Gavin Gilmour, Jan 15, 2011 at 13:44

I think it's time FuzzyFinderTextmate started to get replaced with github.com/wincent/Command-T – Gavin Gilmour Jan 15 '11 at 13:44

Nathan Long, Mar 1, 2011 at 19:07

+1 for Syntastic. That, combined with JSLint, has made my Javascript much less error-prone. See superuser.com/questions/247012/ about how to set up JSLint to work with Syntastic. – Nathan Long Mar 1 '11 at 19:07

AlG, Sep 13, 2011 at 17:37

@Benson Great list! I'd toss in snipMate as well. Very helpful automation of common coding stuff. if<tab> instant if block, etc. – AlG Sep 13 '11 at 17:37

EarlOfEgo, May 12, 2012 at 15:13

I think nerdcommenter is also a good plugin: here . Like its name says, it is for commenting your code. – EarlOfEgo May 12 '12 at 15:13

community wiki 4 revs, 2 users 89%, Mar 31, 2010 at 23:01

. Repeat last text-changing command

I save a lot of time with this one.

Visual mode was mentioned previously, but block visual mode has saved me a lot of time when editing fixed size columns in text file. (accessed with Ctrl-V).

vdboor, Apr 1, 2010 at 8:34

Additionally, if you use a concise command (e.g. A for append-at-end) to edit the text, vim can repeat that exact same action for the next line you press the . key at. – vdboor Apr 1 '10 at 8:34

community wiki 3 revs, 3 users 87%, Dec 24, 2012 at 14:50

gi

Go to last edited location (very useful if you performed some searching and then want to go back to edit)

^P and ^N

Complete previous (^P) or next (^N) text.

^O and ^I

Go to previous ( ^O - "O" for old) location or to the next ( ^I - "I" is just next to "O" on the keyboard). When you perform searches, edit files etc., you can navigate through these "jumps" forward and back.

R. Martinho Fernandes, Apr 1, 2010 at 3:02

Thanks for gi ! Now I don't need marks for that! – R. Martinho Fernandes Apr 1 '10 at 3:02

Kungi, Feb 10, 2011 at 16:23

I Think this can also be done with `` – Kungi Feb 10 '11 at 16:23

Grant McLean, Aug 23, 2011 at 8:21

@Kungi `. will take you to the last edit `` will take you back to the position you were in before the last 'jump' - which /might/ also be the position of the last edit. – Grant McLean Aug 23 '11 at 8:21

community wiki Ronny Brendel, Mar 31, 2010 at 19:37

I recently discovered this site: http://vimcasts.org/

It's pretty new and really really good. The guy who is running the site switched from textmate to vim and hosts very good and concise casts on specific vim topics. Check it out!

Jeromy Anglim, Jan 13, 2011 at 6:40

If you like vim tutorials, check out Derek Wyatt's vim videos as well. They're excellent. – Jeromy Anglim Jan 13 '11 at 6:40

community wiki 2 revs, 2 users 67%, Feb 27, 2010 at 11:20

CTRL + A increments the number you are standing on.

innaM, Aug 3, 2009 at 9:14

... and CTRL-X decrements. – innaM Aug 3 '09 at 9:14

SolutionYogi, Feb 26, 2010 at 20:43

It's a neat shortcut but so far I have NEVER found any use for it. – SolutionYogi Feb 26 '10 at 20:43

matja, Feb 27, 2010 at 14:21

if you run vim in screen and wonder why this doesn't work - ctrl+A, A – matja Feb 27 '10 at 14:21

hcs42, Feb 27, 2010 at 19:05

@SolutionYogi: Consider that you want to add line number to the beginning of each line. Solution: ggI1<space><esc>0qqyawjP0<c-a>0q9999@q – hcs42 Feb 27 '10 at 19:05
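
That one-liner unpacks as follows (9999 is simply "more repetitions than the file has lines"; the replay stops when j fails on the last line):

gg              go to the first line
I1<space><Esc>  insert "1 " at the start of it
0               move back to column one
qq              start recording into register q
yaw             yank the number and its trailing space as a word
j               move down one line
P               paste it at the start of that line
0               back to column one
<c-a>           increment the pasted number
0               back to column one again
q               stop recording
9999@q          replay until it runs off the end of the file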

blueyed, Apr 1, 2010 at 14:47

Extremely useful with Vimperator, where it increments (or decrements, Ctrl-X) the last number in the URL. Useful for quickly surfing through image galleries etc. – blueyed Apr 1 '10 at 14:47

community wiki 3 revs, Aug 28, 2009 at 15:23

All in Normal mode:

f<char> to move to the next instance of a particular character on the current line, and ; to repeat.

F<char> to move to the previous instance of a particular character on the current line and ; to repeat.

If used intelligently, the above two can make you killer-quick moving around in a line.

* on a word to search for the next instance.

# on a word to search for the previous instance.

Jim Dennis, Mar 14, 2010 at 6:38

Whoa, I didn't know about the * and # (search forward/back for word under cursor) binding. That's kinda cool. The f/F and t/T and ; commands are quick jumps to characters on the current line. f/F put the cursor on the indicated character while t/T puts it just up "to" the character (the character just before or after it according to the direction chosen. ; simply repeats the most recent f/F/t/T jump (in the same direction). – Jim Dennis Mar 14 '10 at 6:38

Steve K, Apr 3, 2010 at 23:50

:) The tagline at the top of the tips page at vim.org: "Can you imagine how many keystrokes could have been saved, if I only had known the "*" command in time?" - Juergen Salk, 1/19/2001" – Steve K Apr 3 '10 at 23:50

puk, Feb 24, 2012 at 6:45

As Jim mentioned, the "t/T" combo is often just as good, if not better; for example, ct( will erase the word and put you in insert mode, but keep the parentheses! – puk Feb 24 '12 at 6:45
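
A small sketch of how these motions combine with operators (hypothetical line; standard Vim behavior). Given:

result = compute(alpha, beta)

with the cursor at the start of the line:

f(       jump to the opening parenthesis
ci(      change everything inside the parentheses
<Esc>    leave insert mode when done
;        repeat the last f jump if you need it again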

community wiki agfe2, Aug 19, 2010 at 8:08

Session

a. save session

:mks sessionname

b. force save session

:mks! sessionname

c. load session

gvim or vim -S sessionname


Adding and Subtracting

a. Adding and Subtracting

CTRL-A ;Add [count] to the number or alphabetic character at or after the cursor. {not in Vi}

CTRL-X ;Subtract [count] from the number or alphabetic character at or after the cursor. {not in Vi}

b. Window key unmapping

On Windows, Ctrl-A is already mapped to whole-file selection (by mswin.vim), so you need to unmap it in your rc file: either mark the mswin.vim CTRL-A mapping part as a comment, or add an unmap line to your rc file.

c. With Macro

The CTRL-A command is very useful in a macro. Example: Use the following steps to make a numbered list.

  1. Create the first list entry, make sure it starts with a number.
  2. qa - start recording into buffer 'a'
  3. Y - yank the entry
  4. p - put a copy of the entry below the first one
  5. CTRL-A - increment the number
  6. q - stop recording
  7. @a - repeat the yank, put and increment (prefix with a count to repeat it that many times)
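
Rendered as keystrokes, starting from a single line that reads "1. first entry", the steps above become (the count 5 is arbitrary):

qa       start recording into register a
Y        yank the line
p        put a copy below
<C-a>    increment the number in the copy
q        stop recording
5@a      five more numbered entries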

Don Reba, Aug 22, 2010 at 5:22

Any idea what the shortcuts are in Windows? – Don Reba Aug 22 '10 at 5:22

community wiki 8 revs, 2 users 98%, Aug 18, 2012 at 21:44

Last week at work our project inherited a lot of Python code from another project. Unfortunately the code did not fit into our existing architecture - it was all done with global variables and functions, which would not work in a multi-threaded environment.

We had ~80 files that needed to be reworked to be object oriented - all the functions moved into classes, parameters changed, import statements added, etc. We had a list of about 20 types of fix that needed to be done to each file. I would estimate that doing it by hand one person could do maybe 2-4 per day.

So I did the first one by hand and then wrote a vim script to automate the changes. Most of it was a list of vim commands e.g.

" delete an un-needed function "
g/someFunction(/ d

" add wibble parameter to function foo "
%s/foo(/foo( wibble,/

" convert all function calls bar(thing) into method calls thing.bar() "
g/bar(/ normal nmaf(ldi(`aPa.

The last one deserves a bit of explanation:

g/bar(/  executes the following command on every line that contains "bar("
normal   execute the following text as if it was typed in in normal mode
n        goes to the next match of "bar(" (since the :g command leaves the cursor position at the start of the line)
ma       saves the cursor position in mark a
f(       moves forward to the next opening bracket
l        moves right one character, so the cursor is now inside the brackets
di(      delete all the text inside the brackets
`a       go back to the position saved as mark a (i.e. the first character of "bar")
P        paste the deleted text before the current cursor position
a.       go into insert mode and add a "."

For a couple of more complex transformations such as generating all the import statements I embedded some python into the vim script.

After a few hours of working on it I had a script that will do at least 95% of the conversion. I just open a file in vim then run :source fixit.vim and the file is transformed in the blink of an eye.

We still have the work of changing the remaining 5% that was not worth automating and of testing the results, but by spending a day writing this script I estimate we have saved weeks of work.

Of course it would have been possible to automate this with a scripting language like Python or Ruby, but it would have taken far longer to write and would be less flexible - the last example would have been difficult since regex alone would not be able to handle nested brackets, e.g. to convert bar(foo(xxx)) to foo(xxx).bar() . Vim was perfect for the task.

Olivier Pons, Feb 28, 2010 at 14:41

Thanks a lot for sharing; it's really nice to learn from "useful & not classical" macros. – Olivier Pons Feb 28 '10 at 14:41

Ipsquiggle, Mar 23, 2010 at 16:55

%s/\(bar\)(\(.\+\))/\2.\1()/ would do that too. (Escapes are compatible with :set magic .) Just for the record. :) – Ipsquiggle Mar 23 '10 at 16:55

Ipsquiggle, Mar 23, 2010 at 16:56

Or if you don't like vim-style escapes, use \v to turn on Very Magic: %s/\v(bar)\((.+)\)/\2.\1()/ – Ipsquiggle Mar 23 '10 at 16:56

Dave Kirby, Mar 23, 2010 at 17:16

@lpsquiggle: your suggestion would not handle complex expressions with more than one set of brackets. e.g. if bar(foo(xxx)) or wibble(xxx): becomes if foo(xxx)) or wibble(xxx.bar(): which is completely wrong. – Dave Kirby Mar 23 '10 at 17:16

community wiki 2 revs, Aug 2, 2009 at 11:17

Use the builtin file explorer! The command is :Explore and it allows you to navigate through your source code very very fast. I have these mapping in my .vimrc :
map <silent> <F8>   :Explore<CR>
map <silent> <S-F8> :sp +Explore<CR>

The explorer allows you to make file modifications, too. I'll post some of my favorite keys, pressing <F1> will give you the full list:

Svend, Aug 2, 2009 at 8:48

I always thought the default methods for browsing kinda sucked for most stuff. It's just slow to browse, if you know where you wanna go. LustyExplorer from vim.org's script section is a much needed improvement. – Svend Aug 2 '09 at 8:48

Taurus Olson, Aug 6, 2009 at 17:37

Your second mapping could be more simple: map <silent> <S-F8> :Sexplore<CR> – Taurus Olson Aug 6 '09 at 17:37

kprobst, Apr 1, 2010 at 3:53

I recommend NERDtree instead of the built-in explorer. It has changed the way I used vim for projects and made me much more productive. Just google for it. – kprobst Apr 1 '10 at 3:53

dash-tom-bang, Aug 24, 2011 at 0:35

I never feel the need to explore the source tree, I just use :find, :tag and the various related keystrokes to jump around. (Maybe this is because the source trees I work on are big and organized differently than I would have done? :) ) – dash-tom-bang Aug 24 '11 at 0:35

community wiki, 2 revs, 2 users 92%, Jun 15, 2011 at 13:39

I am a member of the American Cryptogram Association. The bimonthly magazine includes over 100 cryptograms of various sorts. Roughly 15 of these are "cryptarithms" - various types of arithmetic problems with letters substituted for the digits. Two or three of these are sudokus, except with letters instead of numbers. When the grid is completed, the nine distinct letters will spell out a word or words, on some line, diagonal, spiral, etc., somewhere in the grid.

Rather than working with pencil, or typing the problems in by hand, I download the problems from the members area of their website.

When working with these sudokus, I use vi, simply because I'm using facilities that vi has that few other editors have - mostly converting the lettered grid into a numbered grid, because I find it easier to solve, and then converting the completed numbered grid back into the lettered grid to find the solution word or words.

The problem is formatted as nine groups of nine letters, with - s representing the blanks, written in two lines. The first step is to format these into nine lines of nine characters each. There's nothing special about this, just inserting eight linebreaks in the appropriate places.
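If the 81 characters have first been joined onto a single line, a single substitute can make all eight splits at once (a sketch; the \ze ends the match so a newline lands after every ninth character):

:s/.\{9}\ze./&\r/g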

The result will look like this:

T-O-----C
-E-----S-
--AT--N-L
---NASO--
---E-T---
--SPCL---
E-T--OS--
-A-----P-
S-----C-T

So, first step in converting this into numbers is to make a list of the distinct letters. First, I make a copy of the block. I position the cursor at the top of the block, then type :y}}p . : puts me in command mode, y yanks the next movement command. Since } is a move to the end of the next paragraph, y} yanks the paragraph. } then moves the cursor to the end of the paragraph, and p pastes what we had yanked just after the cursor. So y}}p creates a copy of the next paragraph, and ends up with the cursor between the two copies.

Next, I turn one of those copies into a list of distinct letters. That command is a bit more complex:

:!}tr -cd A-Z | sed 's/\(.\)/\1\n/g' | sort -u | tr -d '\n'

: again puts me in command mode. ! indicates that the content of the next yank should be piped through a command line. } yanks the next paragraph, and the command line then uses the tr command to strip out everything except for upper-case letters, the sed command to print each letter on a single line, and the sort command to sort those lines, removing duplicates, and then tr strips out the newlines, leaving the nine distinct letters in a single line, replacing the nine lines that had made up the paragraph originally. In this case, the letters are: ACELNOPST .

Next step is to make another copy of the grid. And then to use the letters I've just identified to replace each of those letters with a digit (the nine letters map onto the digits 0 to 8). That's simple: :!}tr ACELNOPST 0-9 . The result is:

8-5-----1
-2-----7-
--08--4-3
---4075--
---2-8---
--7613---
2-8--57--
-0-----6-
7-----1-8

This can then be solved in the usual way, or entered into any sudoku solver you might prefer. The completed solution can then be converted back into letters with :!}tr 0-8 ACELNOPST .

There is power in vi that is matched by very few others. The biggest problem is that only a very few of the vi tutorial books, websites, help-files, etc., do more than barely touch the surface of what is possible.

hhh, Jan 14, 2011 at 17:12

and an irritation is that some distros such as ubuntu have aliases from the word "vi" to "vim" so people won't really see vi. Excellent example, have to try... +1 – hhh Jan 14 '11 at 17:12

dash-tom-bang, Aug 24, 2011 at 0:45

Doesn't vim check the name it was started with so that it can come up in the right 'mode'? – dash-tom-bang Aug 24 '11 at 0:45

sehe, Mar 4, 2012 at 20:47

I'm baffled by this repeated error: you say you need : to go into command mode, but then invariably you specify normal mode commands (like y}}p ) which cannot possibly work from the command mode?! – sehe Mar 4 '12 at 20:47

sehe, Mar 4, 2012 at 20:56

My take on the unique chars challenge: :se tw=1 fo= (preparation) VG:s/./& /g (insert spaces), gvgq (split onto separate lines), V{:sort u (sort and remove duplicates) – sehe Mar 4 '12 at 20:56

community wiki, jqno, Aug 2, 2009 at 8:59

Bulk text manipulations!

Either through macros, or through regular expressions.

(But be warned: if you do the latter, you'll have 2 problems :).)

Jim Dennis, Jan 10, 2010 at 4:03

+1 for the Jamie Zawinski reference. (No points taken back for failing to link to it, even). :) – Jim Dennis Jan 10 '10 at 4:03

jqno, Jan 10, 2010 at 10:06

@Jim I didn't even know it was a Jamie Zawinski quote :). I'll try to remember it from now on. – jqno Jan 10 '10 at 10:06

Jim Dennis, Feb 12, 2010 at 4:15

I find the following trick increasingly useful ... for cases where you want to join lines that match (or that do NOT match) some pattern to the previous line: :% g/foo/-1j or :'a,'z v/bar/-1j for example (where the former is "all lines and matching the pattern" while the latter is "lines between mark a and mark z which fail to match the pattern"). The part after the pattern in a g or v ex command can be any other ex command; -1j is just a relative line movement and join command. – Jim Dennis Feb 12 '10 at 4:15

JustJeff, Feb 27, 2010 at 12:54

of course, if you name your macro '2', then when it comes time to use it, you don't even have to move your finger from the '@' key to the 'q' key. Probably saves 50 to 100 milliseconds every time right there. =P – JustJeff Feb 27 '10 at 12:54

Simon Steele, Apr 1, 2010 at 13:12

@JustJeff Depends entirely on your keyboard layout, my @ key is at the other side of the keyboard from my 2 key. – Simon Steele Apr 1 '10 at 13:12

community wiki, David Pope, Apr 2, 2012 at 7:56

I recently discovered q: . It opens the "command window" and shows your most recent ex-mode (command-mode) commands. You can move as usual within the window, and pressing <CR> executes the command. You can edit, etc. too. Priceless when you're messing around with some complex command or regex and you don't want to retype the whole thing, or if the complex thing you want to do was 3 commands back. It's almost like bash's set -o vi, but for vim itself (heh!).

See :help q: for more interesting bits for going back and forth.

community wiki, 2 revs, 2 users 56%, Feb 27, 2010 at 11:29

I just discovered Vim's omnicompletion the other day, and while I'll admit I'm a bit hazy on which does what, I've had surprisingly good results just mashing either Ctrl + x Ctrl + u or Ctrl + n / Ctrl + p in insert mode. It's not quite IntelliSense, but I'm still learning it.

Try it out! :help ins-completion

community wiki, tfmoraes, Mar 14, 2010 at 19:49

These are not shortcuts, but they are related:
  1. Make capslock an additional ESC (or Ctrl)
  2. map leader to "," (comma), with this command: let mapleader=","

They boost my productivity.
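A minimal .vimrc sketch of the second item (the <leader>w mapping is only a hypothetical example of what you might hang off the new leader):

let mapleader=","
" example: ,w saves the current buffer
nnoremap <leader>w :w<CR>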

Olivier Pons, Mar 15, 2010 at 10:09

Hey nice hint about the "\"! Far better to type "," than "\". – Olivier Pons Mar 15 '10 at 10:09

R. Martinho Fernandes, Apr 1, 2010 at 3:30

To make Caps Lock an additional Esc in Windows (what's a caps lock key for? An "any key"?), try this: web.archive.org/web/20100418005858/http://webpages.charter.net/ – R. Martinho Fernandes Apr 1 '10 at 3:30

Tom Morris, Apr 1, 2010 at 8:45

On Mac, you need PCKeyboardHack - details at superuser.com/questions/34223/ – Tom Morris Apr 1 '10 at 8:45

Jeromy Anglim, Jan 10, 2011 at 4:43

On Windows I use AutoHotKey with Capslock::Escape – Jeromy Anglim Jan 10 '11 at 4:43

community wiki, Costyn, Sep 20, 2010 at 10:34

Another useful vi "shortcut" I frequently use is 'xp'. This will swap the character under the cursor with the next character.

tester, Aug 22, 2011 at 17:19

Xp to go the other way – tester Aug 22 '11 at 17:19

kguest, Aug 27, 2011 at 8:21

Around the time that Windows xp came out, I used to joke that this is the only good use for it. – kguest Aug 27 '11 at 8:21

community wiki, Peter Ellis, Aug 2, 2009 at 9:47

<Ctrl> + W, V to split the screen vertically
<Ctrl> + W, W to shift between the windows

!python % [args] to run the script I am editing in this window

zf in visual mode to fold arbitrary lines

Andrew Scagnelli, Apr 1, 2010 at 2:58

<Ctrl> + W and j/k will let you navigate absolutely (j down, k up, as with normal vim). This is great when you have 3+ splits. – Andrew Scagnelli Apr 1 '10 at 2:58

coder_tim, Jan 30, 2012 at 20:08

+1 for zf in visual mode, I like code folding, but did not know about that. – coder_tim Jan 30 '12 at 20:08

puk, Feb 24, 2012 at 7:00

after bashing my keyboard I have deduced that <C-w>n or <C-w>s is new horizontal window, <C-w>b is bottom right window, <C-w>c or <C-w>q is close window, <C-w>x is increase and then decrease window width (??), <C-w>p is last window, <C-w>backspace is move left(ish) window – puk Feb 24 '12 at 7:00

sjas, Jun 25, 2012 at 0:25

:help ctrl-w FTW... do yourself a favour, and force yourself to try these things for at least 15 minutes! – sjas Jun 25 '12 at 0:25

community wiki, 2 revs, Apr 1, 2010 at 17:00

Visual Mode

As several other people have said, visual mode is the answer to your copy/cut & paste problem. Vim gives you 'v', 'V', and C-v. Lower case 'v' in vim is essentially the same as the shift key in notepad. The nice thing is that you don't have to hold it down. You can use any movement technique to navigate efficiently to the starting (or ending) point of your selection. Then hit 'v', and use efficient movement techniques again to navigate to the other end of your selection. Then 'd' or 'y' allows you to cut or copy that selection.

The advantage vim's visual mode has over Jim Dennis's description of cut/copy/paste in vi is that you don't have to get the location exactly right. Sometimes it's more efficient to use a quick movement to get to the general vicinity of where you want to go and then refine that with other movements than to think up a more complex single movement command that gets you exactly where you want to go.

The downside to using visual mode extensively in this manner is that it can become a crutch that you use all the time which prevents you from learning new vi(m) commands that might allow you to do things more efficiently. However, if you are very proactive about learning new aspects of vi(m), then this probably won't affect you much.

I'll also re-emphasize that the visual line and visual block modes give you variations on this same theme that can be very powerful...especially the visual block mode.

On Efficient Use of the Keyboard

I also disagree with your assertion that alternating hands is the fastest way to use the keyboard. It has an element of truth in it. Speaking very generally, repeated use of the same thing is slow. The most significant example of this principle is that consecutive keystrokes typed with the same finger are very slow. Your assertion probably stems from the natural tendency to use the s/finger/hand/ transformation on this pattern. To some extent it's correct, but at the extremely high end of the efficiency spectrum it's incorrect.

Just ask any pianist. Ask them whether it's faster to play a succession of a few notes alternating hands or using consecutive fingers of a single hand in sequence. The fastest way to type 4 keystrokes is not to alternate hands, but to type them with 4 fingers of the same hand in either ascending or descending order (call this a "run"). This should be self-evident once you've considered this possibility.

The more difficult problem is optimizing for this. It's pretty easy to optimize for absolute distance on the keyboard. Vim does that. It's much harder to optimize at the "run" level, but vi(m) with its modal editing gives you a better chance at being able to do it than any non-modal approach (ahem, emacs) ever could.

On Emacs

Lest the emacs zealots completely disregard my whole post on account of that last parenthetical comment, I feel I must describe the root of the difference between the emacs and vim religions. I've never spoken up in the editor wars and I probably won't do it again, but I've never heard anyone describe the differences this way, so here it goes. The difference is the following tradeoff:

Vim gives you unmatched raw text editing efficiency. Emacs gives you unmatched ability to customize and program the editor.

The blind vim zealots will claim that vim has a scripting language. But it's an obscure, ad-hoc language that was designed to serve the editor. Emacs has Lisp! Enough said. If you don't appreciate the significance of those last two sentences or have a desire to learn enough about functional programming and Lisp to develop that appreciation, then you should use vim.

The emacs zealots will claim that emacs has viper mode, and so it is a superset of vim. But viper mode isn't standard. My understanding is that viper mode is not used by the majority of emacs users. Since it's not the default, most emacs users probably don't develop a true appreciation for the benefits of the modal paradigm.

In my opinion these differences are orthogonal. I believe the benefits of vim and emacs as I have stated them are both valid. This means that the ultimate editor doesn't exist yet. It's probably true that emacs would be the easiest platform on which to base the ultimate editor. But modal editing is not entrenched in the emacs mindset. The emacs community could move that way in the future, but that doesn't seem very likely.

So if you want raw editing efficiency, use vim. If you want the ultimate environment for scripting and programming your editor use emacs. If you want some of both with an emphasis on programmability, use emacs with viper mode (or program your own mode). If you want the best of both worlds, you're out of luck for now.

community wiki, konryd, Mar 31, 2010 at 22:44

Spend 30 mins doing the vim tutorial (run vimtutor instead of vim in a terminal). You will learn the basic movements and some keystrokes; this will make you at least as productive with vim as with the text editor you used before. After that, well, read Jim Dennis' answer again :)

dash-tom-bang, Aug 24, 2011 at 0:47

This is the first thing I thought of when reading the OP. It's obvious that the poster has never run this; I ran through it when first learning vim two years ago and it cemented in my mind the superiority of Vim to any of the other editors I've used (including, for me, Emacs since the key combos are annoying to use on a Mac). – dash-tom-bang Aug 24 '11 at 0:47

community wiki, Johnsyweb, Jan 12, 2011 at 22:52

What is the way you use Vim that makes you more productive than with a contemporary editor?

Being able to execute complex, repetitive edits with very few keystrokes (often using macros). Take a look at VimGolf to witness the power of Vim!

After over ten years of almost daily usage, it's hard to imagine using any other editor.

community wiki, 2 revs, 2 users 67%, Jun 15, 2011 at 13:42

Use \c anywhere in a search to ignore case (overriding your ignorecase or smartcase settings). E.g. /\cfoo or /foo\c will match foo, Foo, fOO, FOO, etc.

Use \C anywhere in a search to force case matching. E.g. /\Cfoo or /foo\C will only match foo.

community wiki, 2 revs, 2 users 67%, Jun 15, 2011 at 13:44

I was surprised to find no one mention the t movement. I frequently use it with parameter lists, in the form of dt, or yt, (delete or yank up to, but not including, the next comma).
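For example, with the cursor on the f of first below, dt, deletes everything up to (but not including) the comma:

foo(first, second)     before, cursor on the f of first
foo(, second)          after dt,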

hhh, Jan 14, 2011 at 17:09

or dfx, dFx, dtx, ytx, etc where x is a char, +1 – hhh Jan 14 '11 at 17:09

dash-tom-bang, Aug 24, 2011 at 0:48

@hhh yep, T t f and F are all pretty regular keys for me to hit... – dash-tom-bang Aug 24 '11 at 0:48

markle976, Mar 30, 2012 at 13:52

Yes! And don't forget ct (change to). – markle976 Mar 30 '12 at 13:52

sjas, Jun 24, 2012 at 23:35

t for teh win!!! – sjas Jun 24 '12 at 23:35

community wiki, 3 revs, May 6, 2012 at 20:50

Odd nobody's mentioned ctags. Download "exuberant ctags" and put it ahead of the crappy preinstalled version you already have in your search path. cd to the root of whatever you're working on; for example the Android kernel distribution. Type "ctags -R ." to build an index of source files anywhere beneath that dir in a file named "tags". This contains all tags, no matter the language or where they live in the tree, in one file, so cross-language work is easy.

Then open vim in that folder and read :help ctags for some commands. A few I use often:
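The standard ones are a safe bet (these are core Vim tag commands, described in that same help file):

Ctrl-]        jump to the definition of the identifier under the cursor
Ctrl-t        jump back to where you were before the tag jump
:tag foo      jump to the tag named foo
:tselect foo  list all matching tags for foo and pick one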

community wiki, 2 revs, 2 users 67%, Feb 27, 2010 at 11:19

Automatic indentation:

gg (go to start of document)
= (indent time!)
shift-g (go to end of document)

You'll need 'filetype plugin indent on' in your .vimrc file, and probably appropriate 'shiftwidth' and 'expandtab' settings.
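For example, a matching .vimrc fragment (the width here is just an illustrative choice):

filetype plugin indent on   " filetype-aware indentation rules
set shiftwidth=4            " indent width used by =, >> and <<
set expandtab               " insert spaces instead of tabs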

xcramps, Aug 28, 2009 at 17:14

Or just use the ":set ai" (auto-indent) facility, which has been in vi since the beginning. – xcramps Aug 28 '09 at 17:14

community wiki, autodidakto, Jul 24, 2010 at 5:41

You asked about productive shortcuts, but I think your real question is: Is vim worth it? The answer to this stackoverflow question is -> "Yes"

You must have noticed two things. Vim is powerful, and vim is hard to learn. Much of its power lies in its expandability and endless combinations of commands. Don't feel overwhelmed. Go slow. One command, one plugin at a time. Don't overdo it.

All that investment you put into vim will pay back a thousand fold. You're going to be inside a text editor for many, many hours before you die. Vim will be your companion.

community wiki, 2 revs, 2 users 67%, Feb 27, 2010 at 11:23

Multiple buffers, and in particular fast jumping between them to compare two files with :bp and :bn (properly remapped to a single Shift + p or Shift + n )

vimdiff mode (splits in two vertical buffers, with colors to show the differences)

Area-copy with Ctrl + v

And finally, tab completion of identifiers (search for "mosh_tab_or_complete"). That's a life changer.

community wiki, David Wolever, Aug 28, 2009 at 16:07

Agreed with the top poster - the :r! command is very useful.

Most often I use it to "paste" things:

:r!cat
(Ctrl-V to paste from the OS clipboard)
^D

This way I don't have to fiddle with :set paste .

R. Martinho Fernandes, Apr 1, 2010 at 3:17

Probably better to set the clipboard option to unnamed ( set clipboard=unnamed in your .vimrc) to use the system clipboard by default. Or if you still want the system clipboard separate from the unnamed register, use the appropriately named clipboard register: "*p . – R. Martinho Fernandes Apr 1 '10 at 3:17

kevpie, Oct 12, 2010 at 22:38

Love it! After being exasperated by pasting code examples from the web just as I was starting to feel proficient in vim, that was the command I dreamed up on the spot. This was when vim totally hooked me. – kevpie Oct 12 '10 at 22:38

Ben Mordecai, Feb 6, 2013 at 19:54

If you're developing on a Mac, Command+C and Command+V copy and paste using the system clipboard, no remap required. – Ben Mordecai Feb 6 '13 at 19:54

David Wolever, Feb 6, 2013 at 20:55

Only with GVim. From the console, pasting without :set paste doesn't work so well if autoindent is enabled. – David Wolever Feb 6 '13 at 20:55

[Oct 21, 2018] What are the dark corners of Vim your mom never told you about?

Notable quotes:
"... Want to look at your :command history? q: Then browse, edit and finally to execute the command. ..."
"... from the ex editor (:), you can do CTRL-f to pop up the command history window. ..."
"... q/ and q? can be used to do a similar thing for your search patterns. ..."
"... adjacent to the one I just edit ..."
Nov 16, 2011 | stackoverflow.com


There are a plethora of questions where people talk about common tricks, notably " Vim+ctags tips and tricks ".

However, I don't mean commonly used shortcuts that someone new to Vim would find cool. I am talking about a seasoned Unix user (be they a developer, administrator, both, etc.) who thinks they know something 99% of us have never heard of or dreamed about. Something that not only makes their work easier, but is also COOL and hackish.

After all, Vim resides in the most dark-corner-rich OS in the world, thus it should have intricacies that only a few privileged know about and want to share with us.

user3218088, Jun 16, 2014 at 9:51

:Sex -- Split window and open integrated file explorer (horizontal split) – user3218088 Jun 16 '14 at 9:51

community wiki, 2 revs, Apr 7, 2009 at 19:04

Might not be one that 99% of Vim users don't know about, but it's something I use daily and that any Linux+Vim poweruser must know.

Basic command, yet extremely useful.

:w !sudo tee %

I often forget to sudo before editing a file I don't have write permissions on. When I come to save that file and get a permission error, I just issue that vim command in order to save the file without the need to save it to a temp file and then copy it back again.

You obviously have to be on a system with sudo installed and have sudo rights.

jm666, May 12, 2011 at 6:09

cmap w!! w !sudo tee % – jm666 May 12 '11 at 6:09

Gerardo Marset, Jul 5, 2011 at 0:49

You should never run sudo vim . Instead you should export EDITOR as vim and run sudoedit . – Gerardo Marset Jul 5 '11 at 0:49

migu, Sep 2, 2013 at 20:42

@maximus: vim replaces % by the name of the current buffer/file. – migu Sep 2 '13 at 20:42

community wiki, Chad Birch, Apr 7, 2009 at 18:09

Something I just discovered recently that I thought was very cool:
:earlier 15m

Reverts the document back to how it was 15 minutes ago. Can take various arguments for the amount of time you want to roll back, and is dependent on undolevels. Can be reversed with the opposite command :later
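The argument accepts the usual time suffixes (see :help :earlier), for example:

:earlier 30s    back 30 seconds
:earlier 2h     back 2 hours
:later 15m      forward 15 minutes, e.g. to undo an overshot :earlier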

ephemient, Apr 8, 2009 at 16:15

@skinp: If you undo and then make further changes from the undone state, you lose that redo history. This lets you go back to a state which is no longer in the undo stack. – ephemient Apr 8 '09 at 16:15

Etienne PIERRE, Jul 21, 2009 at 13:53

Also very useful is g+ and g- to go backward and forward in time. This is so much more powerful than an undo/redo stack since you don't lose the history when you do something after an undo. – Etienne PIERRE Jul 21 '09 at 13:53

Ehtesh Choudhury, Nov 29, 2011 at 12:09

You don't lose the redo history if you make a change after an undo. It's just not easily accessed. There are plugins to help you visualize this, like Gundo.vim – Ehtesh Choudhury Nov 29 '11 at 12:09

Igor Popov, Dec 29, 2011 at 6:59

Wow, so now I can just do :later 8h and I'm done for today? :P – Igor Popov Dec 29 '11 at 6:59

Ring Ø, Jul 11, 2014 at 5:14

Your command assumes one will spend at least 15 minutes in vim ! – Ring Ø Jul 11 '14 at 5:14

community wiki, 2 revs, 2 users 92%, Mar 31, 2016 at 17:54

:! [command] executes an external command while you're in Vim.

But add a dot after the colon, :.! [command], and it'll dump the output of the command into your current window. That's : . !

For example:

:.! ls

I use this a lot for things like adding the current date into a document I'm typing:

:.! date

saffsd, May 6, 2009 at 14:41

This is quite similar to :r! The only difference as far as I can tell is that :r! opens a new line, :.! overwrites the current line. – saffsd May 6 '09 at 14:41

hlovdal, Jan 25, 2010 at 21:11

An alternative to :.!date is to write "date" on a line and then run !$sh (alternatively having the command followed by a blank line and run !jsh ). This will pipe the line to the "sh" shell and substitute with the output from the command. – hlovdal Jan 25 '10 at 21:11

Nefrubyr, Mar 25, 2010 at 16:24

:.! is actually a special case of :{range}!, which filters a range of lines (the current line when the range is . ) through a command and replaces those lines with the output. I find :%! useful for filtering whole buffers. – Nefrubyr Mar 25 '10 at 16:24

jabirali, Jul 13, 2010 at 4:30

@sundar: Why pass a line to sed, when you can use the similar built-in ed / ex commands? Try running :.s/old/new/g ;-) – jabirali Jul 13 '10 at 4:30

aqn, Apr 26, 2013 at 20:52

And also note that '!' is like 'y', 'd', 'c' etc. i.e. you can do: !!, number!!, !motion (e.g. !Gshell_command<cr> replace from current line to end of file ('G') with output of shell_command). – aqn Apr 26 '13 at 20:52

community wiki, 2 revs, Apr 8, 2009 at 12:17

Not exactly obscure, but there are several "delete in" commands which are extremely useful, like..

Others can be found on :help text-objects

sjh, Apr 8, 2009 at 15:33

dab "delete arounb brackets", daB for around curly brackets, t for xml type tags, combinations with normal commands are as expected cib/yaB/dit/vat etc – sjh Apr 8 '09 at 15:33

Don Reba, Apr 13, 2009 at 21:41

@Masi: yi(va(p deletes only the brackets – Don Reba Apr 13 '09 at 21:41

thomasrutter, Apr 26, 2009 at 11:11

This is possibly the biggest reason for me staying with Vim. That and its equivalent "change" commands: ciw, ci(, ci", as well as dt<space> and ct<space> – thomasrutter Apr 26 '09 at 11:11

Roger Pate, Oct 12, 2010 at 16:40

@thomasrutter: Why not dW/cW instead of dt<space>? –

Roger Pate, Oct 12, 2010 at 16:43

@Masi: With the surround plugin: ds(. –

community wiki, 9 revs, 9 users 84%, ultraman, Apr 21, 2017 at 14:06

de -- delete everything till the end of the word; press . to repeat it to your heart's desire.

ci(xyz[Esc] -- This is a weird one. Here, the 'i' does not mean insert mode. Instead it means inside the parentheses. So this sequence cuts the text inside the parentheses you're standing in and replaces it with "xyz". It also works inside square and curly brackets -- just do ci[ or ci{ correspondingly. Naturally, you can do di( if you just want to delete all text without typing anything. You can also do a instead of i if you want to delete the parentheses as well and not just the text inside them.

ci" - cuts the text in current quotes

ciw - cuts the current word. This works just like the previous one except that ( is replaced with w .

C - cut the rest of the line and switch to insert mode.

ZZ -- save and close current file (WAY faster than Ctrl-F4 to close the current tab!)

ddp - move current line one row down

xp -- move current character one position to the right

U - uppercase, so viwU upercases the word

~ - switches case, so viw~ will reverse casing of entire word

Ctrl+u / Ctrl+d scroll the page half-a-screen up or down. This seems to be more useful than the usual full-screen paging as it makes it easier to see how the two screens relate. For those who still want to scroll an entire screen at a time there's Ctrl+f for Forward and Ctrl+b for Backward. Ctrl+E and Ctrl+Y scroll down or up one line at a time.

A crazy but very useful command is zz -- it scrolls the screen to make this line appear in the middle. This is excellent for putting the piece of code you're working on in the center of your attention. Sibling commands -- zt and zb -- make this line the top or the bottom one on the screen, which is not quite as useful.

% finds and jumps to the matching parenthesis.

de -- delete from cursor to the end of the word (you can also do dE to delete until the next space)

bde -- delete the current word, from left to right delimiter

df[space] -- delete up until and including the next space

dt. -- delete until next dot

dd -- delete this entire line

ye (or yE) -- yanks text from here to the end of the word

ce - cuts through the end of the word

bye -- copies current word (makes me wonder what "hi" does!)

yy -- copies the current line

cc -- cuts the current line; you can also do S instead. There's also lowercase s which cuts the current character and switches to insert mode.

viwy or viwc . Yank or change current word. Hit w multiple times to keep selecting each subsequent word, use b to move backwards

vi{ - select all text in curly brackets. va{ - select all text including the {}s

vi(p - highlight everything inside the ()s and replace with the pasted text

b and e move the cursor word-by-word, similarly to how Ctrl+Arrows normally do. The definition of word is a little different though, as several consecutive delimiters are treated as one word. If you start at the middle of a word, pressing b will always get you to the beginning of the current word, and each consecutive b will jump to the beginning of the next word. Similarly, and easy to remember, e gets the cursor to the end of the current, and each subsequent, word.

Similar to b / e, capital B and E move the cursor word-by-word using only whitespace as delimiters.

capital D (take a deep breath) Deletes the rest of the line to the right of the cursor, same as Shift+End/Del in normal editors (notice 2 keypresses -- Shift+D -- instead of 3)

Nick Lewis, Jul 17, 2009 at 16:41

zt is quite useful if you use it at the start of a function or class definition. – Nick Lewis Jul 17 '09 at 16:41

Nathan Fellman, Sep 7, 2009 at 8:27

vity and vitc can be shortened to yit and cit respectively. – Nathan Fellman Sep 7 '09 at 8:27

Laurence Gonsalves, Feb 19, 2011 at 23:49

All the things you're calling "cut" is "change". eg: C is change until the end of the line. Vim's equivalent of "cut" is "delete", done with d/D. The main difference between change and delete is that delete leaves you in normal mode but change puts you into a sort of insert mode (though you're still in the change command which is handy as the whole change can be repeated with . ). – Laurence Gonsalves Feb 19 '11 at 23:49

Almo, May 29, 2012 at 20:09

I thought this was for a list of things that not many people know. yy is very common, I would have thought. – Almo May 29 '12 at 20:09

Andrea Francia, Jul 3, 2012 at 20:50

bye does not work when you are in the first character of the word. yiw always does. – Andrea Francia Jul 3 '12 at 20:50

community wiki, 2 revs, 2 users 83%, Sep 17, 2010 at 16:55

One that I rarely find in most Vim tutorials, but it's INCREDIBLY useful (at least to me), is the

g; and g,

to move (forward, backward) through the changelist.

Let me show how I use it. Sometimes I need to copy and paste a piece of code or string, say a hex color code in a CSS file, so I search, jump (not caring where the match is), copy it and then jump back (g;) to where I was editing the code to finally paste it. No need to create marks. Simpler.

Just my 2cents.

aehlke, Feb 12, 2010 at 1:19

similarly, '. will go to the last edited line, and `. will go to the last edited position – aehlke Feb 12 '10 at 1:19

Kimball Robinson, Apr 16, 2010 at 0:29

Ctrl-O and Ctrl-I (tab) will work similarly, but not the same. They move backward and forward in the "jump list", which you can view by doing :jumps or :ju For more information do a :help jumplist – Kimball Robinson Apr 16 '10 at 0:29

Kimball Robinson, Apr 16, 2010 at 0:30

You can list the change list by doing :changes – Kimball Robinson Apr 16 '10 at 0:30

Wayne Werner, Jan 30, 2013 at 14:49

Hot dang that's useful. I use <C-o> / <C-i> for this all the time - or marking my place. – Wayne Werner Jan 30 '13 at 14:49

community wiki, 4 revs, 4 users 36%, May 5, 2014 at 13:06

:%!xxd

Make vim into a hex editor.

:%!xxd -r

Revert.

Warning: If you don't edit with binary (-b), you might damage the file. – Josh Lee in the comments.
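A typical round trip therefore looks like this (a sketch; the middle step is whatever hex editing you need to do):

vim -b datafile     open the file in binary mode, from the shell
:%!xxd              filter the buffer into a hex dump
                    ... edit the hex ...
:%!xxd -r           convert the dump back to binary
:w                  write the result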

Christian, Jul 7, 2009 at 19:11

And how do you revert it back? – Christian Jul 7 '09 at 19:11

Naga Kiran, Jul 8, 2009 at 13:46

:!xxd -r //To revert back from HEX – Naga Kiran Jul 8 '09 at 13:46

Andreas Grech, Nov 14, 2009 at 10:37

I actually think it's :%!xxd -r to revert it back – Andreas Grech Nov 14 '09 at 10:37

dotancohen, Jun 7, 2013 at 5:50

@JoshLee: If one is careful not to traverse newlines, is it safe to not use the -b option? I ask because sometimes I want to make a hex change, but I don't want to close and reopen the file to do so. – dotancohen Jun 7 '13 at 5:50

Bambu, Nov 23, 2014 at 23:58

@dotancohen: If you don't want to close/reopen the file you can do :set binary – Bambu Nov 23 '14 at 23:58

community wiki, AaronS, Jan 12, 2011 at 20:03

gv

Reselects last visual selection.

community wiki, 3 revs, 2 users 92%, Jul 7, 2014 at 19:10

Sometimes a setting in your .vimrc will get overridden by a plugin or autocommand. To debug this a useful trick is to use the :verbose command in conjunction with :set. For example, to figure out where cindent got set/unset:
:verbose set cindent?

This will output something like:

cindent
    Last set from /usr/share/vim/vim71/indent/c.vim

This also works with maps and highlights. (Thanks joeytwiddle for pointing this out.) For example:

:verbose nmap U
n  U             <C-R>
        Last set from ~/.vimrc

:verbose highlight Normal
Normal         xxx guifg=#dddddd guibg=#111111 font=Inconsolata Medium 14
        Last set from ~/src/vim-holodark/colors/holodark.vim

Artem Russakovskii, Oct 23, 2009 at 22:09

Excellent tip - exactly what I was looking for today. – Artem Russakovskii Oct 23 '09 at 22:09

joeytwiddle, Jul 5, 2014 at 22:08

:verbose can also be used before nmap l or highlight Normal to find out where the l keymap or the Normal highlight were last defined. Very useful for debugging! – joeytwiddle Jul 5 '14 at 22:08

SidOfc, Sep 24, 2017 at 11:26

When you get into creating custom mappings, this will save your ass so many times, probably one of the most useful ones here (IMO)! – SidOfc Sep 24 '17 at 11:26

community wiki, 3 revs, 3 users 70%, May 31, 2015 at 19:30

Not sure if this counts as dark-corner-ish at all, but I've only just learnt it...
:g/match/y A

will yank (copy) all lines containing "match" into the "a / @a register. (The capitalization as A makes vim append yankings instead of replacing the previous register contents.) I used it a lot recently when making Internet Explorer stylesheets.

tsukimi, May 27, 2012 at 6:17

You can use :g! to find lines that don't match a pattern e.x. :g!/set/normal dd (delete all lines that don't contain set) – tsukimi May 27 '12 at 6:17

pandubear, Oct 12, 2013 at 8:39

Sometimes it's better to do what tsukimi said and just filter out lines that don't match your pattern. An abbreviated version of that command though: :v/PATTERN/d Explanation: :v is an abbreviation for :g!, and the :g command applies any ex command to lines. :y[ank] works and so does :normal, but here the most natural thing to do is just :d[elete] . – pandubear Oct 12 '13 at 8:39

Kimball Robinson, Feb 5, 2016 at 17:58

You can also do :g/match/normal "Ayy -- the normal keyword lets you tell it to run normal-mode commands (which you are probably more familiar with). – Kimball Robinson Feb 5 '16 at 17:58

community wiki, 2 revs, 2 users 80%, Apr 5, 2013 at 15:55

:%TOhtml Creates an html rendering of the current file.

kenorb, Feb 19, 2015 at 11:27

Related: How to convert a source code file into HTML? at Vim SE – kenorb Feb 19 '15 at 11:27

community wiki, 2 revs, 2 users 86%, May 11, 2011 at 19:30

Want to look at your :command history? q: Then browse, edit and finally hit <CR> to execute the command.

Ever make similar changes to two files and switch back and forth between them? (Say, source and header files?)

:set hidden
:map <TAB> :e#<CR>

Then tab back and forth between those files.

Josh Lee, Sep 22, 2009 at 16:58

I hit q: by accident all the time... – Josh Lee Sep 22 '09 at 16:58

Jason Down, Oct 6, 2009 at 4:14

Alternatively, from the ex editor (:), you can do CTRL-f to pop up the command history window. – Jason Down Oct 6 '09 at 4:14

bradlis7, Mar 23, 2010 at 17:10

@jleedev me too. I almost hate this command, just because I use it accidentally way too much. – bradlis7 Mar 23 '10 at 17:10

bpw1621, Feb 19, 2011 at 15:01

q/ and q? can be used to do a similar thing for your search patterns. – bpw1621 Feb 19 '11 at 15:01

idbrii, Feb 23, 2011 at 19:07

Hitting <C-f> after : or / (or any time you're in command mode) will bring up the same history menu. So you can remap q: if you hit it accidentally a lot and still access this awesome mode. – idbrii Feb 23 '11 at 19:07

community wiki, 2 revs, 2 users 89%, Jun 4, 2014 at 14:52

Vim will open a URL, for example
vim http://stackoverflow.com/

Nice when you need to pull up the source of a page for reference.

Ivan Vučica, Sep 21, 2010 at 8:07

For me it didn't open the source; instead it apparently used elinks to dump rendered page into a buffer, and then opened that. – Ivan Vučica Sep 21 '10 at 8:07

Thomas, Apr 19, 2013 at 21:00

Works better with a slash at the end. Neat trick! – Thomas Apr 19 '13 at 21:00

Isaac Remuant, Jun 3, 2013 at 15:23

@Vdt: It'd be useful if you posted your error. If it's this one: " error (netrw) neither the wget nor the fetch command is available" you obviously need to make one of those tools available from your PATH environment variable. – Isaac Remuant Jun 3 '13 at 15:23

Dettorer, Oct 29, 2014 at 13:47

I find this one particularly useful when people send links to a paste service and forgot to select a syntax highlighting, I generally just have to open the link in vim after appending "&raw". – Dettorer Oct 29 '14 at 13:47

community wiki, 2 revs, 2 users 94%, Jan 20, 2015 at 23:14

Macros can call other macros, and can also call themselves.

eg:

qq0dwj@qq@q

...will delete the first word from every line until the end of the file.

This is quite a simple example but it demonstrates a very powerful feature of vim
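Broken down keystroke by keystroke (note that register q should be empty before you start, e.g. by typing qqq first, otherwise the recursive call replays stale contents while you are still recording):

qq    start recording into register q
0     go to the start of the line
dw    delete the first word
j     move down one line
@q    invoke the macro itself (recorded, but a no-op while q is empty)
q     stop recording
@q    run it; the recursion stops when j fails on the last line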

Kimball Robinson, Apr 16, 2010 at 0:39

I didn't know macros could repeat themselves. Cool. Note: qx starts recording into register x (he uses qq for register q). 0 moves to the start of the line. dw delets a word. j moves down a line. @q will run the macro again (defining a loop). But you forgot to end the recording with a final "q", then actually run the macro by typing @q. – Kimball Robinson Apr 16 '10 at 0:39

Yktula, Apr 18, 2010 at 5:32

I think that's intentional, as a nested and recursive macro. – Yktula Apr 18 '10 at 5:32

Gerardo Marset, Jul 5, 2011 at 1:38

qqqqqifuu<Esc>h@qq@q – Gerardo Marset Jul 5 '11 at 1:38

Nathan Long, Aug 29, 2011 at 15:33

Another way of accomplishing this is to record a macro in register a that does some transformation to a single line, then linewise highlight a bunch of lines with V and type :normal! @a to apply your macro to every line in your selection. – Nathan Long Aug 29 '11 at 15:33

dotancohen, May 14, 2013 at 6:00

I found this post googling recursive VIM macros. I could find no way to stop the macro other than killing the VIM process. – dotancohen May 14 '13 at 6:00

community wiki, Brian Carper, Apr 8, 2009 at 1:15

Assuming you have Perl and/or Ruby support compiled in, :rubydo and :perldo will run a Ruby or Perl one-liner on every line in a range (defaults to entire buffer), with $_ bound to the text of the current line (minus the newline). Manipulating $_ will change the text of that line.

You can use this to do certain things that are easy to do in a scripting language but not so obvious using Vim builtins. For example to reverse the order of the words in a line:

:perldo $_ = join ' ', reverse split

To insert a random string of 8 characters (A-Z) at the end of every line:

:rubydo $_ += ' ' + (1..8).collect{('A'..'Z').to_a[rand 26]}.join

You are limited to acting on one line at a time and you can't add newlines.

Sujoy, May 6, 2009 at 18:27

what if i only want perldo to run on a specified line? or a selected few lines? – Sujoy May 6 '09 at 18:27

Brian Carper, May 6, 2009 at 18:52

You can give it a range like any other command. For example :1,5perldo will only operate on lines 1-5. – Brian Carper May 6 '09 at 18:52

Greg, Jul 2, 2009 at 16:41

Could you do $_ += '\nNEWLINE!!!' to get a newline after the current one? – Greg Jul 2 '09 at 16:41

Brian Carper, Jul 2, 2009 at 17:26

Sadly not, it just adds a funky control character to the end of the line. You could then use a Vim search/replace to change all those control characters to real newlines though. – Brian Carper Jul 2 '09 at 17:26

Derecho, Mar 14, 2014 at 8:48

Similarly, pydo and py3do work for python if you have the required support compiled in. – Derecho Mar 14 '14 at 8:48

community wiki, 4 revs, Jul 28, 2009 at 19:05

^O and ^I

Go to older/newer position. When you are moving through the file (by searching, movement commands, etc.) vim remembers these "jumps", so you can repeat them backward (^O - O for old) and forward (^I - just next to O on the keyboard). I find it very useful when writing code and performing a lot of searches.

gi

Go to the position where Insert mode was last stopped. I find myself often editing and then searching for something. To return to the place where I was editing, press gi.

gf

put cursor on file name (e.g. include header file), press gf and the file is opened

gF

similar to gf but recognizes format "[file name]:[line number]". Pressing gF will open [file name] and set cursor to [line number].

^P and ^N

Auto complete text while editing (^P - previous match and ^N next match)

^X^L

While editing, this completes to a whole matching line (useful for programming). You write code and then you recall that you have the same code somewhere in the file. Just press ^X^L and the full line is completed.

^X^F

Complete file names. You write "/etc/pass" Hmm. You forgot the file name. Just press ^X^F and the filename is completed

^Z or :sh

Move temporarily to the shell. If you need a quick shell, press ^Z to suspend Vim (bring it back with fg), or run :sh and type exit to return.

sehe, Mar 4, 2012 at 21:50

With ^X^F my pet peeve is that filenames include = signs, making it do rotten things in many occasions (ini files, makefiles etc). I use se isfname-== to end that nuisance – sehe Mar 4 '12 at 21:50

joeytwiddle, Jul 5, 2014 at 22:10

+1 the built-in autocomplete is just sitting there waiting to be discovered. – joeytwiddle Jul 5 '14 at 22:10

community wiki, 2 revs, Apr 7, 2009 at 18:59

This is a nice trick to reopen the current file with a different encoding:
:e ++enc=cp1250 %:p

Useful when you have to work with legacy encodings. The supported encodings are listed in a table under encoding-values (see help encoding-values ). Similar thing also works for ++ff, so that you can reopen file with Windows/Unix line ends if you get it wrong for the first time (see help ff ).

Sasha, Apr 7, 2009 at 18:43

Never had to use this sort of a thing, but I'll certainly add it to my arsenal of tricks... – Sasha Apr 7 '09 at 18:43

Adriano Varoli Piazza, Apr 7, 2009 at 18:44

great tip, thanks. For bonus points, add a list of common valid encodings. – Adriano Varoli Piazza Apr 7 '09 at 18:44

Ivan Vučica, Jul 8, 2009 at 19:29

I have used this today, but I think I didn't need to specify "%:p"; just opening the file and :e ++enc=cp1250 was enough. – Ivan Vučica Jul 8 '09 at 19:29

laz, Jul 8, 2009 at 19:32

would :set encoding=cp1250 have the same effect? – laz Jul 8 '09 at 19:32

intuited, Jun 4, 2010 at 2:51

`:e +b %' is similarly useful for reopening in binary mode (no munging of newlines) – intuited Jun 4 '10 at 2:51

community wiki, 4 revs, 3 users 48%, Nov 6, 2012 at 8:32

" insert range ip's
"
"          ( O O )
" =======oOO=(_)==OOo======

:for i in range(1,255) | .put='10.0.0.'.i | endfor

Ryan Edwards, Nov 16, 2011 at 0:42

I don't see what this is good for (besides looking like a joke answer). Can anybody else enlighten me? – Ryan Edwards Nov 16 '11 at 0:42

Codygman, Nov 6, 2012 at 8:33

open vim and then do ":for i in range(1,255) | .put='10.0.0.'.i | endfor" – Codygman Nov 6 '12 at 8:33

Ruslan, Sep 30, 2013 at 10:30

@RyanEdwards filling /etc/hosts maybe – Ruslan Sep 30 '13 at 10:30

dotancohen, Nov 30, 2014 at 14:56

This is a terrific answer. Not the bit about creating the IP addresses, but the bit that implies that VIM can use for loops in commands . – dotancohen Nov 30 '14 at 14:56

BlackCap, Aug 31, 2017 at 7:54

Without ex-mode: i10.0.0.1<Esc>Y254p$<C-v>}g<C-a> – BlackCap Aug 31 '17 at 7:54

community wiki, 2 revs, Aug 6, 2010 at 0:30

Typing == will correct the indentation of the current line based on the line above.

Actually, you can do one = sign followed by any movement command. = {movement}

For example, you can use the % movement which moves between matching braces. Position the cursor on the { in the following code:

if (thisA == that) {
//not indented
if (some == other) {
x = y;
}
}

And press =% to instantly get this:

if (thisA == that) {
    //not indented
    if (some == other) {
        x = y;
    }
}

Alternately, you could do =a{ within the code block, rather than positioning yourself right on the { character.

Ehtesh Choudhury, May 2, 2011 at 0:48

Hm, I didn't know this about the indentation. – Ehtesh Choudhury May 2 '11 at 0:48

sehe, Mar 4, 2012 at 22:03

No need, usually, to be exactly on the braces. Though frequently I'd just =} or vaBaB= because it is less dependent. Also, v}}:!astyle -bj matches my code style better, but I can get it back into your style with a simple %!astyle -aj – sehe Mar 4 '12 at 22:03

kyrias, Oct 19, 2013 at 12:12

gg=G is quite neat when pasting in something. – kyrias Oct 19 '13 at 12:12

kenorb, Feb 19, 2015 at 11:30

Related: Re-indenting badly indented code at Vim SE – kenorb Feb 19 '15 at 11:30

Braden Best, Feb 4, 2016 at 16:16

@kyrias Oh, I've been doing it like ggVG= . – Braden Best Feb 4 '16 at 16:16

community wiki, Trumpi, Apr 19, 2009 at 18:33

imap jj <esc>

hasen, Jun 12, 2009 at 6:08

how will you type jj then? :P – hasen Jun 12 '09 at 6:08

ojblass, Jul 5, 2009 at 18:29

How often do you type jj? In English at least? – ojblass Jul 5 '09 at 18:29

Alex, Oct 5, 2009 at 5:32

I remapped capslock to esc instead, as it's an otherwise useless key. My mapping was OS wide though, so it has the added benefit of never having to worry about accidentally hitting it. The only drawback IS ITS HARDER TO YELL AT PEOPLE. :) – Alex Oct 5 '09 at 5:32

intuited, Jun 4, 2010 at 4:18

@Alex: definitely, capslock is death. "wait, wtf? oh, that was ZZ?....crap." – intuited Jun 4 '10 at 4:18

brianmearns, Oct 3, 2012 at 12:45

@ojblass: Not sure how many people ever write MATLAB code in Vim, but ii and jj are commonly used for counter variables, because i and j are reserved for complex numbers. – brianmearns Oct 3 '12 at 12:45

community wiki, 4 revs, 3 users 71%, Feb 12, 2015 at 15:55

Let's see some pretty little IDE editor do column transposition.
:%s/\(.*\)^I\(.*\)/\2^I\1/

Explanation

\( and \) is how to remember stuff in regex-land. And \1, \2 etc is how to retrieve the remembered stuff.

>>> \(.*\)^I\(.*\)

Remember everything followed by ^I (tab) followed by everything.

>>> \2^I\1

Replace the above stuff with "2nd stuff you remembered" followed by "1st stuff you remembered" - essentially doing a transpose.

chaos, Apr 7, 2009 at 18:33

Switches a pair of tab-separated columns (separator arbitrary, it's all regex) with each other. – chaos Apr 7 '09 at 18:33

rlbond, Apr 26, 2009 at 4:11

This is just a regex; plenty of IDEs have regex search-and-replace. – rlbond Apr 26 '09 at 4:11

romandas, Jun 19, 2009 at 16:58

@rlbond - It comes down to how good is the regex engine in the IDE. Vim's regexes are pretty powerful; others.. not so much sometimes. – romandas Jun 19 '09 at 16:58

Kimball Robinson, Apr 16, 2010 at 0:32

The * will be greedy, so this regex assumes you have just two columns. If you want it to be nongreedy use {-} instead of * (see :help non-greedy for more information on the {} multiplier) – Kimball Robinson Apr 16 '10 at 0:32

mk12, Jun 22, 2012 at 17:31

This is actually a pretty simple regex, it's only escaping the group parentheses that makes it look complicated. – mk12 Jun 22 '12 at 17:31

community wiki, KKovacs, Apr 11, 2009 at 7:14

Not exactly a dark secret, but I like to put the following mapping into my .vimrc file, so I can hit "-" (minus) anytime to open the file explorer to show files adjacent to the one I just edited. In the file explorer, I can hit another "-" to move up one directory, providing seamless browsing of complex directory structures (like the ones used by the MVC frameworks nowadays):
map - :Explore<cr>

These may be also useful for somebody. I like to scroll the screen and advance the cursor at the same time:

map <c-j> j<c-e>
map <c-k> k<c-y>

Tab navigation - I love tabs and I need to move easily between them:

map <c-l> :tabnext<enter>
map <c-h> :tabprevious<enter>

Only on Mac OS X: Safari-like tab navigation:

map <S-D-Right> :tabnext<cr>
map <S-D-Left> :tabprevious<cr>

Roman Plášil, Oct 1, 2009 at 21:33

You can also browse files within Vim itself, using :Explore – Roman Plášil Oct 1 '09 at 21:33

KKovacs, Oct 15, 2009 at 15:20

Hi Roman, this is exactly what this mapping does, but assigns it to a "hot key". :) – KKovacs Oct 15 '09 at 15:20

community wiki, rampion, Apr 7, 2009 at 20:11

Often, I like changing current directories while editing - so I have to specify paths less.
cd %:h

Leonard, May 8, 2009 at 1:54

What does this do? And does it work with autochdir? – Leonard May 8 '09 at 1:54

rampion, May 8, 2009 at 2:55

I suppose it would override autochdir temporarily (until you switched buffers again). Basically, it changes directory to the root directory of the current file. It gives me a bit more manual control than autochdir does. – rampion May 8 '09 at 2:55

Naga Kiran, Jul 8, 2009 at 13:44

:set autochdir //this also serves the same functionality and it changes the current directory to that of file in buffer – Naga Kiran Jul 8 '09 at 13:44

community wiki, 4 revs, Jul 21, 2009 at 1:12

I like to use 'sudo bash', and my sysadmin hates this. He locked down 'sudo' so it could only be used with a handful of commands (ls, chmod, chown, vi, etc), but I was able to use vim to get a root shell anyway:
bash$ sudo vi +'silent !bash' +q
Password: ******
root#

RJHunter, Jul 21, 2009 at 0:53

FWIW, sudoedit (or sudo -e) edits privileged files but runs your editor as your normal user. – RJHunter Jul 21 '09 at 0:53

sundar, Sep 23, 2009 at 9:41

@OP: That was cunning. :) – sundar Sep 23 '09 at 9:41

jnylen, Feb 22, 2011 at 15:58

yeah... I'd hate you too ;) you should only need a root shell VERY RARELY, unless you're already in the habit of running too many commands as root which means your permissions are all screwed up. – jnylen Feb 22 '11 at 15:58

d33tah, Mar 30, 2014 at 17:50

Why does your sysadmin even give you root? :D – d33tah Mar 30 '14 at 17:50

community wiki, Taurus Olson, Apr 7, 2009 at 21:11

I often use many windows when I work on a project and sometimes I need to resize them. Here's what I use:
map + <C-W>+
map - <C-W>-

These mappings allow you to increase and decrease the size of the current window. It's quite simple but it's fast.

Bill Lynch, Apr 8, 2009 at 2:49

There's also Ctrl-W =, which makes the windows equal width. – Bill Lynch Apr 8 '09 at 2:49

joeytwiddle, Jan 29, 2012 at 18:12

Don't forget you can prepend numbers to perform an action multiple times in Vim. So to expand the current window height by 8 lines: 8<C-W>+ – joeytwiddle Jan 29 '12 at 18:12

community wiki, Roberto Bonvallet, May 6, 2009 at 7:38

:r! <command>

pastes the output of an external command into the buffer.

Do some math and get the result directly in the text:

:r! echo $((3 + 5 + 8))

Get the list of files to compile when writing a Makefile:

:r! ls *.c

Don't look up that fact you read on wikipedia, have it directly pasted into the document you are writing:

:r! lynx -dump http://en.wikipedia.org/wiki/Whatever

Sudhanshu, Jun 7, 2010 at 8:40

^R=3+5+8 in insert mode will let you insert the value of the expression (3+5+8) in text with fewer keystrokes. – Sudhanshu Jun 7 '10 at 8:40

dcn, Mar 27, 2011 at 10:13

How can I get the result/output to a different buffer than the current? – dcn Mar 27 '11 at 10:13

kenorb, Feb 19, 2015 at 11:31

Related: How to dump output from external command into editor? at Vim SE – kenorb Feb 19 '15 at 11:31

community wiki, jqno, Jul 8, 2009 at 19:19

Map F5 to quickly ROT13 your buffer:
map <F5> ggg?G``

You can use it as a boss key :).
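For the curious, the mapping decomposes as:

gg    jump to the first line
g?G   ROT13-encode everything from there to the last line
``    jump back to where the cursor was before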

sehe, Mar 4, 2012 at 21:57

I don't know what you are writing... But surely, my boss would be more curious when he saw me write ROT13 jumble :) – sehe Mar 4 '12 at 21:57

romeovs, Jun 19, 2014 at 19:22

or to spoof your friends: nmap i ggg?G`` . Or the diabolical: nmap i ggg?G``i ! – romeovs Jun 19 '14 at 19:22

Amit Gold, Aug 7, 2016 at 10:14

@romeovs 2nd one is infinite loop, use nnoremap – Amit Gold Aug 7 '16 at 10:14

community wiki, mohi666, Mar 4, 2011 at 2:20

Not an obscure feature, but very useful and time saving.

If you want to save a session of your open buffers, tabs, marks and other settings, you can issue the following:

:mksession session.vim

You can open your session using:

vim -S session.vim

TankorSmash, Nov 3, 2012 at 13:45

You can also :so session.vim inside vim. – TankorSmash Nov 3 '12 at 13:45

community wiki, Grant Limberg, May 11, 2009 at 21:59

I just found this one today via NSFAQ :

Comment blocks of code.

Enter Blockwise Visual mode by hitting CTRL-V.

Mark the block you wish to comment.

Hit I (capital I) and enter your comment string at the beginning of the line. (// for C++)

Hit ESC and all lines selected will have // prepended to the front of the line.

Neeraj Singh, Jun 17, 2009 at 16:56

I added # to comment out a block of code in ruby. How do I undo it? – Neeraj Singh Jun 17 '09 at 16:56

Grant Limberg, Jun 17, 2009 at 19:29

well, if you haven't done anything else to the file, you can simply type u for undo. Otherwise, I haven't figured that out yet. – Grant Limberg Jun 17 '09 at 19:29

nos, Jul 28, 2009 at 20:00

You can just hit ctrl+v again, mark the //'s and hit x to "uncomment" – nos Jul 28 '09 at 20:00

ZyX, Mar 7, 2010 at 14:18

I use NERDCommenter for this. – ZyX Mar 7 '10 at 14:18

Braden Best, Feb 4, 2016 at 16:23

Commented out code is probably one of the worst types of comment you could possibly put in your code. There are better uses for the awesome block insert. – Braden Best Feb 4 '16 at 16:23

community wiki, 2 revs, 2 users 84%, Ian H, Jul 3, 2015 at 23:44

I use vim for just about any text editing I do, so I often use copy and paste. The problem is that vim by default will often distort pasted text. The way to stop this is to use
:set paste

before pasting in your data. This will keep it from messing up.

Note that you will have to issue :set nopaste to recover auto-indentation. Alternative ways of pasting pre-formatted text are the clipboard registers ( * and + ), and :r!cat (you will have to end the pasted fragment with ^D).
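A convenient middle ground is the pastetoggle option, which dedicates a key to flipping paste mode on and off (F2 here is just an example choice):

set pastetoggle=<F2>   " press F2 before pasting, and again afterwards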

It is also sometimes helpful to turn on a high contrast color scheme. This can be done with

:color blue

I've noticed that it does not work on all the versions of vim I use but it does on most.

jamessan, Dec 28, 2009 at 8:27

The "distortion" is happening because you have some form of automatic indentation enabled. Using set paste or specifying a key for the pastetoggle option is a common way to work around this, but the same effect can be achieved with set mouse=a as then Vim knows that the flood of text it sees is a paste triggered by the mouse. – jamessan Dec 28 '09 at 8:27

kyrias, Oct 19, 2013 at 12:15

If you have gvim installed you can often (though it depends on what your options your distro compiles vim with) use the X clipboard directly from vim through the * register. For example "*p to paste from the X xlipboard. (It works from terminal vim, too, it's just that you might need the gvim package if they're separate) – kyrias Oct 19 '13 at 12:15

Braden Best, Feb 4, 2016 at 16:26

@kyrias for the record, * is the PRIMARY ("middle-click") register. The clipboard is + – Braden Best Feb 4 '16 at 16:26

community wiki, viraptor, Apr 7, 2009 at 22:29

Here's something not obvious. If you have a lot of custom plugins / extensions in your $HOME and you need to work from su / sudo / ... sometimes, then this might be useful.

In your ~/.bashrc:

export VIMINIT=":so $HOME/.vimrc"

In your ~/.vimrc:

if $HOME=='/root'
        if $USER=='root'
                if isdirectory('/home/your_typical_username')
                        let rtuser = 'your_typical_username'
                elseif isdirectory('/home/your_other_username')
                        let rtuser = 'your_other_username'
                endif
        else
                let rtuser = $USER
        endif
        let &runtimepath = substitute(&runtimepath, $HOME, '/home/'.rtuser, 'g')
endif

It will allow your local plugins to load - whatever way you use to change the user.

You might also like to take the *.swp files out of your current path and into ~/vimtmp (this goes into .vimrc):

if ! isdirectory(expand('~/vimtmp'))
   call mkdir(expand('~/vimtmp'))
endif
if isdirectory(expand('~/vimtmp'))
   set directory=~/vimtmp
else
   set directory=.,/var/tmp,/tmp
endif

Also, some mappings I use to make editing easier - makes ctrl+s work like escape and ctrl+h/l switch the tabs:

inoremap <C-s> <ESC>
vnoremap <C-s> <ESC>
noremap <C-l> gt
noremap <C-h> gT

Kyle Challis, Apr 2, 2014 at 21:18

Just in case you didn't already know, ctrl+c already works like escape. – Kyle Challis Apr 2 '14 at 21:18

shalomb, Aug 24, 2015 at 8:02

I prefer never to run vim as root/under sudo - and would just run the command from vim e.g. :!sudo tee %, :!sudo mv % /etc or even launch a login shell :!sudo -i – shalomb Aug 24 '15 at 8:02

Nov 7, 2009 at 7:54

Ctrl-n while in insert mode will auto complete whatever word you're typing based on all the words that are in open buffers. If there is more than one match it will give you a list of possible words that you can cycle through using ctrl-n and ctrl-p.
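
Which sources are scanned is governed by the 'complete' option; a minimal .vimrc sketch (the flags are standard, this particular selection is just an example):

" . = current buffer, w = buffers in other windows, b = other loaded buffers
set complete=.,w,b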

daltonb, Feb 22, 2010 at 4:28

gg=G

Corrects indentation for entire file. I was missing my trusty <C-a><C-i> in Eclipse but just found out vim handles it nicely.

sjas, Jul 15, 2012 at 22:43

I find G=gg easier to type. – sjas Jul 15 '12 at 22:43

sri, May 12, 2013 at 16:12

=% should do it too. – sri May 12 '13 at 16:12

mohi666, Mar 24, 2011 at 22:44

Ability to run Vim in client/server mode.

For example, suppose you're working on a project with a lot of buffers, tabs and other info saved on a session file called session.vim.

You can open your session and create a server by issuing the following command:

vim --servername SAMPLESERVER -S session.vim

Note that you can open regular text files if you want to create a server; it doesn't necessarily have to be a session.

Now, suppose you're in another terminal and need to open another file. If you open it regularly by issuing:

vim new_file.txt

Your file would be opened in a separate Vim instance, which makes it hard to interact with the files in your session. In order to open new_file.txt in a new tab on your server, use this command:

vim --servername SAMPLESERVER --remote-tab-silent new_file.txt

If there's no server running, this file will be opened just like a regular file.
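
You can check which servers are currently running from within any Vim instance with the built-in serverlist() function (a minimal sketch; it needs the same +clientserver feature):

" print the names of all running Vim servers
:echo serverlist()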

Since providing those flags every time is very tedious, you can create separate aliases for the client and the server.

I placed the following in my .bashrc file:

alias vims='vim --servername SAMPLESERVER'
alias vimc='vim --servername SAMPLESERVER --remote-tab-silent'

You can find more information about this at: http://vimdoc.sourceforge.net/htmldoc/remote.html

jm666, May 11, 2011 at 19:54

Variation of sudo write: put this into your .vimrc:

cmap w!! w !sudo tee % >/dev/null

After reloading vim you can do a "sudo save" as:

:w!!

Sep 17, 2010 at 17:06

HOWTO: Auto-complete Ctags when using Vim in Bash. For anyone else who uses Vim and Ctags, I've written a small auto-completer function for Bash. Add the following into your ~/.bash_completion file (create it if it does not exist):

Thanks go to stylishpants for his many fixes and improvements.

_vim_ctags() {
    local cur prev

    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"

    case "${prev}" in
        -t)
            # Avoid the complaint message when no tags file exists
            if [ ! -r ./tags ]
            then
                return
            fi

            # Escape slashes to avoid confusing awk
            cur=${cur////\\/}

            COMPREPLY=( $(compgen -W "`awk -vORS=" "  "/^${cur}/ { print \\$1 }" tags`" ) )
            ;;
        *)
            _filedir_xspec
            ;;
    esac
}

# Files matching this pattern are excluded
excludelist='*.@(o|O|so|SO|so.!(conf)|SO.!(CONF)|a|A|rpm|RPM|deb|DEB|gif|GIF|jp?(e)g|JP?(E)G|mp3|MP3|mp?(e)g|MP?(E)G|avi|AVI|asf|ASF|ogg|OGG|class|CLASS)'

complete -F _vim_ctags -f -X "${excludelist}" vi vim gvim rvim view rview rgvim rgview gview

Once you restart your Bash session (or create a new one) you can type:


~$ vim -t MyC<tab key>

and it will auto-complete the tag the same way it does for files and directories:


MyClass MyClassFactory
~$ vim -t MyC

I find it really useful when I'm jumping into a quick bug fix.

Sasha, Apr 8, 2009 at 3:05

Amazing....I really needed it – Sasha Apr 8 '09 at 3:05

TREE, Apr 27, 2009 at 13:19

can you summarize? If that external page goes away, this answer is useless. :( – TREE Apr 27 '09 at 13:19

Hamish Downer, May 5, 2009 at 16:38

Summary - it allows ctags autocomplete from the bash prompt for opening files with vim. – Hamish Downer May 5 '09 at 16:38

Dec 22, 2016 at 7:44

I often want to highlight a particular word/function name, but don't want to search to the next instance of it yet:
map m* *#

René Nyffenegger, Dec 3, 2009 at 7:36

I don't understand this one. – René Nyffenegger Dec 3 '09 at 7:36

Scotty Allen, Dec 3, 2009 at 19:55

Try it:) It basically highlights a given word, without moving the cursor to the next occurrence (like * would). – Scotty Allen Dec 3 '09 at 19:55

jamessan, Dec 27, 2009 at 19:10

You can do the same with "nnoremap m* :let @/ = '\<' . expand('<cword>') . '\>'<cr>" – jamessan Dec 27 '09 at 19:10

Ben, Apr 9, 2009 at 12:37

% is also good when you want to diff files across two different copies of a project without wearing out the pinkies (from root of project1):
:vert diffs /project2/root/%

Naga Kiran, Jul 8, 2009 at 19:07

:setlocal autoread

Auto-reloads the current buffer; especially useful while viewing log files, it almost gives you the functionality of the Unix "tail" program from within vim.

Checking for compile errors from within vim: set the makeprg variable depending on the language. Let's say for Perl:

:setlocal makeprg=perl\ -c\ %

For PHP

set makeprg=php\ -l\ %
set errorformat=%m\ in\ %f\ on\ line\ %l

Issuing ":make" runs the associated makeprg and displays the compilation errors/warnings in quickfix window and can easily navigate to the corresponding line numbers.

Sep 14 at 20:16

Want an IDE?

:make will run the makefile in the current directory and parse the compiler output; you can then use :cn and :cp to step through the compiler errors, opening each file and seeking to the line number in question.

:syntax on turns on vim's syntax highlighting.

Luper Rouch, Apr 9, 2009 at 12:53

Input a character from its hexadecimal value (insert mode):
<C-Q>x[type the hexadecimal byte]
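
A concrete example, using the <C-V> form that the comments below note is the default binding (see :help i_CTRL-V_digit):

<C-V>x41 ..... inserts 'A' (hex 41)
<C-V>u20ac ... inserts '€' (Unicode code point U+20AC)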

MikeyB, Sep 22, 2009 at 21:57

<C-V> is the more generic command that works in both the text-mode and gui – MikeyB Sep 22 '09 at 21:57

jamessan, Dec 27, 2009 at 19:06

It's only <C-q> if you're using the awful mswin.vim (or you mapped it yourself). – jamessan Dec 27 '09 at 19:06

Brad Cox, May 8, 2009 at 1:54

I was sure someone would have posted this already, but here goes.

Take any build system you please; make, mvn, ant, whatever. In the root of the project directory, create a file of the commands you use all the time, like this:

mvn install
mvn clean install
... and so forth

To do a build, put the cursor on the line and type !!sh. I.e. filter that line; write it to a shell and replace with the results.

The build log replaces the line, ready to scroll, search, whatever.

When you're done viewing the log, type u to undo and you're back to your file of commands.

ojblass, Jul 5, 2009 at 18:27

This doesn't seem to fly on my system. Can you show an example only using the ls command? – ojblass Jul 5 '09 at 18:27

Brad Cox, Jul 29, 2009 at 19:30

!!ls replaces current line with ls output (adding more lines as needed). – Brad Cox Jul 29 '09 at 19:30

jamessan, Dec 28, 2009 at 8:29

Why wouldn't you just set makeprg to the proper tool you use for your build (if it isn't set already) and then use :make ? :copen will show you the output of the build as well as allowing you to jump to any warnings/errors. – jamessan Dec 28 '09 at 8:29

Dec 28, 2009 at 8:38

==========================================================
In normal mode
==========================================================
gf ................ open file under cursor in same window --> see :h path
Ctrl-w f .......... open file under cursor in new window
Ctrl-w q .......... close current window
Ctrl-w 6 .......... open alternate file --> see :h #
gi ................ init insert mode in last insertion position
'0 ................ place the cursor where it was when the file was last edited

Braden Best, Feb 4, 2016 at 16:33

I believe it's <C-w> c to close a window, actually. :h ctrl-w – Braden Best Feb 4 '16 at 16:33

Sep 17, 2010 at 16:53

Due to the latency and lack of colors (I love color schemes :) I don't like programming on remote machines in PuTTY. So I developed this trick to work around the problem. I use it on Windows.

You will need rsync on both machines (on Windows it comes with Cygwin) and the PuTTY suite (Pageant and Plink).

Setting up remote machine

Configure rsync to make your working directory accessible. I use an SSH tunnel and only allow connections from the tunnel:

address = 127.0.0.1
hosts allow = 127.0.0.1
port = 40000
use chroot = false
[bledge_ce]
    path = /home/xplasil/divine/bledge_ce
    read only = false

Then start rsyncd: rsync --daemon --config=rsyncd.conf

Setting up local machine

Install rsync from Cygwin. Start Pageant and load your private key for the remote machine. If you're using SSH tunnelling, start PuTTY to create the tunnel. Create a batch file push.bat in your working directory which will upload changed files to the remote machine using rsync:

rsync --blocking-io *.cc *.h SConstruct rsync://localhost:40001/bledge_ce

SConstruct is a build file for scons. Modify the list of files to suit your needs. Replace localhost with the name of the remote machine if you don't use SSH tunnelling.

Configuring Vim. That is now easy: we will use the quickfix feature (:make and the error list), but the compilation will run on the remote machine. So we need to set makeprg:

set makeprg=push\ &&\ plink\ -batch\ xplasil@anna.fi.muni.cz\ \"cd\ /home/xplasil/divine/bledge_ce\ &&\ scons\ -j\ 2\"

This will first start the push.bat task to upload the files and then execute the commands on the remote machine using SSH (Plink from the PuTTY suite). The command first changes directory to the working dir and then starts the build (I use scons).

The results of the build will show up conveniently in your local gVim error list.

matpie, Sep 17, 2010 at 23:02

A much simpler solution would be to use bcvi: sshmenu.sourceforge.net/articles/bcvi – matpie Sep 17 '10 at 23:02

Uri Goren, Jul 20 at 20:21

cmder is much easier and simpler, it also comes with its own ssh client – Uri Goren Jul 20 at 20:21

Jan 16, 2014 at 14:10

I use Vim for everything. When I'm editing an e-mail message, I use:

gqap (or gwap )

extensively to easily and correctly reformat on a paragraph-by-paragraph basis, even with quote lead-in characters. In order to achieve this functionality, I also add:

-c 'set fo=tcrq' -c 'set tw=76'

to the command to invoke the editor externally. One noteworthy addition would be to add ' a ' to the fo (formatoptions) parameter. This will automatically reformat the paragraph as you type and navigate the content, but may interfere or cause problems with errant or odd formatting contained in the message.

Andrew Ferrier, Jul 14, 2014 at 22:22

autocmd FileType mail set tw=76 fo=tcrq in your ~/.vimrc will also work, if you can't edit the external editor command. – Andrew Ferrier Jul 14 '14 at 22:22

May 6, 2009 at 12:22

Put this in your .vimrc to have a command to pretty-print xml:
function FormatXml()
    %s:\(\S\)\(<[^/]\)\|\(>\)\(</\):\1\3\r\2\4:g
    set filetype=xml
    normal gg=G
endfunction

command FormatXml :call FormatXml()
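
With the command defined above, pretty-printing the current buffer is then simply:

:FormatXml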

David Winslow, Nov 24, 2009 at 20:43

On linuxes (where xmllint is pretty commonly installed) I usually just do :%! xmllint - for this. – David Winslow Nov 24 '09 at 20:43

searlea, Aug 6, 2009 at 9:33

:sp %:h - directory listing / file-chooser using the current file's directory

(belongs as a comment under rampion's cd tip, but I don't have commenting-rights yet)

bpw1621, Feb 19, 2011 at 15:13

":e ." does the same thing for your current working directory which will be the same as your current file's directory if you set autochdir – bpw1621 Feb 19 '11 at 15:13

Sep 22, 2009 at 22:23

Just before copying and pasting to stackoverflow:
:retab 1
:% s/^I/ /g
:% s/^/    /

Now copy and paste code.

As requested in the comments:

retab 1: this sets the tab size to one, but it also goes through the code and adds extra tabs and spaces so that the formatting does not move any of the actual text (i.e. the text looks the same after the retab).

% s/^I/ /g: note the ^I is the result of hitting Tab. This searches for all tabs and replaces them with a single space. Since we just did a retab this should not cause the formatting to change, but since putting tabs into a website is hit and miss it is good to remove them.

% s/^/    /: replace the beginning of the line with four spaces. Since you can't actually replace the beginning of the line with anything, it inserts four spaces at the beginning of the line (this is needed by SO formatting to make the code stand out).

vehomzzz, Sep 22, 2009 at 20:52

explain it please... – vehomzzz Sep 22 '09 at 20:52

cmcginty, Sep 22, 2009 at 22:31

so I guess this won't work if you use 'set expandtab' to force all tabs to spaces. – cmcginty Sep 22 '09 at 22:31

Martin York, Sep 23, 2009 at 0:07

@Casey: The first two lines will not apply. The last line will make sure you can just cut and paste into SO. – Martin York Sep 23 '09 at 0:07

Braden Best, Feb 4, 2016 at 16:40

Note that you can achieve the same thing with cat <file> | awk '{print "    " $line}' . So try :w !awk '{print "    " $line}' | xclip -i . That's supposed to be four spaces between the "" . – Braden Best Feb 4 '16 at 16:40

Anders Holmberg, Dec 28, 2009 at 9:21

When working on a project where the build process is slow I always build in the background and pipe the output to a file called errors.err (something like make debug 2>&1 | tee errors.err ). This makes it possible for me to continue editing or reviewing the source code during the build process. When it is ready (using pynotify on GTK to inform me that it is complete) I can look at the result in vim using quickfix . Start by issuing :cf[ile] which reads the error file and jumps to the first error. I personally like to use cwindow to get the build result in a separate window.
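
A minimal sketch of that last quickfix step (both are built-in commands; errors.err matches the pipeline above):

:cfile errors.err .... read the saved build log and jump to the first error
:cwindow ............. show the build result in a separate quickfix window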

quabug, Jul 12, 2011 at 12:21

set colorcolumn=+1 or set cc=+1 for vim 7.3

Luc M, Oct 31, 2012 at 15:12

A short explanation would be appreciated... I tried it and it could be very useful! You can even do something like set colorcolumn=+1,+10,+20 :-) – Luc M Oct 31 '12 at 15:12

DBedrenko, Oct 31, 2014 at 16:17

@LucM If you tried it why didn't you provide an explanation? – DBedrenko Oct 31 '14 at 16:17

mjturner, Aug 19, 2015 at 11:16

colorcolumn allows you to specify columns that are highlighted (it's ideal for making sure your lines aren't too long). In the original answer, set cc=+1 highlights the column after textwidth . See the documentation for more information. – mjturner Aug 19 '15 at 11:16
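
Putting the thread's pieces together, a minimal .vimrc sketch (the width of 79 is an arbitrary example):

set textwidth=79     " wrap lines at 79 characters
set colorcolumn=+1   " highlight the column just past 'textwidth'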

mpe, May 11, 2009 at 4:39

For making vim a little more like an IDE editor: :set number to display line numbers (the comments below also mention set invnumber to toggle them).

Rook, May 11, 2009 at 4:42

How does that make Vim more like an IDE ?? – Rook May 11 '09 at 4:42

mpe, May 12, 2009 at 12:29

I did say "a little" :) But it is something many IDEs do, and some people like it, eg: eclipse.org/screenshots/images/JavaPerspective-WinXP.pngmpe May 12 '09 at 12:29

Rook, May 12, 2009 at 21:25

Yes, but that's like saying yank/paste functions make an editor "a little" more like an IDE. Those are editor functions. Pretty much everything that goes with the editor that concerns editing text and that particular area is an editor function. IDE functions would be, for example, project/files management, connectivity with compiler&linker, error reporting, building automation tools, debugger ... i.e. the stuff that doesn't actually have anything to do with editing text. Vim has some functions & plugins so it can gravitate a little more towards being an IDE, but these are not the ones in question. – Rook May 12 '09 at 21:25

Rook, May 12, 2009 at 21:26

After all, an IDE = editor + compiler + debugger + building tools + ... – Rook May 12 '09 at 21:26

Rook, May 12, 2009 at 21:31

Also, just FYI, vim has an option to set invnumber. That way you don't have to "set nu" and "set nonu", i.e. remember two functions - you can just toggle. – Rook May 12 '09 at 21:31

PuzzleCracker, Sep 13, 2009 at 23:20

I love the :ls command.

aehlke, Oct 28, 2009 at 3:16

Well what does it do? – aehlke Oct 28 '09 at 3:16

user59634, Dec 7, 2009 at 10:51

gives the current file name opened ? – user59634 Dec 7 '09 at 10:51

Nona Urbiz, Dec 20, 2010 at 8:25

:ls lists all the currently opened buffers. :be opens a file in a new buffer, :bn goes to the next buffer, :bp to the previous, :b filename opens buffer filename (it auto-completes too). buffers are distinct from tabs, which i'm told are more analogous to views. – Nona Urbiz Dec 20 '10 at 8:25
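
A minimal cheat sheet of those buffer commands, collected from this thread:

:ls ......... list the currently opened buffers
:b name ..... open buffer name (partial names auto-complete)
:bn ......... go to the next buffer
:bp ......... go to the previous buffer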

Sep 17, 2010 at 16:45

A few useful ones:
:set nu # displays line numbers
:44     # go to line 44
'.      # go to last modification line

My favourite: Ctrl + n WORD COMPLETION!

Jun 18, 2013 at 11:10

In insert mode, ctrl + x, ctrl + p will complete (with menu of possible completions if that's how you like it) the current long identifier that you are typing.
if (SomeCall(LONG_ID_ <-- type c-x c-p here
            [LONG_ID_I_CANT_POSSIBLY_REMEMBER]
             LONG_ID_BUT_I_NEW_IT_WASNT_THIS_ONE
             LONG_ID_GOSH_FORGOT_THIS
             LONG_ID_ETC
             ∶

Justin L., Jun 13, 2013 at 16:21

i type ctrl+p way too much by accident while trying to hit ctrl+[ >< – Justin L. Jun 13 '13 at 16:21

Fritz G. Mehner, Apr 22, 2009 at 16:41

Use the right mouse key to toggle insert mode in gVim with the following settings in ~/.gvimrc :
"
"------------------------------------------------------------------
" toggle insert mode <--> 'normal mode with the <RightMouse>-key
"------------------------------------------------------------------
nnoremap  <RightMouse> <Insert>
inoremap  <RightMouse> <ESC>
"

Andreas Grech, Jun 20, 2010 at 17:22

This is stupid. Defeats the productivity gains from not using the mouse. – Andreas Grech Jun 20 '10 at 17:22

Brady Trainor, Jul 5, 2014 at 21:07

Maybe fgm has head gestures mapped to mouse clicks. – Brady Trainor Jul 5 '14 at 21:07

AIB, Apr 27, 2009 at 13:06

Replace all:
  :%s/oldtext/newtext/igc

Answer a at the confirmation prompt to replace all remaining matches :)

Nathan Fellman, Jan 12, 2011 at 20:58

or better yet, instead of typing a, just remove the c . c means confirm replacementNathan Fellman Jan 12 '11 at 20:58
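
Summarizing the standard :substitute flags involved:

:%s/old/new/g ..... replace every match without asking
:%s/old/new/gc .... confirm each replacement (answer a to replace all remaining)
:%s/old/new/gi .... ignore case when matching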

Sep 13, 2009 at 18:39

Neither of the following is really diehard, but I find it extremely useful.

Trivial bindings, but ones I just can't live without. They enable hjkl-style movement in insert mode (using the ctrl key). In normal mode: ctrl-k/j scrolls half a screen up/down and ctrl-l/h goes to the next/previous buffer. The µ and ù mappings are especially for an AZERTY keyboard and go to the next/previous make error.

imap <c-j> <Down>
imap <c-k> <Up>
imap <c-h> <Left>
imap <c-l> <Right>
nmap <c-j> <c-d>
nmap <c-k> <c-u>
nmap <c-h> <c-left>
nmap <c-l> <c-right>

nmap ù :cp<RETURN>
nmap µ :cn<RETURN>

A small function I wrote to highlight functions, globals, macros, structs and typedefs. (Might be slow on very large files.) Each type gets different highlighting (see ":help group-name" to get an idea of your current color theme's settings). Usage: save the file with ww (default "\ww"). You need ctags for this.

nmap <Leader>ww :call SaveCtagsHighlight()<CR>

"Based on: http://stackoverflow.com/questions/736701/class-function-names-highlighting-in-vim
function SaveCtagsHighlight()
    write

    let extension = expand("%:e")
    if extension!="c" && extension!="cpp" && extension!="h" && extension!="hpp"
        return
    endif

    silent !ctags --fields=+KS *
    redraw!

    let list = taglist('.*')
    for item in list
        let kind = item.kind

        if     kind == 'member'
            let kw = 'Identifier'
        elseif kind == 'function'
            let kw = 'Function'
        elseif kind == 'macro'
            let kw = 'Macro'
        elseif kind == 'struct'
            let kw = 'Structure'
        elseif kind == 'typedef'
            let kw = 'Typedef'
        else
            continue
        endif

        let name = item.name
        if name != 'operator=' && name != 'operator ='
            exec 'syntax keyword '.kw.' '.name
        endif
    endfor
    echo expand("%")." written, tags updated"
endfunction

I have the habit of writing lots of code and functions and I don't like to write prototypes for them. So I made some function to generate a list of prototypes within a C-style sourcefile. It comes in two flavors: one that removes the formal parameter's name and one that preserves it. I just refresh the entire list every time I need to update the prototypes. It avoids having out of sync prototypes and function definitions. Also needs ctags.

"Usage: in normal mode, where you want the prototypes to be pasted:
":call GenerateProptotypes()
function GeneratePrototypes()
    execute "silent !ctags --fields=+KS ".expand("%")
    redraw!
    let list = taglist('.*')
    let line = line(".")
    for item in list
        if item.kind == "function"  &&  item.name != "main"
            let name = item.name
            let retType = item.cmd
            let retType = substitute( retType, '^/\^\s*','','' )
            let retType = substitute( retType, '\s*'.name.'.*', '', '' ) 

            if has_key( item, 'signature' )
                let sig = item.signature
                let sig = substitute( sig, '\s*\w\+\s*,',        ',',   'g')
                let sig = substitute( sig, '\s*\w\+\(\s)\)', '\1', '' )
            else
                let sig = '()'
            endif
            let proto = retType . "\t" . name . sig . ';'
            call append( line, proto )
            let line = line + 1
        endif
    endfor
endfunction


function GeneratePrototypesFullSignature()
    "execute "silent !ctags --fields=+KS ".expand("%")
    let dir = expand("%:p:h")
    execute "silent !ctags --fields=+KSi --extra=+q".dir."/* "
    redraw!
    let list = taglist('.*')
    let line = line(".")
    for item in list
        if item.kind == "function"  &&  item.name != "main"
            let name = item.name
            let retType = item.cmd
            let retType = substitute( retType, '^/\^\s*','','' )
            let retType = substitute( retType, '\s*'.name.'.*', '', '' ) 

            if has_key( item, 'signature' )
                let sig = item.signature
            else
                let sig = '(void)'
            endif
            let proto = retType . "\t" . name . sig . ';'
            call append( line, proto )
            let line = line + 1
        endif
    endfor
endfunction

Yada, Nov 24, 2009 at 20:21

I collected these over the years.
" Pasting in normal mode should append to the right of cursor
nmap <C-V>      a<C-V><ESC>
" Saving
imap <C-S>      <C-o>:up<CR>
nmap <C-S>      :up<CR>
" Insert mode control delete
imap <C-Backspace> <C-W>
imap <C-Delete> <C-O>dw
nmap    <Leader>o       o<ESC>k
nmap    <Leader>O       O<ESC>j
" tired of my typo
nmap :W     :w

jonyamo, May 10, 2010 at 15:01

Create a function to execute the current buffer using its shebang (assuming one is set) and call it with ctrl-x.
map <C-X> :call CallInterpreter()<CR>

au BufEnter *
\ if match (getline(1),  '^\#!') == 0 |
\   execute("let b:interpreter = getline(1)[2:]") |
\ endif

fun! CallInterpreter()
    if exists("b:interpreter")
        exec("! ".b:interpreter." %")
    endif
endfun

Marcus Borkenhagen, Jan 12, 2011 at 15:22

map macros

I rather often find it useful to define some key mapping on the fly, just like one would define a macro. The twist here is that the mapping is recursive and is executed until it fails.

Example:

enum ProcStats
{
        ps_pid,
        ps_comm,
        ps_state,
        ps_ppid,
        ps_pgrp,
:map X /ps_<CR>3xixy<Esc>X

Gives:

enum ProcStats
{
        xypid,
        xycomm,
        xystate,
        xyppid,
        xypgrp,

Just a silly example :).

I am completely aware of all the downsides - it just so happens that I found it rather useful on some occasions. Also it can be interesting to watch it at work ;).

00dani, Aug 2, 2013 at 11:25

Macros are also allowed to be recursive and work in pretty much the same fashion when they are, so it's not particularly necessary to use a mapping for this. – 00dani Aug 2 '13 at 11:25


Reuse

Motions to mix with other commands, more here:

tx
fx
Fx

Use your favorite tools in Vim:

:r !python anything you want or awk or Y something

Repeat the last t/f/F/T motion - powerful when combined with the tips above:

;

[Oct 21, 2018] What are your suggestions for an ideal Vim configuration for Perl development?

Notable quotes:
"... The .vimrc settings should be heavily commented ..."
"... Look also at perl-support.vim (a Perl IDE for Vim/gVim). Comes with suggestions for customizing Vim (.vimrc), gVim (.gvimrc), ctags, perltidy, and Devel:SmallProf beside many other things. ..."
"... Perl Best Practices has an appendix on Editor Configurations . vim is the first editor listed. ..."
"... Andy Lester and others maintain the official Perl, Perl 6 and Pod support files for Vim on Github: https://github.com/vim-perl/vim-perl ..."
Aug 18, 2016 | stackoverflow.com
There are a lot of threads pertaining to how to configure Vim/GVim for Perl development on PerlMonks.org .

My purpose in posting this question is to try to create, as much as possible, an ideal configuration for Perl development using Vim/GVim. Please post your suggestions for .vimrc settings as well as useful plugins.

I will try to merge the recommendations into a set of .vimrc settings and to a list of recommended plugins, ftplugins and syntax files.

.vimrc settings
"Create a command :Tidy to invoke perltidy"
"By default it operates on the whole file, but you can give it a"
"range or visual range as well if you know what you're doing."
command -range=% -nargs=* Tidy <line1>,<line2>!
    \perltidy -your -preferred -default -options <args>

vmap <tab> >gv    "make tab in v mode indent code"
vmap <s-tab> <gv

nmap <tab> I<tab><esc> "make tab in normal mode indent code"
nmap <s-tab> ^i<bs><esc>

let perl_include_pod   = 1    "include pod.vim syntax file with perl.vim"
let perl_extended_vars = 1    "highlight complex expressions such as @{[$x, $y]}"
let perl_sync_dist     = 250  "use more context for highlighting"

set nocompatible "Use Vim defaults"
set backspace=2  "Allow backspacing over everything in insert mode"

set autoindent   "Always set auto-indenting on"
set expandtab    "Insert spaces instead of tabs in insert mode. Use spaces for indents"
set tabstop=4    "Number of spaces that a <Tab> in the file counts for"
set shiftwidth=4 "Number of spaces to use for each step of (auto)indent"

set showmatch    "When a bracket is inserted, briefly jump to the matching one"
syntax, plugins, ftplugins, CPAN modules, debugging tools

I just found out about VimDebug . I have not yet been able to install it on Windows, but looks promising from the description.

innaM, Oct 15, 2009 at 19:06

The .vimrc settings should be heavily commented. E.g., what does perl_include_pod do? – innaM Oct 15 '09 at 19:06

Sinan Ünür, Oct 15, 2009 at 20:02

@Manni: You are welcome. I have been using the same .vimrc for many years and a recent bunch of vim related questions got me curious. I was too lazy to wade through everything that was posted on PerlMonks (and see what was current etc.), so I figured we could put together something here. – Sinan Ünür Oct 15 '09 at 20:02

innaM, Oct 16, 2009 at 8:22

I think that that's a great idea. Sorry that my own contribution is that lame. – innaM Oct 16 '09 at 8:22

Telemachus, Jul 8, 2010 at 0:40

Rather than closepairs, I would recommend delimitMate or one of the various autoclose plugins. (There are about three named autoclose, I think.) The closepairs plugin can't handle a single apostrophe inside a string (i.e. print "This isn't so hard, is it?" ), but delimitMate and others can. github.com/Raimondi/delimitMate – Telemachus Jul 8 '10 at 0:40

Dec 21, 2009 at 20:57

From chromatic's blog (slightly adapted to be able to use the same mapping from all modes).
vmap ,pt :!perltidy<CR>
nmap ,pt :%!perltidy<CR>

Hit ,pt in normal mode to clean up the whole file, or in visual mode to clean up the selection. You could also add:

imap ,pt <ESC>:%!perltidy<CR>

But using commands from input mode is not recommended.

innaM, Oct 21, 2009 at 9:21

I seem to be missing something here: How can I type ,ptv without vim running perltidy on the entire file? – innaM Oct 21 '09 at 9:21

innaM, Oct 21, 2009 at 9:23

Ovid's comment (#3) seems to offer a much better solution. – innaM Oct 21 '09 at 9:23

innaM, Oct 21, 2009 at 13:22

Three hours later: turns out that the 'p' in that mapping is a really bad idea. It will bite you when vim's got something to paste. – innaM Oct 21 '09 at 13:22

Ether, Oct 21, 2009 at 15:23

@Manni: select a region first: with the mouse if using gvim, or with visual mode ( v and then use motion commands). – Ether Oct 21 '09 at 15:23

Ether, Oct 21, 2009 at 19:44

@Manni: I just gave it a try: if you type ,pt vim waits for you to type something else (e.g. <cr>) as a signal that the command is ended. Hitting ,ptv will immediately format the region. So I would expect that vim recognizes that there is overlap between the mappings, and waits for disambiguation before proceeding. – Ether Oct 21 '09 at 19:44

hobbs, Oct 16, 2009 at 0:35

" Create a command :Tidy to invoke perltidy.
" By default it operates on the whole file, but you can give it a
" range or visual range as well if you know what you're doing.
command -range=% -nargs=* Tidy <line1>,<line2>!
    \perltidy -your -preferred -default -options <args>

Fritz G. Mehner, Oct 17, 2009 at 7:44

Look also at perl-support.vim (a Perl IDE for Vim/gVim). Comes with suggestions for customizing Vim (.vimrc), gVim (.gvimrc), ctags, perltidy, and Devel::SmallProf beside many other things.

innaM, Oct 19, 2009 at 12:32

I hate that one. The comments feature alone deserves a thorough 'rm -rf', IMHO. – innaM Oct 19 '09 at 12:32

sundar, Mar 11, 2010 at 20:54

I hate the fact that \$ is changed automatically to a "my $" declaration (same with \@ and \%). Does the author never use references or what?! – sundar Mar 11 '10 at 20:54

chiggsy, Sep 14, 2010 at 13:48

I take pieces of that one. If it were a man, you'd say about him, "He was only good for transplants..." – chiggsy Sep 14 '10 at 13:48

Permanuno, Oct 20, 2009 at 16:55

Perl Best Practices has an appendix on Editor Configurations . vim is the first editor listed.

May 10, 2014 at 21:08

Andy Lester and others maintain the official Perl, Perl 6 and Pod support files for Vim on Github: https://github.com/vim-perl/vim-perl

Sinan Ünür, Jan 26, 2010 at 21:20

Note that that link is already listed in the body of the question (look under syntax ). – Sinan Ünür Jan 26 '10 at 21:20

Dec 7, 2010 at 19:42

For tidying, I use the following; either \t to tidy the whole file, or I select a few lines in shift+V mode and then do \t
nnoremap <silent> \t :%!perltidy -q<Enter>
vnoremap <silent> \t :!perltidy -q<Enter>

Sometimes it's also useful to deparse code. As with the lines above, this works either for the whole file or for a selection.

nnoremap <silent> \D :.!perl -MO=Deparse 2>/dev/null<CR>
vnoremap <silent> \D :!perl -MO=Deparse 2>/dev/null<CR>

Oct 16, 2009 at 14:25

.vimrc:
" Allow :make to run 'perl -c' on the current buffer, jumping to 
" errors as appropriate
" My copy of vimparse: http://irc.peeron.com/~zigdon/misc/vimparse.pl

set makeprg=$HOME/bin/vimparse.pl\ -c\ %\ $*

" point at wherever you keep the output of pltags.pl, allowing use of ^-]
" to jump to function definitions.

set tags+=/path/to/tags

innaM, Oct 15, 2009 at 19:34

What is pltags.pl? Is it better than ctags? – innaM Oct 15 '09 at 19:34

Sinan Ünür, Oct 15, 2009 at 19:46

I think search.cpan.org/perldoc/Perl::Tags is based on it. – Sinan Ünür Oct 15 '09 at 19:46

Sinan Ünür, Oct 15, 2009 at 20:00

Could you please explain if there are any advantages to using pltags.pl rather than taglist.vim w/ ctags ? – Sinan Ünür Oct 15 '09 at 20:00

innaM, Oct 16, 2009 at 14:24

And vimparse.pl really works for you? Is that really the correct URL? – innaM Oct 16 '09 at 14:24

zigdon, Oct 16, 2009 at 18:51

@sinan it enables quickfix - all it does is reformat the output of perl -c so that vim parses it as compiler errors. Then the usual quickfix commands work. – zigdon Oct 16 '09 at 18:51

innaM, Oct 19, 2009 at 8:57

Here's an interesting module I found on the weekend: App::EditorTools::Vim . Its most interesting feature seems to be its ability to rename lexical variables. Unfortunately, my tests revealed that it doesn't seem to be ready yet for any production use, but it sure seems worth keeping an eye on.

Oct 19, 2009 at 13:50

Here are a couple of my .vimrc settings. They may not be Perl specific, but I couldn't work without them:
set nocompatible        " Use Vim defaults (much better!) "
set bs=2                " Allow backspacing over everything in insert mode "
set ai                  " Always set auto-indenting on "
set showmatch           " show matching brackets "

" for quick scripts, just open a new buffer and type '_perls' "
iab _perls #!/usr/bin/perl<CR><BS><CR>use strict;<CR>use warnings;<CR>

J.J., Feb 17, 2010 at 21:35

I have 2.

The first one I know I picked up part of it from someone else, but I can't remember who. Sorry, unknown person. Here's how I made Ctrl+N autocomplete work with Perl. Here are my .vimrc commands.

" to use CTRL+N with modules for autocomplete "
set iskeyword+=:
set complete+=k~/.vim_extras/installed_modules.dat

Then I set up a cron to create the installed_modules.dat file. Mine is for my mandriva system. Adjust accordingly.

locate *.pm | grep "perl5" | sed -e "s/\/usr\/lib\/perl5\///" | sed -e "s/5.8.8\///" | sed -e "s/5.8.7\///" | sed -e "s/vendor_perl\///" | sed -e "s/site_perl\///" | sed -e "s/x86_64-linux\///" | sed -e "s/\//::/g" | sed -e "s/\.pm//" >/home/jeremy/.vim_extras/installed_modules.dat

The second one allows me to use gf in Perl. gf is a shortcut to other files; just place your cursor over the file name and type gf and it will open that file.

" To use gf with perl "
set path+=$PWD/**,
set path+=/usr/lib/perl5/*,
set path+=/CompanyCode/*,   " directory containing work code "
autocmd BufRead *.p? set include=^use
autocmd BufRead *.pl set includeexpr=substitute(v:fname,'\\(.*\\)','\\1.pm','i')

zzapper, Sep 23, 2010 at 9:41

I find the following abbreviations useful
iab perlb  print "Content-type: text/html\n\n <p>zdebug + $_ + $' + $`  line ".__LINE__.__FILE__."\n";exit;
iab perlbb print "Content-type: text/html\n\n<p>zdebug  <C-R>a  line ".__LINE__.__FILE__."\n";exit;
iab perlbd do{print "Content-type: text/html\n\n<p>zdebug  <C-R>a  line ".__LINE__."\n";exit} if $_ =~ /\w\w/i;
iab perld print "Content-type: text/html\n\n dumper";use Data::Dumper;$Data::Dumper::Pad="<br>";print Dumper <C-R>a ;exit;

iab perlf foreach $line ( keys %ENV )<CR> {<CR> }<LEFT><LEFT>
iab perle while (($k,$v) = each %ENV) { print "<br>$k = $v\n"; }
iab perli x = (i<4) ? 4 : i;
iab perlif if ($i==1)<CR>{<CR>}<CR>else<CR>{<CR>}
iab perlh $html=<<___HTML___;<CR>___HTML___<CR>

You can make them perl only with

au bufenter *.pl iab xbug print "<p>zdebug ::: $_ :: $' :: $`  line ".__LINE__."\n";exit;

mfontani, Dec 7, 2010 at 14:47

there's no my anywhere there; I take it you usually write CGIs with no use strict; ? (just curious if this is so) – mfontani Dec 7 '10 at 14:47

Sean McMillan, Jun 30, 2011 at 0:43

Oh wow, and without CGI.pm as well. It's like a 15 year flashback. – Sean McMillan Jun 30 '11 at 0:43

Benno, May 3, 2013 at 12:45

By far the most useful are:
  1. The Perl filetype plugin (ftplugin) - this colour-codes various code elements
  2. Creating a check-syntax-before-saving command "W", preventing you from saving bad code (you can override with the normal 'w').

Installing the plugins is a bit tricky, as different versions of vim (and linux) put the plugins in different places. Mine are in ~/.vim/after/

my .vimrc below.

set vb
set ts=2
set sw=2
set enc=utf-8
set fileencoding=utf-8
set fileencodings=ucs-bom,utf8,prc
set guifont=Monaco:h11
set guifontwide=NSimsun:h12
set pastetoggle=<F3>
command -range=% -nargs=* Tidy <line1>,<line2>!
    \perltidy
filetype plugin on
augroup JumpCursorOnEdit
 au!
 autocmd BufReadPost *
 \ if expand("<afile>:p:h") !=? $TEMP |
 \ if line("'\"") > 1 && line("'\"") <= line("$") |
 \ let JumpCursorOnEdit_foo = line("'\"") |
 \ let b:doopenfold = 1 |
 \ if (foldlevel(JumpCursorOnEdit_foo) > foldlevel(JumpCursorOnEdit_foo - 1)) |
 \ let JumpCursorOnEdit_foo = JumpCursorOnEdit_foo - 1 |
 \ let b:doopenfold = 2 |
 \ endif |
 \ exe JumpCursorOnEdit_foo |
 \ endif |
 \ endif
 " Need to postpone using "zv" until after reading the modelines.
 autocmd BufWinEnter *
 \ if exists("b:doopenfold") |
 \ exe "normal zv" |
 \ if(b:doopenfold > 1) |
 \ exe "+".1 |
 \ endif |
 \ unlet b:doopenfold |
 \ endif
augroup END

[Oct 21, 2018] Duplicate a whole line in Vim

Notable quotes:
"... Do people not run vimtutor anymore? This is probably within the first five minutes of learning how to use Vim. ..."
"... Can also use capital Y to copy the whole line. ..."
"... I think the Y should be "copy from the cursor to the end" ..."
"... In normal mode what this does is copy . copy this line to just below this line . ..."
"... And in visual mode it turns into '<,'> copy '> copy from start of selection to end of selection to the line below end of selection . ..."
"... I like: Shift + v (to select the whole line immediately and let you select other lines if you want), y, p ..."
"... Multiple lines with a number in between: y7yp ..."
"... 7yy is equivalent to y7y and is probably easier to remember how to do. ..."
"... or :.,.+7 copy .+7 ..."
"... When you press : in visual mode, it is transformed to '<,'> so it pre-selects the line range the visual selection spanned over ..."
Oct 21, 2018 | stackoverflow.com

sumek, Sep 16, 2008 at 15:02

How do I duplicate a whole line in Vim in a similar way to Ctrl + D in IntelliJ IDEA/Resharper or Ctrl + Alt + / in Eclipse?

dash-tom-bang, Feb 15, 2016 at 23:31

Do people not run vimtutor anymore? This is probably within the first five minutes of learning how to use Vim.dash-tom-bang Feb 15 '16 at 23:31

Mark Biek, Sep 16, 2008 at 15:06

yy or Y to copy the line
or
dd to delete (cutting) the line

then

p to paste the copied or deleted text after the current line
or
P to paste the copied or deleted text before the current line

camflan, Sep 28, 2008 at 15:55

Can also use capital Y to copy the whole line.camflan Sep 28 '08 at 15:55

nXqd, Jul 19, 2012 at 11:35

@camflan I think the Y should be "copy from the cursor to the end" nXqd Jul 19 '12 at 11:35

Amir Ali Akbari, Oct 9, 2012 at 10:33

and 2yy can be used to copy 2 lines (and for any other n) – Amir Ali Akbari Oct 9 '12 at 10:33

zelk, Mar 9, 2014 at 13:29

To copy two lines, it's even faster just to go yj or yk, especially since you don't double up on one character. Plus, yk is a backwards version that 2yy can't do, and you can put the number of lines to reach backwards in y9j or y2k, etc.. Only difference is that your count has to be n-1 for a total of n lines, but your head can learn that anyway. – zelk Mar 9 '14 at 13:29

DarkWiiPlayer, Apr 13 at 7:26

I know I'm late to the party, but whatever; I have this in my .vimrc:
nnoremap <C-d> :copy .<CR>
vnoremap <C-d> :copy '><CR>

the :copy command just copies the selected line or the range (always whole lines) to below the line number given as its argument.

In normal mode what this does is :copy . - copy this line to just below this line.

And in visual mode it turns into '<,'>copy '> - copy from the start of the selection to the end of the selection, placing it on the line below the end of the selection.
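
A few more standard forms of :copy (alias :t ) for reference:

:t. .... duplicate the current line (same as :copy .)
:t$ .... copy the current line to the end of the file
:t0 .... copy the current line to the top of the file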

yolenoyer, Apr 11 at 16:34

I like to use this mapping:
:nnoremap yp Yp

because it makes it consistent to use alongside the native YP command.

Gabe, Jul 14, 2009 at 4:45

I like: Shift + v (to select the whole line immediately and let you select other lines if you want), y, p

jedi, Feb 11 at 17:20

If you would like to duplicate a line and paste it right away below the current like, just like in Sublime Ctrl + Shift + D, then you can add this to your .vimrc file.

imap <S-C-d> <Esc>Yp

jedi, Apr 14 at 17:48

This works perfectly fine for me: imap <S-C-d> <Esc>Ypi in insert mode and nmap <S-C-d> <Esc>Yp in normal mode – jedi Apr 14 at 17:48

Chris Penner, Apr 20, 2015 at 4:33

Default is yyp, but I've been using this rebinding for a year or so and love it:

" set Y to duplicate lines, works in visual mode as well. nnoremap Y yyp vnoremap Y y`>pgv

yemu, Oct 12, 2013 at 18:23

yyp - paste after

yyP - paste before

Mikk, Dec 4, 2015 at 9:09

@A-B-B However, there is a miniature difference here - what line will your cursor land on. – Mikk Dec 4 '15 at 9:09

theschmitzer, Sep 16, 2008 at 15:16

yyp - remember it with "yippee!"

Multiple lines with a number in between: y7yp

graywh, Jan 4, 2009 at 21:25

7yy is equivalent to y7y and is probably easier to remember how to do.graywh Jan 4 '09 at 21:25

Nefrubyr, Jul 29, 2014 at 14:09

y7yp (or 7yyp) is rarely useful; the cursor remains on the first line copied so that p pastes the copied lines between the first and second line of the source. To duplicate a block of lines use 7yyP – Nefrubyr Jul 29 '14 at 14:09

DarkWiiPlayer, Apr 13 at 7:28

@Nefrubyr or :.,.+7 copy .+7 :P – DarkWiiPlayer Apr 13 at 7:28

Michael, May 12, 2016 at 14:54

For someone who doesn't know vi, some answers from above might mislead him with phrases like "paste ... after/before current line ".
It's actually "paste ... after/before cursor ".

yy or Y to copy the line
or
dd to delete the line

then

p to paste the copied or deleted text after the cursor
or
P to paste the copied or deleted text before the cursor


For more key bindings, you can visit this site: vi Complete Key Binding List

ap-osd, Feb 10, 2016 at 13:23

For those starting to learn vi, here is a good introduction to vi by listing side by side vi commands to typical Windows GUI Editor cursor movement and shortcut keys. It lists all the basic commands including yy (copy line) and p (paste after) or P (paste before).

vi (Vim) for Windows Users

pjz, Sep 16, 2008 at 15:04

yy

will yank the current line without deleting it

dd

will delete the current line

p

will put a line grabbed by either of the previous methods

Benoit, Apr 17, 2012 at 15:17

Normal mode: see other answers.

The Ex way: :t. duplicates the current line ( :t is the copy command, . is the current line as the destination).

If you need to move instead of copying, use :m instead of :t .

This can be really powerful if you combine it with :g or :v :

Reference: :help range, :help :t, :help :g, :help :m and :help :v
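
For example, combining :g with :t (a minimal sketch; the TODO pattern is just an arbitrary example):

:g/TODO/t$ .... copy every line containing TODO to the end of the file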

Benoit, Jun 30, 2012 at 14:17

When you press : in visual mode, it is transformed to '<,'> so it pre-selects the line range the visual selection spanned over. So, in visual mode, :t0 will copy the lines at the beginning. – Benoit Jun 30 '12 at 14:17

Niels Bom, Jul 31, 2012 at 8:21

For the record: when you type a colon (:) you go into command line mode where you can enter Ex commands. vimdoc.sourceforge.net/htmldoc/cmdline.html Ex commands can be really powerful and terse. The yyp solutions are "Normal mode" commands. If you want to copy/move/delete a far-away line or range of lines an Ex command can be a lot faster. – Niels Bom Jul 31 '12 at 8:21

Burak Erdem, Jul 8, 2016 at 16:55

:t. is the exact answer to the question. – Burak Erdem Jul 8 '16 at 16:55

Aaron Thoma, Aug 22, 2013 at 23:31

Y is usually remapped to y$ (yank (copy) until end of line (from current cursor position, not beginning of line)) though. With this line in .vimrc : :nnoremap Y y$ – Aaron Thoma Aug 22 '13 at 23:31

Kwondri, Sep 16, 2008 at 15:37

If you want another way :-)

"ayy this will store the line in buffer a

"ap this will put the contents of buffer a at the cursor.

There are many variations on this.

"a5yy this will store the 5 lines in buffer a

see http://www.vim.org/htmldoc/help.html for more fun

frbl, Jun 21, 2015 at 21:04

Thanks, I used this as a bind: map <Leader>d "ayy"ap – frbl Jun 21 '15 at 21:04

Rook, Jul 14, 2009 at 4:37

Another option would be to go with:
nmap <C-d> mzyyp`z

gives you the advantage of preserving the cursor position.

Sep 18, 2008 at 20:32

You can also try <C-x><C-l> which will repeat the last line from insert mode and bring up a completion window with all of the lines. It works almost like <C-p>

Jorge Gajon, May 11, 2009 at 6:38

This is very useful, but to avoid having to press many keys I have mapped it to just CTRL-L, this is my map: inoremap ^L ^X^L – Jorge Gajon May 11 '09 at 6:38

cori, Sep 16, 2008 at 15:06

1 gotcha: when you use "p" to put the line, it puts it after the line your cursor is on, so if you want to add the line after the line you're yanking, don't move the cursor down a line before putting the new line.

Ghoti, Jan 31, 2016 at 11:05

or use capital P - put before – Ghoti Jan 31 '16 at 11:05

[Oct 21, 2018] Indent multiple lines quickly in vi

Oct 21, 2018 | stackoverflow.com

Allain Lalonde, Oct 25, 2008 at 3:27

Should be trivial, and it might even be in the help, but I can't figure out how to navigate it. How do I indent multiple lines quickly in vi?

Greg Hewgill, Oct 25, 2008 at 3:28

Use the > command. To indent 5 lines, 5>> . To mark a block of lines and indent it, Vjj> to indent 3 lines (vim only). To indent a curly-braces block, put your cursor on one of the curly braces and use >% .

If you're copying blocks of text around and need to align the indent of a block in its new location, use ]p instead of just p . This aligns the pasted block with the surrounding text.

Also, the shiftwidth setting allows you to control how many spaces to indent.

akdom, Oct 25, 2008 at 3:31

<shift>-v also works to select a line in Vim. – akdom Oct 25 '08 at 3:31

R. Martinho Fernandes, Feb 15, 2009 at 17:26

I use >i} (indent inner {} block). Works in vim. Not sure it works in vi. – R. Martinho Fernandes Feb 15 '09 at 17:26

Kamran Bigdely, Feb 28, 2011 at 23:25

My problem(in gVim) is that the command > indents much more than 2 blanks (I want just two blanks but > indent something like 5 blanks) – Kamran Bigdely Feb 28 '11 at 23:25

Greg Hewgill, Mar 1, 2011 at 18:42

@Kamran: See the shiftwidth setting for the way to change that. – Greg Hewgill Mar 1 '11 at 18:42

Greg Hewgill, Feb 28, 2013 at 3:36

@MattStevens: You can find extended discussion about this phenomenon here: meta.stackexchange.com/questions/9731/ – Greg Hewgill Feb 28 '13 at 3:36

Michael Ekoka, Feb 15, 2009 at 5:42

When you select a block and use > to indent, it indents then goes back to normal mode. I have this in my .vimrc file:
vnoremap < <gv

vnoremap > >gv

It lets you indent your selection as many time as you want.

sundar, Sep 1, 2009 at 17:14

To indent the selection multiple times, you can simply press . to repeat the previous command. – sundar Sep 1 '09 at 17:14

masukomi, Dec 6, 2013 at 21:24

The problem with . in this situation is that you have to move your fingers. With @mike's solution (same one i use) you've already got your fingers on the indent key and can just keep whacking it to keep indenting rather than switching and doing something else. Using period takes longer because you have to move your hands and it requires more thought because it's a second, different, operation. – masukomi Dec 6 '13 at 21:24

Johan, Jan 20, 2009 at 21:11

A big selection would be:
gg=G

It is really fast, and everything gets indented ;-)

asgs, Jan 28, 2014 at 21:57

I've an XML file and turned on syntax highlighting. Typing gg=G just puts every line starting from position 1. All the white spaces have been removed. Is there anything else specific to XML? – asgs Jan 28 '14 at 21:57

Johan, Jan 29, 2014 at 6:10

stackoverflow.com/questions/7600860/

Amanuel Nega, May 19, 2015 at 19:51

I think set cindent should be in vimrc or should run :set cindent before running that command – Amanuel Nega May 19 '15 at 19:51

Amanuel Nega, May 19, 2015 at 19:57

I think cindent must be set first. and @asgs i think this only works for cstyle programming languages. – Amanuel Nega May 19 '15 at 19:57

sqqqrly, Sep 28, 2017 at 23:59

I use block-mode visual selection: Ctrl+V to select a rectangular block, Shift+I to insert text at the start of each selected line, then Esc to apply the insertion to every line of the block.

This is not a uni-tasker; the same technique works for any text you want to prepend to a range of lines, not just indentation.

oligofren, Mar 27 at 15:23

This is cumbersome, but is the way to go if you do formatting outside of core VIM (for instance, using vim-prettier instead of the default indenting engine). Using > will otherwise royally screw up the formatting done by Prettier. – oligofren Mar 27 at 15:23

sqqqrly, Jun 15 at 16:30

Funny, I find it anything but cumbersome. This is not a uni-tasker! Learning this method has many uses beyond indenting. – sqqqrly Jun 15 at 16:30

user4052054, Aug 17 at 17:50

I find it better than the accepted answer, as I can see what is happening, the lines I'm selecting and the action I'm doing, and not just type some sort of vim incantation. – user4052054 Aug 17 at 17:50

Sergio, Apr 25 at 7:14

Suppose | represents the position of the cursor in Vim. If the text to be indented is enclosed in a code block like:
int main() {
line1
line2|
line3
}

you can do >i{ which means " indent ( > ) inside ( i ) block ( { ) " and get:

int main() {
    line1
    line2|
    line3
}

Now suppose the lines are contiguous but outside a block, like:

do
line2|
line3
line4
done

To indent lines 2 thru 4 you can visually select the lines and type > . Or even faster you can do >2j to get:

do
    line2|
    line3
    line4
done

Note that >Nj means indent from current line to N lines below. If the number of lines to be indented is large, it could take some seconds for the user to count the proper value of N . To save valuable seconds you can activate the option of relative number with set relativenumber (available since Vim version 7.3).

Sagar Jain, Apr 18, 2014 at 18:41

The master of all commands is
gg=G

This indents the entire file!

And below are some of the simple and elegant commands used to indent lines quickly in Vim or gVim.

To indent the current line
==

To indent the all the lines below the current line

=G

To indent n lines below the current line

n==

For example, to indent 4 lines below the current line

4==

To indent a block of code, go to one of the braces and use command

=%

These are the simplest, yet powerful commands to indent multiple lines.

rojomoke, Jul 30, 2014 at 15:48

This is just in vim, not vi . – rojomoke Jul 30 '14 at 15:48

Sagar Jain, Jul 31, 2014 at 3:40

@rojomoke: No, it works in vi as well – Sagar Jain Jul 31 '14 at 3:40

rojomoke, Jul 31, 2014 at 10:09

Not on my Solaris or AIX boxes it doesn't. The equals key has always been one of my standard ad hoc macro assignments. Are you sure you're not looking at a vim that's been linked to as vi ? – rojomoke Jul 31 '14 at 10:09

rojomoke, Aug 1, 2014 at 8:22

Yeah, on Linux, vi is almost always a link to vim. Try running the :ve command inside vi. – rojomoke Aug 1 '14 at 8:22

datelligence, Dec 28, 2015 at 17:28

I love this kind of answers: clear, precise and succinct. Worked for me in Debian Jessie. Thanks, @SJain – datelligence Dec 28 '15 at 17:28

kapil, Mar 1, 2015 at 13:20

To indent every line in a file, type Esc then G=gg

zundarz, Sep 10, 2015 at 18:41

:help left

In ex mode you can use :left or :le to align lines a specified amount. Specifically, :left will Left align lines in the [range]. It sets the indent in the lines to [indent] (default 0).

:%le3 or :%le 3 or :%left3 or :%left 3 will align the entire file by padding with three spaces.

:5,7 le 3 will align lines 5 through 7 by padding them with 3 spaces.

:le without any value or :le 0 will left align with a padding of 0.

This works in vim and gvim .

Subfuzion, Aug 11, 2017 at 22:02

Awesome, just what I was looking for (a way to insert a specific number of spaces -- 4 spaces for markdown code -- to override my normal indent). In my case I wanted to indent a specific number of lines in visual mode, so shift-v to highlight the lines, then :'<,'>le4 to insert the spaces. Thanks! – Subfuzion Aug 11 '17 at 22:02

Nykakin, Aug 21, 2015 at 13:33

There is one more way that hasn't been mentioned yet - you can use norm i command to insert given text at the beginning of the line. To insert 10 spaces before lines 2-10:
:2,10norm 10i

Remember that there has to be space character at the end of the command - this will be the character we want to have inserted. We can also indent line with any other text, for example to indent every line in file with 5 underscore characters:

:%norm 5i_

Or something even more fancy:

:%norm 2i[ ]

More practical example is commenting Bash/Python/etc code with # character:

:1,20norm i#

To remove indentation, use x instead of i . For example to remove the first 5 characters from every line:

:%norm 5x

Eliethesaiyan, Jun 13, 2016 at 14:18

this starts from the left side of the file...not the current position of the block – Eliethesaiyan Jun 13 '16 at 14:18

John Sonderson, Jan 31, 2015 at 19:17

Suppose you use 2 spaces to indent your code. Type:
:set shiftwidth=2

Then:

You get the idea.

( Empty lines will not get indented, which I think is kind of nice. )


I found the answer in the (g)vim documentation for indenting blocks:

:help visual-block
/indent

If you want to give a count to the command, do this just before typing the operator character: "v{move-around}3>" (move lines 3 indents to the right).

Michael, Dec 19, 2014 at 20:18

To indent all file by 4:
esc 4G=G

underscore_d, Oct 17, 2015 at 19:35

...what? 'indent by 4 spaces'? No, this jumps to line 4 and then indents everything from there to the end of the file, using the currently selected indent mode (if any). – underscore_d Oct 17 '15 at 19:35

Abhishesh Sharma, Jul 15, 2014 at 9:22

:line_num_start,line_num_end>

e.g.

14,21> shifts line number 14 to 21 to one tab

Increase the '>' symbol for more tabs

e.g.

14,21>>> for 3 tabs

HoldOffHunger, Dec 5, 2017 at 15:50

There are clearly a lot of ways to solve this, but this is the easiest to implement, as line numbers show by default in vim and it doesn't require math. – HoldOffHunger Dec 5 '17 at 15:50

rohitkadam19, May 7, 2013 at 7:13

5== will indent 5 lines from current cursor position. so you can type any number before ==, it will indent number of lines. This is in command mode.

gg=G will indent whole file from top to bottom.

Kamlesh Karwande, Feb 6, 2014 at 4:04

I don't know why it's so difficult to find a simple answer like this one...

I myself had to struggle a lot to learn this; it's very simple:

Edit your .vimrc file under your home directory, and add this line:

set cindent

Then, in the file you want to indent properly, type the following in normal/command mode:

10==   (this will indent 10 lines from the current cursor location )
gg=G   (complete file will be properly indented)

Michael Durrant, Nov 4, 2013 at 22:57

Go to the start of the text

Eric Leschinski, Dec 23, 2013 at 3:30

How to indent highlighted code in vi immediately by a # of spaces:

Option 1: Indent a block of code in vi to three spaces with Visual Block mode:

  1. Select the block of code you want to indent. Do this using Ctrl+V in normal mode and arrowing down to select text. While it is selected, enter : to give a command to the block of selected text.
  2. The following will appear in the command line: :'<,'>
  3. To set indent to 3 spaces, type le 3 and press enter. This is what appears: :'<,'>le 3
  4. The selected text is immediately indented to 3 spaces.

Option 2: Indent a block of code in vi to three spaces with Visual Line mode:

  1. Open your file in VI.
  2. Put your cursor over some code
  3. In normal mode, press the following keys:
    Vjjjj:le 3
    

    Interpretation of what you did:

    V means start selecting text.

    jjjj arrows down 4 lines, highlighting 4 lines.

    : tells vi you will enter an instruction for the highlighted text.

    le 3 means set the indent of the highlighted text to 3 spaces.

    The indentation of the selected code is immediately set to three spaces.

Option 3: use Visual Block mode and special insert mode to increase indent:

  1. Open your file in VI.
  2. Put your cursor over some code
  3. In normal mode, press the following keys:

    Ctrl+V

    jjjj

    Shift+i

    (press spacebar 5 times)

    Esc

    All the highlighted text is indented an additional 5 spaces.

ire_and_curses, Mar 6, 2011 at 17:29

This answer summarises the other answers and comments of this question, and adds extra information based on the Vim documentation and the Vim wiki . For conciseness, this answer doesn't distinguish between Vi and Vim-specific commands.

In the commands below, "re-indent" means "indent lines according to your indentation settings ." shiftwidth is the primary variable that controls indentation.

General Commands

>>   Indent line by shiftwidth spaces
<<   De-indent line by shiftwidth spaces
5>>  Indent 5 lines
5==  Re-indent 5 lines

>%   Increase indent of a braced or bracketed block (place cursor on brace first)
=%   Reindent a braced or bracketed block (cursor on brace)
<%   Decrease indent of a braced or bracketed block (cursor on brace)
]p   Paste text, aligning indentation with surroundings

=i{  Re-indent the 'inner block', i.e. the contents of the block
=a{  Re-indent 'a block', i.e. block and containing braces
=2a{ Re-indent '2 blocks', i.e. this block and containing block

>i{  Increase inner block indent
<i{  Decrease inner block indent

You can replace { with } or B, e.g. =iB is a valid block indent command. Take a look at "Indent a Code Block" for a nice example to try these commands out on.

Also, remember that

.    Repeat last command

, so indentation commands can be easily and conveniently repeated.
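
For instance, a minimal illustration of repeating an indent with the dot command:

>>    indent the current line one level
.     repeat: indent it one more level
j.    move down one line and indent that one too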

Re-indenting complete files

Another common situation is requiring indentation to be fixed throughout a source file:

gg=G  Re-indent entire buffer

You can extend this idea to multiple files:

" Re-indent all your c source code:
:args *.c
:argdo normal gg=G
:wall

Or multiple buffers:

" Re-indent all open buffers:
:bufdo normal gg=G
:wall

In Visual Mode

Vjj> Visually mark and then indent 3 lines

In insert mode

These commands apply to the current line:

CTRL-t   insert indent at start of line
CTRL-d   remove indent at start of line
0 CTRL-d remove all indentation from line

Ex commands

These are useful when you want to indent a specific range of lines, without moving your cursor.

:< and :> Given a range, apply indentation e.g.
:4,8>   indent lines 4 to 8, inclusive

Indenting using markers

Another approach is via markers :

ma     Mark top of block to indent as marker 'a'

...move cursor to end location

>'a    Indent from marker 'a' to current location

Variables that govern indentation

You can set these in your .vimrc file .

set expandtab       "Use softtabstop spaces instead of tab characters for indentation
set shiftwidth=4    "Indent by 4 spaces when using >>, <<, == etc.
set softtabstop=4   "Indent by 4 spaces when pressing <TAB>

set autoindent      "Keep indentation from previous line
set smartindent     "Automatically inserts indentation in some cases
set cindent         "Like smartindent, but stricter and more customisable

Vim has intelligent indentation based on filetype. Try adding this to your .vimrc:

if has ("autocmd")
    " File type detection. Indent based on filetype. Recommended.
    filetype plugin indent on
endif


Amit, Aug 10, 2011 at 13:26

Both this answer and the one above it were great. But I +1'd this because it reminded me of the 'dot' operator, which repeats the last command. This is extremely useful when needing to indent an entire block several shiftwidths (or indent levels) without needing to keep pressing >} . Thanks a lot – Amit Aug 10 '11 at 13:26

Wipqozn, Aug 24, 2011 at 16:00

5>> Indent 5 lines : This command indents the fifth line, not 5 lines. Could this be due to my VIM settings, or is your wording incorrect? – Wipqozn Aug 24 '11 at 16:00

ire_and_curses, Aug 24, 2011 at 16:21

@Wipqozn - That's strange. It definitely indents the next five lines for me, tested on Vim 7.2.330. – ire_and_curses Aug 24 '11 at 16:21

Steve, Jan 6, 2012 at 20:13

>42gg Indent from where you are to line 42. – Steve Jan 6 '12 at 20:13

aqn, Mar 6, 2013 at 4:42

Great summary! Also note that the "indent inside block" and "indent all block" commands ( <i{ , >a{ , etc.) also work with parentheses and brackets: >a( , <i] , etc. (And while I'm at it, in addition to < and > , they also work with d , c , y , etc.) – aqn Mar 6 '13 at 4:42

NickSoft, Nov 5, 2013 at 16:19

I didn't find the method I use in the other answers, so I'll share it (I think it's vim only):
  1. Esc to enter normal mode
  2. Move to the first character of the last line you want to indent
  3. ctrl - v to start block select
  4. Move to the first character of the first line you want to indent
  5. shift - i to enter special insert mode
  6. Type as many spaces/tabs as you need for the indent (2 for example)
  7. Press Esc and the spaces will appear in all the selected lines

This is useful when you don't want to change indent/tab settings in vimrc or to remember to change them while editing.

To unindent, I use the same ctrl - v block select to select the spaces and delete them with d .

svec, Oct 25, 2008 at 4:21

Also try this for C-style indentation; do :help = for more info:

={

That will auto-indent the current code block you're in.

Or just:

==

to auto-indent the current line.

underscore_d, Oct 17, 2015 at 19:39

doesn't work for me, just dumps my cursor to the line above the opening brace of 'the current code block i'm in'. – underscore_d Oct 17 '15 at 19:39

John La Rooy, Jul 2, 2013 at 7:24

Using Python a lot, I frequently find myself needing to shift blocks by more than one indent. You can do this by using any of the block selection methods, and then just enter the number of indents you wish to apply right before the >

E.g. V5j3> will indent the selected lines 3 levels - which is 12 spaces if you use 4 spaces for indents

Juan Lanus, Sep 18, 2012 at 14:12

The beauty of vim's UI is that it's consistent. Editing commands are made up of a command and a cursor move, and the cursor moves are always the same.

So, in order to use vim you have to learn to move the cursor and remember a repertoire of commands like, for example, > to indent (and < to "outdent").
Thus, for indenting the lines from the cursor position to the top of the screen you type >H , and >G to indent to the bottom of the file.

If, instead of typing >H, you type dH then you are deleting the same block of lines, cH for replacing it, etc.

Some cursor movements fit better with specific commands. In particular, the % command is handy to indent a whole HTML or XML block.
If the file has syntax highlighting on ( :syn on ), then setting the cursor in the text of a tag (like on the "i" of <div> ) and entering >% will indent up to the closing </div> tag.

This is how vim works: one has to remember only the cursor movements and the commands, and how to mix them.
So my answer to this question would be "go to one end of the block of lines you want to indent, and then type the > command and a movement to the other end of the block" if indent is interpreted as shifting the lines, = if indent is interpreted as in pretty-printing.

aqn, Mar 6, 2013 at 4:38

I would say that vi/vim is mostly consistent. For instance, D does not behave the same as S and Y! :) – aqn Mar 6 '13 at 4:38

Kent Fredric, Oct 25, 2008 at 9:16

Key-Presses for more visual people:
  1. Enter Command Mode:
    Escape
  2. Move around to the start of the area to indent:
    hjkl↑↓←→
  3. Start a block:
    v
  4. Move around to the end of the area to indent:
    hjkl↑↓←→
  5. (Optional) Type the number of indentation levels you want
    0..9
  6. Execute the indentation on the block:
    >
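
Putting the steps together, one possible sequence (the line counts here are arbitrary):

v      start a visual selection at the cursor
jjj    extend the selection three lines down
3>     indent the selected lines by three levels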

Shane Reustle, Mar 10, 2011 at 22:24

This is great, but it uses spaces and not tabs. Any possible way to fix this? – Shane Reustle Mar 10 '11 at 22:24

Kent Fredric, Mar 16, 2011 at 8:33

If it's using spaces instead of tabs, then it's probably because you have indentation set to use spaces. =) – Kent Fredric Mar 16 '11 at 8:33

Kent Fredric, Mar 16, 2011 at 8:36

When the 'expandtab' option is off (this is the default) Vim uses <Tab>s as much as possible to make the indent. ( :help :> ) – Kent Fredric Mar 16 '11 at 8:36

Shane Reustle, Dec 2, 2012 at 3:17

The only tab/space related vim setting I've changed is :set tabstop=3. It's actually inserting this every time I use >>: "<tab><space><space>". Same with indenting a block. Any ideas? – Shane Reustle Dec 2 '12 at 3:17

Kent Fredric, Dec 2, 2012 at 17:08

The three settings you want to look at for "spaces vs tabs" are 1. tabstop 2. shiftwidth 3. expandtab. You probably have "shiftwidth=5 noexpandtab", so a "tab" is 3 spaces, and an indentation level is "5" spaces, so it makes up the 5 with 1 tab, and 2 spaces. – Kent Fredric Dec 2 '12 at 17:08

mda, Jun 4, 2012 at 5:12

For me, the MacVim (Visual) solution was to select with the mouse and press > , after putting the following lines in "~/.vimrc" , since I like spaces instead of tabs:
set expandtab
set tabstop=2
set shiftwidth=2

Also it's useful to be able to call MacVim from the command line (Terminal.app), so I have the following helper directory "~/bin" , where I place a script called "macvim" :

#!/usr/bin/env bash
/usr/bin/open -a /Applications/MacPorts/MacVim.app "$@"

And of course in "~/.bashrc":

export PATH=$PATH:$HOME/bin

Macports messes with "~/.profile" a lot, so the PATH environment variable can get quite long.

jash, Feb 17, 2012 at 15:16

>} or >{ indent from current line up to next paragraph

<} or <{ same un-indent

Eric Kigathi, Jan 4, 2012 at 0:41

A quick way to do this using VISUAL MODE uses the same process as commenting a block of code.

This is useful if you would prefer not to change your shiftwidth or use any set directives and is flexible enough to work with TABS or SPACES or any other character.

  1. Position cursor at the beginning on the block
  2. v to switch to -- VISUAL MODE --
  3. Select the text to be indented
  4. Type : to switch to the prompt
  5. Replacing with 3 leading spaces:

    :'<,'>s/^/   /g

  6. Or replacing with leading tabs:

    :'<,'>s/^/\t/g

  7. Brief Explanation:

    '<,'> - Within the Visually Selected Range

    s/^/   /g - Insert 3 spaces at the beginning of every line within the whole range

    (or)

    s/^/\t/g - Insert Tab at the beginning of every line within the whole range

pankaj ukumar, Nov 11, 2009 at 17:33

do this:
$ vi ~/.vimrc

and add this line:

autocmd FileType cpp setlocal expandtab shiftwidth=4 softtabstop=4 cindent

This is only for cpp files; you can do the same for other file types just by modifying the filetype, as sketched below...
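
The same pattern works for any filetype; for example, a hypothetical Python variant of the same line (the filetype name follows vim's built-in detection):

autocmd FileType python setlocal expandtab shiftwidth=4 softtabstop=4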

SteveO, Nov 10, 2010 at 19:16

I like to mark text for indentation:
  1. go to beginning of line of text then type ma (a is the label from the 'm'ark: it could be any letter)
  2. go to end line of text and type mz (again z could be any letter)
  3. :'a,'z> or :'a,'z< will indent or outdent (is this a word?)
  4. Voila! the text is moved (empty lines remain empty with no spaces)

PS: you can use :'a,'z technique to mark a range for any operation (d,y,s///, etc) where you might use lines, numbers, or %

Paul Tomblin, Oct 25, 2008 at 4:08

As well as the offered solutions, I like to do things a paragraph at a time with >}

aqn, Mar 6, 2013 at 4:47

Yup, and this is why one of my big peeves is white space on an otherwise empty line: it messes up vim's notion of a "paragraph". – aqn Mar 6 '13 at 4:47

Daniel Spiewak, Oct 25, 2008 at 4:00

In addition to the answer already given and accepted, it is also possible to place a marker and then indent everything from the current cursor to the marker. Thus, enter ma where you want the top of your indented block, cursor down as far as you need and then type >'a (note that any valid marker letter can be used in place of a ). This is sometimes easier than 5>> or vjjj> .

user606723, Mar 17, 2011 at 15:31

This is really useful. I am going to have to look up what all works with this. I know d'a and y'a, what else? – user606723 Mar 17 '11 at 15:31

ziggy, Aug 25, 2014 at 14:14

This is very useful as it avoids the need to count how many lines you want to indent. – ziggy Aug 25 '14 at 14:14

[Oct 18, 2018] 'less' command clearing screen upon exit - how to switch it off?

Notable quotes:
"... To prevent less from clearing the screen upon exit, use -X . ..."
Oct 18, 2018 | superuser.com

Wojciech Kaczmarek ,Feb 9, 2010 at 11:21

How to force the less program to not clear the screen upon exit?

I'd like it to behave like the git log command does, leaving its output on the screen after quitting.

Any ideas? I haven't found any suitable less options or environment variables in the manual; I suspect it's set via some environment variable, though.

sleske ,Feb 9, 2010 at 11:59

To prevent less from clearing the screen upon exit, use -X .

From the manpage:

-X or --no-init

Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen.

As to less exiting if the content fits on one screen, that's option -F :

-F or --quit-if-one-screen

Causes less to automatically exit if the entire file can be displayed on the first screen.

-F is not the default though, so it's likely preset somewhere for you. Check the env var LESS .

markpasc ,Oct 11, 2010 at 3:44

This is especially annoying if you know about -F but not -X , as then moving to a system that resets the screen on init will make short files simply not appear, for no apparent reason. This bit me with ack when I tried to take my ACK_PAGER='less -RF' setting to the Mac. Thanks a bunch! – markpasc Oct 11 '10 at 3:44

sleske ,Oct 11, 2010 at 8:45

@markpasc: Thanks for pointing that out. I would not have realized that this combination would cause this effect, but now it's obvious. – sleske Oct 11 '10 at 8:45

Michael Goldshteyn ,May 30, 2013 at 19:28

This is especially useful for the man pager, so that man pages do not disappear as soon as you quit less with the 'q' key. That is, you scroll to the position in a man page that you are interested in only for it to disappear when you quit the less pager in order to use the info. So, I added: export MANPAGER='less -s -X -F' to my .bashrc to keep man page info up on the screen when I quit less, so that I can actually use it instead of having to memorize it. – Michael Goldshteyn May 30 '13 at 19:28

Michael Burr ,Mar 18, 2014 at 22:00

It kinda sucks that you have to decide when you start less how it must behave when you're going to exit. – Michael Burr Mar 18 '14 at 22:00

Derek Douville ,Jul 11, 2014 at 19:11

If you want any of the command-line options to always be default, you can add to your .profile or .bashrc the LESS environment variable. For example:
export LESS="-XF"

will always apply -X -F whenever less is run from that login session.

Sometimes commands are aliased (even by default in certain distributions). To check for this, type

alias

without arguments to see if it got aliased with options that you don't want. To run the actual command in your $PATH instead of an alias, just preface it with a backslash:

\less

To see if a LESS environment variable is set in your environment and affecting behavior:

echo $LESS

dotancohen ,Sep 2, 2014 at 10:12

In fact, I add export LESS="-XFR" so that the colors show through less as well. – dotancohen Sep 2 '14 at 10:12

Giles Thomas ,Jun 10, 2015 at 12:23

Thanks for that! -XF on its own was breaking the output of git diff , and -XFR gets the best of both worlds -- no screen-clearing, but coloured git diff output. – Giles Thomas Jun 10 '15 at 12:23

[Oct 18, 2018] Isn't less just more

Highly recommended!
Oct 18, 2018 | unix.stackexchange.com

Bauna ,Aug 18, 2010 at 3:07

less is a lot more than more ; for instance, you have a lot more functionality:
g: go to the top of the file
G: go to the bottom of the file
/: search forward
?: search backward
n / N: repeat the search forward / backward
Ng: go to line N
F: similar to tail -f ; stop with ctrl+c
-S: chop (rather than wrap) long lines

And I don't remember more ;-)

törzsmókus ,Feb 19 at 13:19

h : everything you don't remember ;) – törzsmókus Feb 19 at 13:19

KeithB ,Aug 18, 2010 at 0:36

There are a couple of things that I do all the time in less that don't work in more (at least the versions on the systems I use). One is using G to go to the end of the file, and g to go to the beginning. This is useful for log files, when you are looking for recent entries at the end of the file. The other is search, where less highlights the match, while more just brings you to the section of the file where the match occurs, but doesn't indicate where it is.

geoffc ,Sep 8, 2010 at 14:11

Less has a lot more functionality.

You can use v to jump into the current $EDITOR. You can convert to tail -f mode with f as well as all the other tips others offered.

Ubuntu still has distinct less/more bins. At least mine does, or the more command is sending different arguments to less.

In any case, to see the difference, find a file that has more rows than you can see at one time in your terminal. Type cat , then the file name. It will just dump the whole file. Type more , then the file name. If on ubuntu, or at least my version (9.10), you'll see the first screen, then --More--(27%) , which means there's more to the file, and you've seen 27% so far. Press space to see the next page. less allows moving line by line, back and forth, plus searching and a whole bunch of other stuff.

Basically, use less . You'll probably never need more for anything. I've used less on huge files and it seems OK. I don't think it does crazy things like load the whole thing into memory ( cough Notepad). Showing line numbers could take a while, though, with huge files.

[Oct 18, 2018] What are the differences between most, more and less

Highly recommended!
Jun 29, 2013 | unix.stackexchange.com

Smith John ,Jun 29, 2013 at 13:16

more

more is an old utility. When the text passed to it is too large to fit on one screen, it pages it. You can scroll down but not up.

Some systems hardlink more to less , providing users with a strange hybrid of the two programs that looks like more and quits at the end of the file like more but has some less features such as backwards scrolling. This is a result of less 's more compatibility mode. You can enable this compatibility mode temporarily with LESS_IS_MORE=1 less ... .

more passes raw escape sequences by default. Escape sequences tell your terminal which colors to display.

less

less was written by a man who was fed up with more 's inability to scroll backwards through a file. He turned less into an open source project and over time, various individuals added new features to it. less is massive now. That's why some small embedded systems have more but not less . For comparison, less 's source is over 27000 lines long. more implementations are generally only a little over 2000 lines long.

In order to get less to pass raw escape sequences, you have to pass it the -r flag. You can also tell it to only pass ANSI escape characters by passing it the -R flag.
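
For example, a common way to exercise -R is piping colored output through less (any command that emits ANSI color sequences will do):

git diff --color=always | less -R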

most

most is supposed to be more than less . It can display multiple files at a time. By default, it truncates long lines instead of wrapping them and provides a left/right scrolling mechanism. most's website has no information about most 's features. Its manpage indicates that it is missing at least a few less features such as log-file writing (you can use tee for this though) and external command running.

By default, most uses strange non-vi-like keybindings. man most | grep '\<vi.?\>' doesn't return anything so it may be impossible to put most into a vi-like mode.

most has the ability to decompress gunzip-compressed files before reading. Its status bar has more information than less 's.

most passes raw escape sequences by default.

tifo ,Oct 14, 2014 at 8:44

Short answer: Just use less and forget about more

Longer version:

more is an old utility. You can't browse backwards with more ; you can use Space to browse page by page, or Enter to go line by line, and that is about it. less is more plus additional features: you can browse page-wise or line-wise both up and down, and search.

Jonathan.Brink ,Aug 9, 2015 at 20:38

If "more" is lacking for you and you know a few vi commands use "less" – Jonathan.Brink Aug 9 '15 at 20:38

Wilko Fokken ,Jan 30, 2016 at 20:31

There is one single application whereby I prefer more to less :

To check my LATEST modified log files (in /var/log/ ), I use ls -AltF | more .

While less clears the screen after exiting with q , more leaves those files and directories listed by ls on the screen, sparing me from memorizing their names for examination.

(Should anybody know a parameter or configuration enabling less to keep its text after exiting, that would render this post obsolete.)

Jan Warchoł ,Mar 9, 2016 at 10:18

The parameter you want is -X (long form: --no-init ). From less ' manpage:

Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen.

Jan Warchoł Mar 9 '16 at 10:18

[Sep 17, 2018] Which HTML5 tag should I use to mark up an author's name - Stack Overflow

Sep 17, 2018 | stackoverflow.com

Which HTML5 tag should I use to mark up an author's name?


Quang Van ,Sep 3, 2011 at 1:03

For example of a blog-post or article.
<article>
<h1>header</h1>
<time>09-02-2011</time>
<author>John</author>
My article....
</article>

The author tag doesn't exist though... So what is the commonly used HTML5 tag for authors? Thanks.

(If there isn't, shouldn't there be one?)

Joseph Marikle ,Sep 3, 2011 at 1:10

<cite> maybe? I don't know lol. :P Doesn't make very much of a difference in style though. – Joseph Marikle Sep 3 '11 at 1:10

jalgames ,Apr 23, 2014 at 12:53

It's not about style. Technically, you can use a <p> to create a heading just by increasing the font size. But search engines won't understand it like that. – jalgames Apr 23 '14 at 12:53

Andreas Rejbrand ,Nov 17, 2014 at 18:58

You are not allowed to use the time element like that. Since dd-mm-yyyy isn't one of the recognised formats, you have to supply a machine-readable version (in one of the recognised formats) in a datetime attribute of the time element. See w3.org/TR/2014/REC-html5-20141028/ – Andreas Rejbrand Nov 17 '14 at 18:58

Dan Dascalescu ,Jun 13, 2016 at 0:23

There's a better answer now than the accepted (robertc's) one. – Dan Dascalescu Jun 13 '16 at 0:23

robertc ,Sep 3, 2011 at 1:13

HTML5 has an author link type :
<a href="http://johnsplace.com" rel="author">John</a>

The weakness here is that it needs to be on some sort of link, but if you have that there's a long discussion of alternatives here . If you don't have a link, then just use a class attribute, that's what it's for:

<span class="author">John</span>

robertc ,Sep 3, 2011 at 1:21

@Quang Yes, I think a link type without an actual link would defeat the purpose of trying to mark it up semantically. – robertc Sep 3 '11 at 1:21

Paul D. Waite ,Sep 5, 2011 at 7:19

@Quang: the rel attribute is there to describe what the link's destination is. If the link has no destination, rel is meaningless. – Paul D. Waite Sep 5 '11 at 7:19

Michael Mior ,Jan 19, 2013 at 14:36

You might also want to look at schema.org for ways of expressing this type of information. – Michael Mior Jan 19 '13 at 14:36

robertc ,Jan 1, 2014 at 15:09

@jdh8 Because John is not the title of a work – robertc Jan 1 '14 at 15:09

Dan Dascalescu ,Jun 13, 2016 at 0:19

This answer just isn't the best any longer. Google no longer supports rel="author" , and as ryanve and Jason mention, the address tag was explicitly design for expressing authorship as well. – Dan Dascalescu Jun 13 '16 at 0:19

ryanve ,Sep 3, 2011 at 18:28

Both rel="author" and <address> are designed for this exact purpose. Both are supported in HTML5. The spec tells us that rel="author" can be used on <link> , <a> , and <area> elements. Google also recommends its usage . Combining use of <address> and rel="author" seems optimal. HTML5 best affords wrapping <article> headline and byline info in a <header> like so:
<article>
    <header>
        <h1 class="headline">Headline</h1>
        <div class="byline">
            <address class="author">By <a rel="author" href="/author/john-doe">John Doe</a></address> 
            on <time pubdate datetime="2011-08-28" title="August 28th, 2011">8/28/11</time>
        </div>
    </header>

    <div class="article-content">
    ...
    </div>
</article>

If you want to add the hcard microformat , then I would do so like this:

<article>
    <header>
        <h1 class="headline">Headline</h1>
        <div class="byline vcard">
            <address class="author">By <a rel="author" class="url fn n" href="/author/john-doe">John Doe</a></address> 
            on <time pubdate datetime="2011-08-28" title="August 28th, 2011">on 8/28/11</time>
        </div>
    </header>

    <div class="article-content">
    ...
    </div>
</article>

aridlehoover ,Jun 24, 2013 at 18:12

Shouldn't "By " precede the <address> tag? It's not actually a part of the address. – aridlehoover Jun 24 '13 at 18:12

ryanve ,Jun 24, 2013 at 20:40

@aridlehoover Either seems correct according to whatwg.org/specs/web-apps/current-work/multipage/ - If outside, use .byline address { display:inline; font-style:inherit } to override the block default in browsers. – ryanve Jun 24 '13 at 20:40

ryanve ,Jun 24, 2013 at 20:46

@aridlehoover I also think that <dl> is viable. See the byline markup in the source of demo.actiontheme.com/sample-page for example. – ryanve Jun 24 '13 at 20:46

Paul Kozlovitch ,Apr 16, 2015 at 11:36

Since the pubdate attribute is gone from both the WHATWG and W3C specs, as Bruce Lawson writes here , I suggest you to remove it from your answer. – Paul Kozlovitch Apr 16 '15 at 11:36

Nathan Hornby ,May 28, 2015 at 12:41

This should really be the accepted answer. – Nathan Hornby May 28 '15 at 12:41

Jason Gennaro ,Sep 3, 2011 at 2:18

According to the HTML5 spec, you probably want address .

The address element represents the contact information for its nearest article or body element ancestor.

The spec further references address in respect to authors here

Under 4.4.4

Author information associated with an article element (q.v. the address element) does not apply to nested article elements.

Under 4.4.9

Contact information for the author or editor of a section belongs in an address element, possibly itself inside a footer.

All of which makes it seems that address is the best tag for this info.

That said, you could also give your address a rel or class of author .

<address class="author">Jason Gennaro</address>

Read more: http://dev.w3.org/html5/spec/sections.html#the-address-element

Quang Van ,Feb 29, 2012 at 9:24

Thanks Jason, do you know what "q.v." means? Under >4.4.4 >Author information associated with an article element (q.v. the address element) does not apply to nested article elements. – Quang Van Feb 29 '12 at 9:24

pageman ,Feb 10, 2014 at 5:44

@QuangVan - (wait, your initials are ... q.v. hmm) - q.v. means "quod vide" or "on this (matter) go see" - son on the matter of "q.v." go see english.stackexchange.com/questions/25252/ (q.v.) haha – pageman Feb 10 '14 at 5:44

Jason Gennaro ,Feb 10, 2014 at 13:19

@pageman well done rocking the Latin! – Jason Gennaro Feb 10 '14 at 13:19

pageman ,Feb 11, 2014 at 1:10

@JasonGennaro haha nanos gigantum humeris insidentes! – pageman Feb 11 '14 at 1:10

Jason Gennaro ,Feb 11, 2014 at 13:16

@pageman aren't we all. – Jason Gennaro Feb 11 '14 at 13:16

remyActual ,Sep 7, 2015 at 0:49

Google support for rel="author" is deprecated :

"Authorship markup is no longer supported in web search."

Use a Description List (Definition List in HTML 4.01) element.

From the HTML5 spec :

The dl element represents an association list consisting of zero or more name-value groups (a description list). A name-value group consists of one or more names (dt elements) followed by one or more values (dd elements), ignoring any nodes other than dt and dd elements. Within a single dl element, there should not be more than one dt element for each name.

Name-value groups may be terms and definitions, metadata topics and values, questions and answers, or any other groups of name-value data.

Authorship and other article meta information fits perfectly into this key:value pair structure:

An opinionated example:

<article>
  <header>
    <h1>Article Title</h1>
    <p class="subtitle">Subtitle</p>
    <dl class="dateline">
      <dt>Author:</dt>
      <dd>Remy Schrader</dd>
      <dt>All posts by author:</dt>
      <dd><a href="http://www.blog.net/authors/remy-schrader/">Link</a></dd>
      <dt>Contact:</dt>
      <dd><a href="mailto:remy@blog.net"><img src="email-sprite.png"></a></dd>
    </dl>
  </header>
  <section class="content">
    <!-- article content goes here -->
  </section>
</article>

As you can see, when using the <dl> element for article meta information we are free to wrap <address> , <a> and even <img> tags in <dt> and/or <dd> tags, according to the nature of the content and its intended function.
The <dl> , <dt> and <dd> tags are free to do their job -- semantically -- conveying information about the parent <article> ; <a> , <img> and <address> are similarly free to do their job -- again, semantically -- conveying information regarding where to find related content, non-verbal visual presentation, and contact details for authoritative parties, respectively.

steveax ,Sep 3, 2011 at 1:39

How about microdata :
<article>
<h1>header</h1>
<time>09-02-2011</time>
<div id="john" itemscope itemtype="http://microformats.org/profile/hcard">
 <h2 itemprop="fn">
  <span itemprop="n" itemscope>
   <span itemprop="given-name">John</span>
  </span>
 </h2>
</div>
My article....
</article>

Raphael ,Feb 26, 2016 at 11:29

You can use
<meta name="author" content="John Doe">

in the header as per the HTML5 specification .


If you were including contact details for the author, then the <address> tag would be appropriate.

But if it's literally just the author's name, there isn't a specific tag for that. HTML doesn't include much related to people.

[Aug 25, 2018] What are the various link types in Windows? How do I create them?

Notable quotes:
"... SeCreateSymbolicLinkPrivilege ..."
"... shell links ..."
"... directory junctions ..."
"... indirectly accessed ..."
"... junction point ..."
"... on the local drives ..."
"... will be gone ..."
"... volume mount point ..."
"... Why should you even be viewing anything but your personal files on a daily basis? ..."
"... and that folder ..."
"... one-to-many relationship ..."
"... My brain explodes... ..."
Aug 25, 2018 | superuser.com

Cookie ,Oct 18, 2011 at 17:43

Is it possible to link two files or folders without having a different extension under Windows?

I'm looking for functionality equivalent to the soft and hard links in Unix.

barlop ,Jun 11, 2016 at 17:30

this is related: superuser.com/questions/343074/ – barlop Jun 11 '16 at 17:30

barlop ,Jun 11, 2016 at 21:07

great article here: cects.com/ - watch out for junctions pre-Win7 – barlop Jun 11 '16 at 21:07

grawity ,Oct 18, 2011 at 18:00

Please note that the only unfortunate difference is that you need Administrator rights to create symbolic links; i.e., you need an elevated prompt. (A workaround is that the SeCreateSymbolicLinkPrivilege can be granted to normal Users via secpol.msc .)

Note in terminology: Windows shortcuts are not called "symlinks"; they are shell links , as they are simply files that the Windows Explorer shell treats specially.


Symlinks: How do I create them on NTFS file system?

Windows Vista and later versions support Unix-style symlinks on NTFS filesystems. Remember that they also follow the same path resolution - relative links are created relative to the link's location, not to the current directory. People often forget that. They can also be implemented using an absolute path; e.g. c:\windows\system32 instead of \system32 (which resolves relative to the link's location).
Symlinks are implemented using reparse points and generally have the same behavior as Unix symlinks.

For files you can execute:

mklink linkname targetpath

For directories you can execute:

mklink /d linkname targetpath

Hardlinks: How do I create them on NTFS file systems?

All versions of Windows NT support Unix-style hard links on NTFS filesystems. Using mklink on Vista and up:

mklink /h linkname targetpath

For Windows 2000 and XP, use fsutil .

fsutil hardlink create linkname targetpath

These also work the same way as Unix hard links – multiple file table entries point to the same inode .


Directory Junctions: How do I create them on NTFS file systems?

Windows 2000 and later support directory junctions on NTFS filesystems. They are different from symlinks in that they are always absolute and only point to directories, never to files.

mklink /j linkname targetpath

On versions which do not have mklink , download junction from Sysinternals:

junction linkname targetpath

Junctions are implemented using reparse points .


How can I mount a volume using a reparse point in Windows?

For completeness, on Windows 2000 and later , reparse points can also point to volumes , resulting in persistent Unix-style disk mounts :

mountvol mountpoint \\?\Volume{volumeguid}

Volume GUIDs are listed by mountvol ; they are static but only within the same machine.


Is there a way to do this in Windows Explorer?

Yes, you can use the shell extension Link Shell Extension which makes it very easy to make the links that have been described above. You can find the downloads at the bottom of the page .

The NTFS file system implemented in NT4, Windows 2000, Windows XP, Windows XP64, and Windows 7 supports a facility known as hard links (referred to herein as Hardlinks ). Hardlinks provide the ability to keep a single copy of a file yet have it appear in multiple folders (directories). They can be created with the POSIX command ln included in the Windows Resource Kit, the fsutil command utility included in Windows XP, or my command line ln.exe utility.

The extension allows the user to select one or many files or folders, then, using the mouse, complete the creation of the required Links - Hardlinks, Junctions or Symbolic Links, or in the case of folders, to create Clones consisting of Hard or Symbolic Links. LSE is supported on all Windows versions that support NTFS version 5.0 or later, including Windows XP64 and Windows 7. Hardlinks, Junctions and Symbolic Links are NOT supported on FAT file systems, nor is the Cloning and Smart Copy process supported on FAT file systems.

The source can simply be picked using a right-click menu.

And depending on what you picked, you right-click on a destination folder and get a menu with options.

This makes it very easy to create links. For an extensive guide, read the LSE documentation .

Downloads can be found at the bottom of their page .

Relevant MSDN URLs:

Tom Wijsman ,Dec 31, 2011 at 3:42

In this answer I will attempt to outline what the different types of links in directory management are, why they are useful, and when they could be used. When trying to achieve a certain organization on your file volumes, knowing the various types as well as how to create them is valuable knowledge.

For information on how a certain link can be made, refer to grawity 's answer .

What is a link?

A link is a relationship between two entities; in the context of directory management, a link can be seen as a relationship between the following two entities:

  1. Directory Table

    This table keeps track of the files and folders that reside in a specific folder.

    A directory table is a special type of file that represents a directory (also known as a folder). Each file or directory stored within it is represented by a 32-byte entry in the table. Each entry records the name, extension, attributes (archive, directory, hidden, read-only, system and volume), the date and time of last modification, the address of the first cluster of the file/directory's data and finally the size of the file/directory.

  2. Data Cluster

    More specifically, the first cluster of the file or directory.

    A cluster is the smallest logical amount of disk space that can be allocated to hold a file.

The special thing about this relationship is that it allows one to have only one data cluster but many links to that data cluster; this allows us to show data as being present in multiple locations. However, there are multiple ways to do this and each method has its own effects.

To see where this roots from, lets go back to the past...

What is a shell link and why is in not always sufficient?

Although it might not sound familiar, we all know this one! File shortcuts are undoubtedly the most frequently used way of linking files. These were found in some of the early versions of Windows 9x and have been there for a long while.

These allow you to quickly create a shortcut to any file or folder; they are more specifically made to store extra information alongside the link itself, like, for example, the working directory the file is executed in, the arguments to provide to the program, and options such as whether to maximize the program's window.

The downside to this approach of linking is exactly that: the extra information requires this type of link to have a data cluster of its own to contain that file. The problem then is not necessarily that it takes disk space, but rather that the link is indirectly accessed : the data cluster first has to be requested before we get to the actual link. If the path referred to in the actual link is gone, the shell link will still exist.

If you were to operate on the file being referred to, you would actually first have to figure out in which directory the file is. You can't simply open the link in an editor as you would then be editing the .lnk file rather than the file being linked to. This locks out a lot of possible use cases for shell links.

How does a junction point link try to solve these problems?

An NTFS junction point allows one to create a symbolic link to a directory on the local drives , in such a way that it behaves just like a normal directory. So, you have one directory of files stored on your disk but can access it from multiple locations.

When removing the junction point, the original directory remains. When removing the original directory, the junction point remains. It is very costly to enumerate the disk to check for junction points that have to be deleted. This is a downside as a result of its implementation.

The NTFS junction point is implemented using NTFS reparse points , which are NTFS file system objects introduced with Windows 2000.

An NTFS reparse point is a type of NTFS file system object. Reparse points provide a way to extend the NTFS filesystem by adding extra information to the directory entry, so a file system filter can interpret how the operating system will treat the data. This allows the creation of junction points, NTFS symbolic links and volume mount points, and is a key feature to Windows 2000's Hierarchical Storage System.

That's right, the invention of the reparse point allows us to do more sophisticated ways of linking.

The NTFS junction point is a soft link , which means that it just links to the name of the file. This means that whenever the link is deleted the original data stays intact ; but whenever the original data is deleted, that data will be gone and the junction point will be left dangling.

Can I also soft link files? Are there symbolic links?

Yes, when Windows Vista came around they decided to extend the functionality of the NTFS file system object(s) by providing the NTFS symbolic link , which is a soft link that acts in the same way as the NTFS junction point but can be applied to both files and directories.

They again share the same deletion behavior; in some use cases this can be a pain for files, as you don't want to have a useless copy of a file hanging around. This is why the notion of hard links has also been implemented.

What is a hard link and how does it behave as opposed to soft links?

Hard links are not NTFS file system objects; they are instead a link to a file (in detail, they refer to the MFT entry, which stores extra information about the actual file). The MFT entry has a field that remembers the number of times the file is hard linked. The data will still be accessible as long as at least one link that points to it still exists.

So, the data no longer depends on a single MFT entry to exist . As long as there is a hard link around, the data will survive. This prevents accidental deletion in cases where one does not want to remember where the original file was.

You could for example make a folder with "movies I still have to watch" as well as a folder "movies that I take on vacation" as well as a folder "favorite movies". Movies that are none of these will be properly deleted while movies that are any of these will continue to exist even when you have watched a movie.

What is a volume mount point link for?

Some IT or business people might dislike having to remember or type the different drive letters their system has. What does M: really mean anyway? Was it Music? Movies? Models? Maps?

Microsoft has made efforts over the years to migrate users away from working in drive C: toward working in their user folder . I could undoubtedly say that the users with UAC and permission problems are those that do not follow these guidelines, but doesn't that make them wonder:

Why should you even be viewing anything but your personal files on a daily basis?

Volume mount points are the professional IT way of not being limited by drive letters as well as having a directory structure that makes sense for them, but...

My files are in different places, can I use links to get them together?

In Windows 7, Libraries were introduced exactly for this purpose. Done with music files that are located in this folder, and that folder, and that folder . From a lower-level point of view, a library can be viewed as multiple links. They are again implemented as a file system object that can contain multiple references. It is in essence a one-to-many relationship ...

My brain explodes... Can you summarize when to use them?

Medinoc ,Sep 29, 2014 at 16:28

Libraries are shell-level like shortcut links, right? – Medinoc Sep 29 '14 at 16:28

Tom Wijsman ,Sep 29, 2014 at 21:53

@Medinoc: No, they aggregate the content of multiple locations. – Tom Wijsman Sep 29 '14 at 21:53

Medinoc ,Sep 30, 2014 at 20:52

But do they do so at filesystem level in such way that, say, cmd.exe and dir can list the aggregated content (in which case, where in the file system are they, I can't find it), or do they only aggregate at shell level, where only Windows Explorer and file dialogs can show them? I was under the impression it was the latter, but your "No" challenges this unless I wrote my question wrong (I meant to say "Libraries are shell-level like shortcut links are , right?" ). – Medinoc Sep 30 '14 at 20:52

Tom Wijsman ,Oct 1, 2014 at 9:32

@Medinoc: They are files at C:\Users\{User}\AppData\Roaming\Microsoft\Windows\Libraries . – Tom Wijsman Oct 1 '14 at 9:32

Tom Wijsman ,Apr 25, 2015 at 10:23

@Pacerier: Windows uses the old location system, where you can for example move a music folder around from its properties. Libraries are a new addition, which the OS itself barely uses as a result. Therefore I doubt if anything would break; as they are intended solely for display purposes, ... – Tom Wijsman Apr 25 '15 at 10:23

GeminiDomino ,Oct 18, 2011 at 17:49

If you're on Windows Vista or later, and have admin rights, you might check out the mklink command (it's a command line tool). I'm not sure how symlink-y it actually is since windows gives it the little arrow icon it puts on shortcuts, but a quick notepad++ test on a text file suggests it might work for what you're looking for.

You can run mklink with no arguments for a quick usage guide.

I hope that helps.

jcrawfordor ,Oct 18, 2011 at 17:58

mklink uses NTFS junction points (I believe that's what they're called) to more or less perfectly duplicate Unix-style linking. Windows can tell that it's a junction, though, so it'll give it the traditional arrow icon. iirc you can remove this with some registry fiddling, but I don't remember where. – jcrawfordor Oct 18 '11 at 17:58

grawity ,Oct 18, 2011 at 18:01

@jcrawfordor: The disk structures are "reparse points" . Junctions and symlinks are two different types of reparse points; volume mountpoints are third. – grawity Oct 18 '11 at 18:01

grawity ,Oct 18, 2011 at 18:08

And yes, @Gemini, mklink -made symlinks were specifically implemented to work just like Unix ones . – grawity Oct 18 '11 at 18:08

GeminiDomino ,Oct 18, 2011 at 18:13

Thanks grawity, for the confirmation. I've never played around with them much, so I just wanted to include disclaim.h ;) – GeminiDomino Oct 18 '11 at 18:13

barlop ,Jun 11, 2016 at 17:32

this article has some distinctions

one important distinction is that in a sense, junctions pre win7 were a bit unsafe, in that deleting them would delete the target directory.

http://cects.com/overview-to-understanding-hard-links-junction-points-and-symbolic-links-in-windows/

A Junction Point should never be removed in Win2k, Win2003 and WinXP with Explorer, the del or del /s commands, or with any utility that recursively walks directories since these will delete the target directory and all its subdirectories. Instead, use the rmdir command, the linkd utility, or fsutil (if using WinXP or above) or a third party tool to remove the junction point without affecting the target. In Vista/Win7, it's safe to delete Junction Points with Explorer or with the rmdir and del commands.

[Aug 07, 2018] May I sort the -etc-group and -etc-passwd files

Aug 07, 2018 | unix.stackexchange.com

Ned64 ,Feb 18 at 13:52

My /etc/group has grown by adding new users as well as installing programs that have added their own user and/or group. The same is true for /etc/passwd . Editing has now become a little cumbersome due to the lack of structure.

May I sort these files (e.g. by numerical id or alphabetical by name) without negative effect on the system and/or package managers?

I would guess that it does not matter, but just to be sure I would like to get a second opinion. Maybe root needs to be the first line or within the first 1k lines or something?

The same goes for /etc/*shadow .

Kevin ,Feb 19 at 23:50

"Editing has now become a little cumbersome due to the lack of structure" Why are you editing those files by hand? – Kevin Feb 19 at 23:50

Barmar ,Feb 21 at 20:51

How does sorting the file help with editing? Is it because you want to group related accounts together, and then do similar changes in a range of rows? But will related account be adjacent if you sort by uid or name? – Barmar Feb 21 at 20:51

Ned64 ,Mar 13 at 23:15

@Barmar It has helped mainly because user accounts are grouped by ranges and separate from system accounts (when sorting by UID). Therefore it is easier e.g. to spot the correct line to examine or change when editing with vi . – Ned64 Mar 13 at 23:15

ErikF ,Feb 18 at 14:12

You should be OK doing this : in fact, according to the article and reading the documentation, you can sort /etc/passwd and /etc/group by UID/GID with pwck -s and grpck -s , respectively.
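
A minimal sketch of that procedure (run as root; the backup step is my addition, not part of the answer):

cp -a /etc/passwd /etc/passwd.bak    # back up before touching account files
cp -a /etc/group /etc/group.bak
pwck -s                              # sort /etc/passwd and /etc/shadow by UID
grpck -s                             # sort /etc/group and /etc/gshadow by GID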

hvd ,Feb 18 at 22:59

@Menasheh This site's colours don't make them stand out as much as on other sites, but "OK doing this" in this answer is a hyperlink. – hvd Feb 18 at 22:59

mickeyf ,Feb 19 at 14:05

OK, fine, but... In general, are there valid reasons to manually edit /etc/passwd and similar files? Isn't it considered better to access these via the tools that are designed to create and modify them? – mickeyf Feb 19 at 14:05

ErikF ,Feb 20 at 21:21

@mickeyf I've seen people manually edit /etc/passwd when they're making batch changes, like changing the GECOS field for all users due to moving/restructuring (global room or phone number changes, etc.) It's not common anymore, but there are specific reasons that crop up from time to time. – ErikF Feb 20 at 21:21

hvd ,Feb 18 at 17:28

Although ErikF is correct that this should generally be okay, I do want to point out one potential issue:

You're allowed to map different usernames to the same UID. If you make use of this, tools that map a UID back to a username will generally pick the first username they find for that UID in /etc/passwd . Sorting may cause a different username to appear first. For display purposes (e.g. ls -l output), either username should work, but it's possible that you've configured some program to accept requests from username A, where it will deny those requests if it sees them coming from username B, even if A and B are the same user.
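
To illustrate the point, two hypothetical /etc/passwd entries sharing UID 1000:

admin:x:1000:1000::/home/alice:/bin/bash
alice:x:1000:1000::/home/alice:/bin/bash

With this ordering, tools that map UID 1000 back to a name (e.g. ls -l ) will generally report admin ; sort the lines the other way around and the same files suddenly appear to be owned by alice .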

Rui F Ribeiro ,Feb 19 at 17:53

Having root on the first line has long been a de facto "standard" and is very convenient if you ever have to fix root's shell or delete the password when dealing with problems or recovering systems.

Likewise I prefer to have daemons/utils users in the middle and standard users at the end of both passwd and shadow .

hvd's answer also makes a very good point about disturbing the order of users, especially in systems with many users maintained by hand.

If you somewhat manage to sort the files, for instance, only for standard users, it would be more sensible than changing the order of all users, imo.

Barmar ,Feb 21 at 20:13

If you sort numerically by UID, you should get your preferred order. Root is always 0 , and daemons conventionally have UIDs under 100. – Barmar Feb 21 at 20:13

Rui F Ribeiro ,Feb 21 at 20:16

@Barmar If sorting by UID and not by name, indeed, thanks for remembering. – Rui F Ribeiro Feb 21 at 20:16

[Jul 30, 2018] Non-root user getting root access after running sudo vi -etc-hosts

Notable quotes:
"... as the original user ..."
Jul 30, 2018 | unix.stackexchange.com

Gilles, Mar 10, 2018 at 10:24

If sudo vi /etc/hosts is successful, it means that the system administrator has allowed the user to run vi /etc/hosts as root. That's the whole point of sudo: it lets the system administrator authorize certain users to run certain commands with extra privileges.

Giving a user the permission to run vi gives them the permission to run any vi command, including :sh to run a shell and :w to overwrite any file on the system. A rule allowing the user to run only vi /etc/hosts does not make any sense, since it still allows the user to run arbitrary commands.

There is no "hacking" involved. The breach of security comes from a misconfiguration, not from a hole in the security model. Sudo does not particularly try to prevent against misconfiguration. Its documentation is well-known to be difficult to understand; if in doubt, ask around and don't try to do things that are too complicated.

It is in general a hard problem to give a user a specific privilege without giving them more than intended. A bulldozer approach like giving them the right to run an interactive program such as vi is bound to fail. A general piece of advice is to give the minimum privileges necessary to accomplish the task. If you want to allow a user to modify one file, don't give them the permission to run an editor. Instead, either give them write permission on the file (e.g. through group ownership or an ACL), or let them use sudoedit , which copies the file, runs the editor as the original user , and then copies the modified file back into place (see the sketch below).

Note that allowing a user to edit /etc/hosts may have an impact on your security infrastructure: if there's any place where you rely on a host name corresponding to a specific machine, then that user will be able to point it to a different machine. Consider that it is probably unnecessary anyway .
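
As a hedged illustration of the sudoedit approach (the username is hypothetical; always edit sudoers with visudo):

# /etc/sudoers.d/hosts-edit -- allow alice to edit /etc/hosts without running an editor as root
alice ALL = sudoedit /etc/hosts

The user then runs sudoedit /etc/hosts ; the editor itself runs with alice's own privileges, and only the copy-back step is privileged.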

[Jul 05, 2018] Can rsync resume after being interrupted

Notable quotes:
"... as if it were successfully transferred ..."
Jul 05, 2018 | unix.stackexchange.com

Tim ,Sep 15, 2012 at 23:36

I used rsync to copy a large number of files, but my OS (Ubuntu) restarted unexpectedly.

After reboot, I ran rsync again, but from the output on the terminal I found that rsync was still copying some files already copied before. But I heard that rsync is able to find differences between source and destination, and therefore just copy the differences. So I wonder, in my case, if rsync can resume what was left from last time?

Gilles ,Sep 16, 2012 at 1:56

Yes, rsync won't copy again files that it's already copied. There are a few edge cases where its detection can fail. Did it copy all the already-copied files? What options did you use? What were the source and target filesystems? If you run rsync again after it's copied everything, does it copy again? – Gilles Sep 16 '12 at 1:56

Tim ,Sep 16, 2012 at 2:30

@Gilles: Thanks! (1) I think I saw rsync copy the same files again, from its output on the terminal. (2) Options are the same as in my other post, i.e. sudo rsync -azvv /home/path/folder1/ /home/path/folder2 . (3) Source and target are both NTFS, but the source is an external HDD and the target is an internal HDD. (4) It is now running and hasn't finished yet. – Tim Sep 16 '12 at 2:30

jwbensley ,Sep 16, 2012 at 16:15

There is also the --partial flag to resume partially transferred files (useful for large files) – jwbensley Sep 16 '12 at 16:15

Tim ,Sep 19, 2012 at 5:20

@Gilles: What are some "edge cases where its detection can fail"? – Tim Sep 19 '12 at 5:20

Gilles ,Sep 19, 2012 at 9:25

@Tim Off the top of my head, there's at least clock skew, and differences in time resolution (a common issue with FAT filesystems which store times in 2-second increments, the --modify-window option helps with that). – Gilles Sep 19 '12 at 9:25

DanielSmedegaardBuus ,Nov 1, 2014 at 12:32

First of all, regarding the "resume" part of your question, --partial just tells the receiving end to keep partially transferred files if the sending end disappears as though they were completely transferred.

While transferring files, they are temporarily saved as hidden files in their target folders (e.g. .TheFileYouAreSending.lRWzDC ), or a specifically chosen folder if you set the --partial-dir switch. When a transfer fails and --partial is not set, this hidden file will remain in the target folder under this cryptic name, but if --partial is set, the file will be renamed to the actual target file name (in this case, TheFileYouAreSending ), even though the file isn't complete. The point is that you can later complete the transfer by running rsync again with either --append or --append-verify .

So, --partial doesn't itself resume a failed or cancelled transfer. To resume it, you'll have to use one of the aforementioned flags on the next run. So, if you need to make sure that the target won't ever contain files that appear to be fine but are actually incomplete, you shouldn't use --partial . Conversely, if you want to make sure you never leave behind stray failed files that are hidden in the target directory, and you know you'll be able to complete the transfer later, --partial is there to help you.

With regards to the --append switch mentioned above, this is the actual "resume" switch, and you can use it whether or not you're also using --partial . Actually, when you're using --append , no temporary files are ever created. Files are written directly to their targets. In this respect, --append gives the same result as --partial on a failed transfer, but without creating those hidden temporary files.

So, to sum up, if you're moving large files and you want the option to resume a cancelled or failed rsync operation from the exact point that rsync stopped, you need to use the --append or --append-verify switch on the next attempt.
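
A minimal sketch of such a resumable transfer (the paths and host are hypothetical):

# first attempt; keep partial files around if the transfer is interrupted
rsync -av --partial /data/bigfiles/ user@backup:/srv/bigfiles/

# after an interruption, resume by appending to the partially transferred files
rsync -av --append-verify /data/bigfiles/ user@backup:/srv/bigfiles/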

As @Alex points out below, since version 3.0.0 rsync now has a new option, --append-verify , which behaves like --append did before that switch existed. You probably always want the behaviour of --append-verify , so check your version with rsync --version . If you're on a Mac and not using rsync from homebrew , you'll (at least up to and including El Capitan) have an older version and need to use --append rather than --append-verify . Why they didn't keep the behaviour on --append and instead named the newcomer --append-no-verify is a bit puzzling. Either way, --append on rsync before version 3 is the same as --append-verify on the newer versions.

--append-verify isn't dangerous: It will always read and compare the data on both ends and not just assume they're equal. It does this using checksums, so it's easy on the network, but it does require reading the shared portion of the data on both ends of the wire before it can actually resume the transfer by appending to the target.
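
As a minimal sketch of that resume workflow (file names and hosts are made up; assumes rsync 3.0.0+ as discussed above):

# first attempt; keep the partial file if the transfer is interrupted
rsync -av --partial /data/huge.iso user@host:/backup/
# ...transfer dies partway through...
# resume by appending to the partial target, verifying the existing data via checksums
rsync -av --append-verify /data/huge.iso user@host:/backup/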

Second of all, you said that you "heard that rsync is able to find differences between source and destination, and therefore to just copy the differences."

That's correct, and it's called delta transfer, but it's a different thing. To enable this, you add the -c , or --checksum switch. Once this switch is used, rsync will examine files that exist on both ends of the wire. It does this in chunks, compares the checksums on both ends, and if they differ, it transfers just the differing parts of the file. But, as @Jonathan points out below, the comparison is only done when files are of the same size on both ends -- different sizes will cause rsync to upload the entire file, overwriting the target with the same name.

This requires a bit of computation on both ends initially, but can be extremely efficient at reducing network load if, for example, you're frequently backing up very large fixed-size files that often contain minor changes. Examples that come to mind are virtual hard drive image files used in virtual machines or iSCSI targets.

It is notable that if you use --checksum to transfer a batch of files that are completely new to the target system, rsync will still calculate their checksums on the source system before transferring them. Why, I do not know :)
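
A sketch of a delta transfer as described here (hypothetical paths; note @Jonathan's caveat in the comments below about what --checksum actually governs):

# compare chunk checksums of same-sized files on both ends
# and send only the differing parts of each file
rsync -av --checksum /vm/disk.img user@host:/images/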

So, in short:

If you're often using rsync to just "move stuff from A to B" and want the option to cancel that operation and later resume it, don't use --checksum , but do use --append-verify .

If you're using rsync to back up stuff often, using --append-verify probably won't do much for you, unless you're in the habit of sending large files that continuously grow in size but are rarely modified once written. As a bonus tip, if you're backing up to storage that supports snapshotting such as btrfs or zfs , adding the --inplace switch will help you reduce snapshot sizes since changed files aren't recreated but rather the changed blocks are written directly over the old ones. This switch is also useful if you want to avoid rsync creating copies of files on the target when only minor changes have occurred.
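
For example, a hypothetical backup of a large, mostly-stable file to a snapshotting filesystem:

# overwrite changed blocks in place instead of writing a new copy,
# so btrfs/zfs snapshots only grow by the changed blocks
rsync -av --inplace /vm/disk.img user@host:/tank/backups/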

When using --append-verify , rsync will behave just like it always does on all files that are the same size. If they differ in modification or other timestamps, it will overwrite the target with the source without scrutinizing those files further. --checksum will compare the contents (checksums) of every file pair of identical name and size.

UPDATED 2015-09-01 Changed to reflect points made by @Alex (thanks!)

UPDATED 2017-07-14 Changed to reflect points made by @Jonathan (thanks!)

Alex ,Aug 28, 2015 at 3:49

According to the documentation --append does not check the data, but --append-verify does. Also, as @gaoithe points out in a comment below, the documentation claims --partial does resume from previous files. – Alex Aug 28 '15 at 3:49

DanielSmedegaardBuus ,Sep 1, 2015 at 13:29

Thank you @Alex for the updates. Indeed, since 3.0.0, --append no longer compares the source to the target file before appending. Quite important, really! --partial does not itself resume a failed file transfer, but rather leaves it there for a subsequent --append(-verify) to append to it. My answer was clearly misrepresenting this fact; I'll update it to include these points! Thanks a lot :) – DanielSmedegaardBuus Sep 1 '15 at 13:29

Cees Timmerman ,Sep 15, 2015 at 17:21

This says --partial is enough. – Cees Timmerman Sep 15 '15 at 17:21

DanielSmedegaardBuus ,May 10, 2016 at 19:31

@CMCDragonkai Actually, check out Alexander's answer below about --partial-dir -- looks like it's the perfect bullet for this. I may have missed something entirely ;) – DanielSmedegaardBuus May 10 '16 at 19:31

Jonathan Y. ,Jun 14, 2017 at 5:48

What's your level of confidence in the described behavior of --checksum ? According to the man it has more to do with deciding which files to flag for transfer than with delta-transfer (which, presumably, is rsync 's default behavior). – Jonathan Y. Jun 14 '17 at 5:48

Alexander O'Mara ,Jan 3, 2016 at 6:34

TL;DR:

Just specify a partial directory as the rsync man page recommends:

--partial-dir=.rsync-partial

Longer explanation:

There is actually a built-in feature for doing this using the --partial-dir option, which has several advantages over the --partial and --append-verify / --append alternative.

Excerpt from the rsync man pages:
--partial-dir=DIR
      A  better way to keep partial files than the --partial option is
      to specify a DIR that will be used  to  hold  the  partial  data
      (instead  of  writing  it  out to the destination file).  On the
      next transfer, rsync will use a file found in this dir  as  data
      to  speed  up  the resumption of the transfer and then delete it
      after it has served its purpose.

      Note that if --whole-file is specified (or  implied),  any  par-
      tial-dir  file  that  is  found for a file that is being updated
      will simply be removed (since rsync  is  sending  files  without
      using rsync's delta-transfer algorithm).

      Rsync will create the DIR if it is missing (just the last dir --
      not the whole path).  This makes it easy to use a relative  path
      (such  as  "--partial-dir=.rsync-partial")  to have rsync create
      the partial-directory in the destination file's  directory  when
      needed,  and  then  remove  it  again  when  the partial file is
      deleted.

      If the partial-dir value is not an absolute path, rsync will add
      an  exclude rule at the end of all your existing excludes.  This
      will prevent the sending of any partial-dir files that may exist
      on the sending side, and will also prevent the untimely deletion
      of partial-dir items on the receiving  side.   An  example:  the
      above  --partial-dir  option would add the equivalent of "-f '-p
      .rsync-partial/'" at the end of any other filter rules.

By default, rsync uses a random temporary file name which gets deleted when a transfer fails. As mentioned, using --partial you can make rsync keep the incomplete file as if it were successfully transferred , so that it is possible to later append to it using the --append-verify / --append options. However there are several reasons this is sub-optimal.

  1. Your backup files may not be complete, and without checking the remote file (which must still be unaltered), there's no way to know.
  2. If you are attempting to use --backup and --backup-dir , you've just added a new version of this file, one that never even existed before, to your version history.

However, if we use --partial-dir , rsync will preserve the temporary partial file and resume the transfer using that partial file the next time you run it, and we do not suffer from the above issues.
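
Putting it together, a sketch of the recommended invocation (paths are hypothetical):

# keep partial data in .rsync-partial inside the destination directory;
# rsync reuses it automatically on the next run and removes it afterwards
rsync -av --partial-dir=.rsync-partial /data/src/ user@host:/data/dest/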

trs ,Apr 7, 2017 at 0:00

This is really the answer. Hey everyone, LOOK HERE!! – trs Apr 7 '17 at 0:00

JKOlaf ,Jun 28, 2017 at 0:11

I agree this is a much more concise answer to the question. The TL;DR: is perfect, and those that need more can read the longer bit. Strong work. – JKOlaf Jun 28 '17 at 0:11

N2O ,Jul 29, 2014 at 18:24

You may want to add the -P option to your command.

From the man page:

--partial By default, rsync will delete any partially transferred file if the transfer
         is interrupted. In some circumstances it is more desirable to keep partially
         transferred files. Using the --partial option tells rsync to keep the partial
         file which should make a subsequent transfer of the rest of the file much faster.

  -P     The -P option is equivalent to --partial --progress.   Its  pur-
         pose  is to make it much easier to specify these two options for
         a long transfer that may be interrupted.

So instead of:

sudo rsync -azvv /home/path/folder1/ /home/path/folder2

Do:

sudo rsync -azvvP /home/path/folder1/ /home/path/folder2

Of course, if you don't want the progress updates, you can just use --partial , i.e.:

sudo rsync --partial -azvv /home/path/folder1/ /home/path/folder2

gaoithe ,Aug 19, 2015 at 11:29

@Flimm not quite correct. If there is an interruption (network or receiving side) then when using --partial the partial file is kept AND it is used when rsync is resumed. From the manpage: "Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster." – gaoithe Aug 19 '15 at 11:29

DanielSmedegaardBuus ,Sep 1, 2015 at 14:11

@Flimm and @gaoithe, my answer wasn't quite accurate, and definitely not up-to-date. I've updated it to reflect version 3+ of rsync . It's important to stress, though, that --partial does not itself resume a failed transfer. See my answer for details :) – DanielSmedegaardBuus Sep 1 '15 at 14:11

guettli ,Nov 18, 2015 at 12:28

@DanielSmedegaardBuus I tried it and the -P is enough in my case. Versions: client has 3.1.0 and server has 3.1.1. I interrupted the transfer of a single large file with ctrl-c. I guess I am missing something. – guettli Nov 18 '15 at 12:28

Yadunandana ,Sep 16, 2012 at 16:07

I think you are forcibly calling rsync, and hence all data is getting downloaded when you call it again. Use the --progress option to copy only those files which are not copied, and the --delete option to delete any already-copied files that no longer exist in the source folder...
rsync -avz --progress --delete /home/path/folder1/ /home/path/folder2

If you are using ssh to login to other system and copy the files,

rsync -avz --progress --delete -e "ssh -o UserKnownHostsFile=/dev/null -o \
StrictHostKeyChecking=no" /home/path/folder1/ /home/path/folder2

let me know if there is any mistake in my understanding of this concept...

Fabien ,Jun 14, 2013 at 12:12

Can you please edit your answer and explain what your special ssh call does, and why you advise doing it? – Fabien Jun 14 '13 at 12:12

DanielSmedegaardBuus ,Dec 7, 2014 at 0:12

@Fabien He tells rsync to set two ssh options (rsync uses ssh to connect). The second one tells ssh to not prompt for confirmation if the host he's connecting to isn't already known (by existing in the "known hosts" file). The first one tells ssh to not use the default known hosts file (which would be ~/.ssh/known_hosts). He uses /dev/null instead, which is of course always empty, and as ssh would then not find the host in there, it would normally prompt for confirmation, hence option two. Upon connecting, ssh writes the now known host to /dev/null, effectively forgetting it instantly :) – DanielSmedegaardBuus Dec 7 '14 at 0:12

DanielSmedegaardBuus ,Dec 7, 2014 at 0:23

...but you were probably wondering what effect, if any, it has on the rsync operation itself. The answer is none. It only serves to not have the host you're connecting to added to your SSH known hosts file. Perhaps he's a sysadmin often connecting to a great number of new servers, temporary systems or whatnot. I don't know :) – DanielSmedegaardBuus Dec 7 '14 at 0:23

moi ,May 10, 2016 at 13:49

"use --progress option to copy only those files which are not copied" What? – moi May 10 '16 at 13:49

Paul d'Aoust ,Nov 17, 2016 at 22:39

There are a couple errors here; one is very serious: --delete will delete files in the destination that don't exist in the source. The less serious one is that --progress doesn't modify how things are copied; it just gives you a progress report on each file as it copies. (I fixed the serious error; replaced it with --remove-source-files .) – Paul d'Aoust Nov 17 '16 at 22:39

[Jul 04, 2018] How do I parse command line arguments in Bash

Notable quotes:
"... enhanced getopt ..."
Jul 04, 2018 | stackoverflow.com

Lawrence Johnston ,Oct 10, 2008 at 16:57

Say, I have a script that gets called with this line:
./myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile

or this one:

./myscript -v -f -d -o /fizz/someOtherFile ./foo/bar/someFile

What's the accepted way of parsing this such that in each case (or some combination of the two) $v , $f , and $d will all be set to true and $outFile will be equal to /fizz/someOtherFile ?

Inanc Gumus ,Apr 15, 2016 at 19:11

See my very easy and no-dependency answer here: stackoverflow.com/a/33826763/115363 – Inanc Gumus Apr 15 '16 at 19:11

dezza ,Aug 2, 2016 at 2:13

For zsh-users there's a great builtin called zparseopts which can do: zparseopts -D -E -M -- d=debug -debug=d And have both -d and --debug in the $debug array echo $+debug[1] will return 0 or 1 if one of those are used. Ref: zsh.org/mla/users/2011/msg00350.html – dezza Aug 2 '16 at 2:13

Bruno Bronosky ,Jan 7, 2013 at 20:01

Preferred Method: Using straight bash without getopt[s]

I originally answered the question as the OP asked. This Q/A is getting a lot of attention, so I should also offer the non-magic way to do this. I'm going to expand upon guneysus's answer to fix the nasty sed and include Tobias Kienzler's suggestion.

Two of the most common ways to pass key value pair arguments are:

Straight Bash Space Separated

Usage ./myscript.sh -e conf -s /etc -l /usr/lib /etc/hosts

#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo FILE EXTENSION  = "${EXTENSION}"
echo SEARCH PATH     = "${SEARCHPATH}"
echo LIBRARY PATH    = "${LIBPATH}"
echo DEFAULT         = "${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
Straight Bash Equals Separated

Usage ./myscript.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi

To better understand ${i#*=} search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.
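
A quick demonstration of that expansion:

i="--extension=conf"
echo "${i#*=}"   # prints "conf": strips everything up to and including the first "="
echo "${i%%=*}"  # prints "--extension": the complementary expansion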

Using getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

Never use getopt(1). getopt cannot handle empty argument strings, or arguments with embedded whitespace. Please forget that it ever existed.

The POSIX shell (and others) offer getopts which is safe to use instead. Here is a simplistic getopts example:

#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"

# End of file

The advantages of getopts are:

  1. It's portable, and will work in e.g. dash.
  2. It can handle things like -vf filename in the expected Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without trickery.
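
One common form of that trickery (a sketch, not part of the FAQ quoted above; it works in bash, though POSIX leaves the behaviour unspecified) is to declare - as an option taking an argument, so that --help arrives as opt - with OPTARG set to help :

#!/bin/bash

while getopts "h-:" opt; do
    case "$opt" in
        h)  echo "short help"; exit 0 ;;
        -)  # long option: the text after "--" lands in $OPTARG
            case "$OPTARG" in
                help) echo "long help"; exit 0 ;;
                *)    echo "unknown option --$OPTARG" >&2; exit 1 ;;
            esac ;;
    esac
done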

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.

Livven ,Jun 6, 2013 at 21:19

Is this really true? According to Wikipedia there's a newer GNU enhanced version of getopt which includes all the functionality of getopts and then some. man getopt on Ubuntu 13.04 outputs getopt - parse command options (enhanced) as the name, so I presume this enhanced version is standard now. – Livven Jun 6 '13 at 21:19

szablica ,Jul 17, 2013 at 15:23

That something is a certain way on your system is a very weak premise on which to base assumptions of "being standard". – szablica Jul 17 '13 at 15:23

Stephane Chazelas ,Aug 20, 2014 at 19:55

@Livven, that getopt is not a GNU utility, it's part of util-linux . – Stephane Chazelas Aug 20 '14 at 19:55

Nicolas Mongrain-Lacombe ,Jun 19, 2016 at 21:22

If you use -gt 0 , remove your shift after the esac , increase all the shifts by 1 and add this case: *) break;; — then you can handle non-optional arguments. Ex: pastebin.com/6DJ57HTc – Nicolas Mongrain-Lacombe Jun 19 '16 at 21:22

kolydart ,Jul 10, 2017 at 8:11

You do not echo --default . In the first example, I notice that if --default is the last argument, it is not processed (considered as non-opt), unless while [[ $# -gt 1 ]] is changed to while [[ $# -gt 0 ]] – kolydart Jul 10 '17 at 8:11

Robert Siemer ,Apr 20, 2015 at 17:47

No answer mentions enhanced getopt . And the top-voted answer is misleading: it ignores -vfd style short options (requested by the OP), options after positional arguments (also requested by the OP), and it ignores parsing errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash

getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
    echo "I'm sorry, `getopt --test` failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTIONS=debug,force,output:,verbose

# -temporarily store output to be able to check for errors
# -e.g. use "--options" parameter by name to activate quoting/enhanced mode
# -pass arguments only via   -- "$@"   to separate them correctly
PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
if [[ $? -ne 0 ]]; then
    # e.g. $? == 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

johncip ,Jan 12, 2017 at 2:00

Thanks for this. Just confirmed from the feature table at en.wikipedia.org/wiki/Getopts , if you need support for long options, and you're not on Solaris, getopt is the way to go. – johncip Jan 12 '17 at 2:00

Kaushal Modi ,Apr 27, 2017 at 14:02

I believe that the only caveat with getopt is that it cannot be used conveniently in wrapper scripts where one might have few options specific to the wrapper script, and then pass the non-wrapper-script options to the wrapped executable, intact. Let's say I have a grep wrapper called mygrep and I have an option --foo specific to mygrep , then I cannot do mygrep --foo -A 2 , and have the -A 2 passed automatically to grep ; I need to do mygrep --foo -- -A 2 . Here is my implementation on top of your solution. – Kaushal Modi Apr 27 '17 at 14:02

bobpaul ,Mar 20 at 16:45

Alex, I agree and there's really no way around that since we need to know the actual return value of getopt --test . I'm a big fan of "Unofficial Bash Strict mode", (which includes set -e ), and I just put the check for getopt ABOVE set -euo pipefail and IFS=$'\n\t' in my script. – bobpaul Mar 20 at 16:45
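
A sketch of the ordering @bobpaul describes (same getopt --test probe as in the answer above; the rest of the skeleton is hypothetical):

#!/bin/bash

# probe for enhanced getopt BEFORE enabling strict mode,
# because "getopt --test" deliberately exits with status 4
getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
    echo "enhanced getopt is required" >&2
    exit 1
fi

set -euo pipefail
IFS=$'\n\t'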

Robert Siemer ,Mar 21 at 9:10

@bobpaul Oh, there is a way around that. And I'll edit my answer soon to reflect my collections regarding this issue ( set -e )... – Robert Siemer Mar 21 at 9:10

Robert Siemer ,Mar 21 at 9:16

@bobpaul Your statement about util-linux is wrong and misleading as well: the package is marked "essential" on Ubuntu/Debian. As such, it is always installed. – Which distros are you talking about (where you say it needs to be installed on purpose)? – Robert Siemer Mar 21 at 9:16

guneysus ,Nov 13, 2012 at 10:31

from: digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Tobias Kienzler ,Nov 12, 2013 at 12:48

Neat! Though this won't work for space-separated arguments à la mount -t tempfs ... . One can probably fix this via something like while [ $# -ge 1 ]; do param=$1; shift; case $param in; -p) prefix=$1; shift;; etc – Tobias Kienzler Nov 12 '13 at 12:48

Robert Siemer ,Mar 19, 2016 at 15:23

This can't handle -vfd style combined short options. – Robert Siemer Mar 19 '16 at 15:23

bekur ,Dec 19, 2017 at 23:27

link is broken! – bekur Dec 19 '17 at 23:27

Matt J ,Oct 10, 2008 at 17:03

getopt() / getopts() is a good option. Stolen from here:

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

Robert Siemer ,Apr 16, 2016 at 14:37

Using $* is broken usage of getopt . (It hoses arguments with spaces.) See my answer for proper usage. – Robert Siemer Apr 16 '16 at 14:37

SDsolar ,Aug 10, 2017 at 14:07

Why would you want to make it more complicated? – SDsolar Aug 10 '17 at 14:07

thebunnyrules ,Jun 1 at 1:57

@Matt J, the first part of the script (for i) would be able to handle arguments with spaces in them if you use "$i" instead of $i. The getopts does not seem to be able to handle arguments with spaces. What would be the advantage of using getopt over the for i loop? – thebunnyrules Jun 1 at 1:57

bronson ,Jul 15, 2015 at 23:43

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

rhombidodecahedron ,Sep 11, 2015 at 8:40

What is the "handle_argument" function? – rhombidodecahedron Sep 11 '15 at 8:40

bronson ,Oct 8, 2015 at 20:41

Sorry for the delay. In my script, the handle_argument function receives all the non-option arguments. You can replace that line with whatever you'd like, maybe *) die "unrecognized argument: $1" or collect the args into a variable *) args+="$1"; shift 1;; . – bronson Oct 8 '15 at 20:41
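
For completeness, a hypothetical handle_argument along those lines (not part of the original answer):

args=()
handle_argument() {
    # collect non-option arguments for use after the parsing loop
    args+=("$1")
}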

Guilherme Garnier ,Apr 13 at 16:10

Amazing! I've tested a couple of answers, but this is the only one that worked for all cases, including many positional parameters (both before and after flags) – Guilherme Garnier Apr 13 at 16:10

Shane Day ,Jul 1, 2014 at 1:20

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old ad-hoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done

Robert Siemer ,Dec 6, 2015 at 13:47

This code can't handle options with arguments like this: -c1 . And the use of = to separate short options from their arguments is unusual... – Robert Siemer Dec 6 '15 at 13:47

sfnd ,Jun 6, 2016 at 19:28

I ran into two problems with this useful chunk of code: 1) the "shift" in the case of "-c=foo" ends up eating the next parameter; and 2) 'c' should not be included in the "[cfr]" pattern for combinable short options. – sfnd Jun 6 '16 at 19:28

Inanc Gumus ,Nov 20, 2015 at 12:28

More succinct way

script.sh

#!/bin/bash

while [[ "$#" > 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

hfossli ,Apr 7 at 20:58

This is what I am doing. I have to use while [[ "$#" > 1 ]] if I want to support ending the line with a boolean flag ./script.sh --debug dev --uglify fast --verbose . Example: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 20:58

hfossli ,Apr 7 at 21:09

I sent an edit request. I just tested this and it works perfectly. – hfossli Apr 7 at 21:09

hfossli ,Apr 7 at 21:10

Wow! Simple and clean! This is how I'm using this: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 21:10

Ponyboy47 ,Sep 8, 2016 at 18:59

My answer is largely based on the answer by Bruno Bronosky , but I sort of mashed his two pure bash implementations into one that I use pretty frequently.
# As long as there is at least one more argument, keep looping
while [[ $# -gt 0 ]]; do
    key="$1"
    case "$key" in
        # This is a flag type option. Will catch either -f or --foo
        -f|--foo)
        FOO=1
        ;;
        # Also a flag type option. Will catch either -b or --bar
        -b|--bar)
        BAR=1
        ;;
        # This is an arg value type option. Will catch -o value or --output-file value
        -o|--output-file)
        shift # past the key and to the value
        OUTPUTFILE="$1"
        ;;
        # This is an arg=value type option. Will catch -o=value or --output-file=value
        -o=*|--output-file=*)
        # No need to shift here since the value is part of the same string
        OUTPUTFILE="${key#*=}"
        ;;
        *)
        # Do whatever you want with extra options
        echo "Unknown option '$key'"
        ;;
    esac
    # Shift after checking all the cases to get the next option
    shift
done

This allows you to have both space separated options/values, as well as equal defined values.

So you could run your script using:

./myscript --foo -b -o /fizz/file.txt

as well as:

./myscript -f --bar -o=/fizz/file.txt

and both should have the same end result.

PROS:

CONS:

These are the only pros/cons I can think of off the top of my head

bubla ,Jul 10, 2016 at 22:40

I have found the matter of writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, and it has some nice features:

https://argbash.io

RichVel ,Aug 18, 2016 at 5:34

Thanks for writing argbash, I just used it and found it works well. I mostly went for argbash because it's a code generator supporting the older bash 3.x found on OS X 10.11 El Capitan. The only downside is that the code-generator approach means quite a lot of code in your main script, compared to calling a module. – RichVel Aug 18 '16 at 5:34

bubla ,Aug 23, 2016 at 20:40

You can actually use Argbash in a way that it produces a tailor-made parsing library just for you, which you can either have included in your script or keep in a separate file and just source it. I have added an example to demonstrate that and I have made it more explicit in the documentation, too. – bubla Aug 23 '16 at 20:40

RichVel ,Aug 24, 2016 at 5:47

Good to know. That example is interesting but still not really clear - maybe you can change the name of the generated script to 'parse_lib.sh' or similar and show where the main script calls it (like in the wrapping script section, which is the more complex use case). – RichVel Aug 24 '16 at 5:47

bubla ,Dec 2, 2016 at 20:12

The issues were addressed in a recent version of Argbash: documentation has been improved, a quickstart argbash-init script has been introduced, and you can even use Argbash online at argbash.io/generate – bubla Dec 2 '16 at 20:12

Alek ,Mar 1, 2012 at 15:15

I think this one is simple enough to use:
#!/bin/bash
#

readopt='getopts $opts opt;rc=$?;[ $rc$opt == 0? ]&&exit 1;[ $rc == 0 ]||{ shift $[OPTIND-1];false; }'

opts=vfdo:

# Enumerating options
while eval $readopt
do
    echo OPT:$opt ${OPTARG+OPTARG:$OPTARG}
done

# Enumerating arguments
for arg
do
    echo ARG:$arg
done

Invocation example:

./myscript -v -do /fizz/someOtherFile -f ./foo/bar/someFile
OPT:v 
OPT:d 
OPT:o OPTARG:/fizz/someOtherFile
OPT:f 
ARG:./foo/bar/someFile

erm3nda ,May 20, 2015 at 22:50

I read them all and this one is my preferred one. I don't like to use -a=1 as argc style. I prefer to put the main options first ( -options ) and later the special ones with single spacing ( -o option ). I'm looking for the simplest-vs-better way to read argvs. – erm3nda May 20 '15 at 22:50

erm3nda ,May 20, 2015 at 23:25

It's working really well, but if you pass an argument to a non- a: option, all the following options will be taken as arguments. You can check this line ./myscript -v -d fail -o /fizz/someOtherFile -f ./foo/bar/someFile with your own script. The -d option is not set as d: – erm3nda May 20 '15 at 23:25

unsynchronized ,Jun 9, 2014 at 13:46

Expanding on the excellent answer by @guneysus, here is a tweak that lets the user use whichever syntax they prefer, e.g.
command -x=myfilename.ext --another_switch

vs

command -x myfilename.ext --another_switch

That is to say the equals can be replaced with whitespace.

This "fuzzy interpretation" might not be to your liking, but if you are making scripts that are interchangeable with other utilities (as is the case with mine, which must work with ffmpeg), the flexibility is useful.

STD_IN=0

prefix=""
key=""
value=""
for keyValue in "$@"
do
  case "${prefix}${keyValue}" in
    -i=*|--input_filename=*)  key="-i";     value="${keyValue#*=}";; 
    -ss=*|--seek_from=*)      key="-ss";    value="${keyValue#*=}";;
    -t=*|--play_seconds=*)    key="-t";     value="${keyValue#*=}";;
    -|--stdin)                key="-";      value=1;;
    *)                                      value=$keyValue;;
  esac
  case $key in
    -i) MOVIE=$(resolveMovie "${value}");  prefix=""; key="";;
    -ss) SEEK_FROM="${value}";          prefix=""; key="";;
    -t)  PLAY_SECONDS="${value}";           prefix=""; key="";;
    -)   STD_IN=${value};                   prefix=""; key="";; 
    *)   prefix="${keyValue}=";;
  esac
done

vangorra ,Feb 12, 2015 at 21:50

getopts works great if #1 you have it installed and #2 you intend to run it on the same platform. OSX and Linux (for example) behave differently in this respect.

Here is a (non getopts) solution that supports equals, non-equals, and boolean flags. For example you could run your script in this way:

./script --arg1=value1 --arg2 value2 --shouldClean

# parse the arguments.
COUNTER=0
ARGS=("$@")
while [ $COUNTER -lt $# ]
do
    arg=${ARGS[$COUNTER]}
    let COUNTER=COUNTER+1
    nextArg=${ARGS[$COUNTER]}

    if [[ $skipNext -eq 1 ]]; then
        echo "Skipping"
        skipNext=0
        continue
    fi

    argKey=""
    argVal=""
    if [[ "$arg" =~ ^\- ]]; then
        # if the format is: -key=value
        if [[ "$arg" =~ \= ]]; then
            argVal=$(echo "$arg" | cut -d'=' -f2)
            argKey=$(echo "$arg" | cut -d'=' -f1)
            skipNext=0

        # if the format is: -key value
        elif [[ ! "$nextArg" =~ ^\- ]]; then
            argKey="$arg"
            argVal="$nextArg"
            skipNext=1

        # if the format is: -key (a boolean flag)
        elif [[ "$nextArg" =~ ^\- ]] || [[ -z "$nextArg" ]]; then
            argKey="$arg"
            argVal=""
            skipNext=0
        fi
    # if the format has not flag, just a value.
    else
        argKey=""
        argVal="$arg"
        skipNext=0
    fi

    case "$argKey" in 
        --source-scmurl)
            SOURCE_URL="$argVal"
        ;;
        --dest-scmurl)
            DEST_URL="$argVal"
        ;;
        --version-num)
            VERSION_NUM="$argVal"
        ;;
        -c|--clean)
            CLEAN_BEFORE_START="1"
        ;;
        -h|--help|-help|--h)
            showUsage
            exit
        ;;
    esac
done

akostadinov ,Jul 19, 2013 at 7:50

This is how I do it in a function, to avoid breaking a getopts run at the same time somewhere higher in the stack:
function waitForWeb () {
   local OPTIND=1 OPTARG OPTION
   local host=localhost port=8080 proto=http
   while getopts "h:p:r:" OPTION; do
      case "$OPTION" in
      h)
         host="$OPTARG"
         ;;
      p)
         port="$OPTARG"
         ;;
      r)
         proto="$OPTARG"
         ;;
      esac
   done
...
}

Renato Silva ,Jul 4, 2016 at 16:47

EasyOptions does not require any parsing:
## Options:
##   --verbose, -v  Verbose mode
##   --output=FILE  Output filename

source easyoptions || exit

if test -n "${verbose}"; then
    echo "output file is ${output}"
    echo "${arguments[@]}"
fi

Oleksii Chekulaiev ,Jul 1, 2016 at 20:56

I give you The Function parse_params that will parse params:
  1. Without polluting global scope.
  2. Effortlessly returns ready-to-use variables so that you can build further logic on them
  3. Amount of dashes before params does not matter ( --all equals -all equals all=all )

The script below is a copy-paste working demonstration. See show_use function to understand how to use parse_params .

Limitations:

  1. Does not support space delimited params ( -d 1 )
  2. Param names will lose dashes so --any-param and -anyparam are equivalent
  3. eval $(parse_params "$@") must be used inside a bash function (it will not work in the global scope)

#!/bin/bash

# Universal Bash parameter parsing
# Parse equal sign separated params into named local variables
# Standalone named parameter value will equal its param name (--force creates variable $force=="force")
# Parses multi-valued named params into an array (--path=path1 --path=path2 creates ${path[*]} array)
# Parses un-named params into ${ARGV[*]} array
# Additionally puts all named params into ${ARGN[*]} array
# Additionally puts all standalone "option" params into ${ARGO[*]} array
# @author Oleksii Chekulaiev
# @version v1.3 (May-14-2018)
parse_params ()
{
    local existing_named
    local ARGV=() # un-named params
    local ARGN=() # named params
    local ARGO=() # options (--params)
    echo "local ARGV=(); local ARGN=(); local ARGO=();"
    while [[ "$1" != "" ]]; do
        # Escape asterisk to prevent bash asterisk expansion
        _escaped=${1/\*/\'\"*\"\'}
        # If equals delimited named parameter
        if [[ "$1" =~ ^..*=..* ]]; then
            # Add to named parameters array
            echo "ARGN+=('$_escaped');"
            # key is part before first =
            local _key=$(echo "$1" | cut -d = -f 1)
            # val is everything after key and = (protect from param==value error)
            local _val="${1/$_key=}"
            # remove dashes from key name
            _key=${_key//\-}
            # search for existing parameter name
            if (echo "$existing_named" | grep "\b$_key\b" >/dev/null); then
                # if name already exists then it's a multi-value named parameter
                # re-declare it as an array if needed
                if ! (declare -p _key 2> /dev/null | grep -q 'declare \-a'); then
                    echo "$_key=(\"\$$_key\");"
                fi
                # append new value
                echo "$_key+=('$_val');"
            else
                # single-value named parameter
                echo "local $_key=\"$_val\";"
                existing_named=" $_key"
            fi
        # If standalone named parameter
        elif [[ "$1" =~ ^\-. ]]; then
            # Add to options array
            echo "ARGO+=('$_escaped');"
            # remove dashes
            local _key=${1//\-}
            echo "local $_key=\"$_key\";"
        # non-named parameter
        else
            # Escape asterisk to prevent bash asterisk expansion
            _escaped=${1/\*/\'\"*\"\'}
            echo "ARGV+=('$_escaped');"
        fi
        shift
    done
}

#--------------------------- DEMO OF THE USAGE -------------------------------

show_use ()
{
    eval $(parse_params "$@")
    # --
    echo "${ARGV[0]}" # print first unnamed param
    echo "${ARGV[1]}" # print second unnamed param
    echo "${ARGN[0]}" # print first named param
    echo "${ARG0[0]}" # print first option param (--force)
    echo "$anyparam"  # print --anyparam value
    echo "$k"         # print k=5 value
    echo "${multivalue[0]}" # print first value of multi-value
    echo "${multivalue[1]}" # print second value of multi-value
    [[ "$force" == "force" ]] && echo "\$force is set so let the force be with you"
}

show_use "param 1" --anyparam="my value" param2 k=5 --force --multi-value=test1 --multi-value=test2

Oleksii Chekulaiev ,Sep 28, 2016 at 12:55

To use the demo to parse params that come into your bash script you just do show_use "$@" – Oleksii Chekulaiev Sep 28 '16 at 12:55

Oleksii Chekulaiev ,Sep 28, 2016 at 12:58

Basically I found out that github.com/renatosilva/easyoptions does the same in the same way but is a bit more massive than this function. – Oleksii Chekulaiev Sep 28 '16 at 12:58

galmok ,Jun 24, 2015 at 10:54

I'd like to offer my version of option parsing, that allows for the following:
-s p1
--stage p1
-w somefolder
--workfolder somefolder
-sw p1 somefolder
-e=hello

Also allows for this (could be unwanted):

-s--workfolder p1 somefolder
-se=hello p1
-swe=hello p1 somefolder

You have to decide before use if = is to be used on an option or not. This is to keep the code clean(ish).

while [[ $# > 0 ]]
do
    key="$1"
    while [[ ${key+x} ]]
    do
        case $key in
            -s*|--stage)
                STAGE="$2"
                shift # option has parameter
                ;;
            -w*|--workfolder)
                workfolder="$2"
                shift # option has parameter
                ;;
            -e=*)
                EXAMPLE="${key#*=}"
                break # option has been fully handled
                ;;
            *)
                # unknown option
                echo Unknown option: $key #1>&2
                exit 10 # either this: my preferred way to handle unknown options
                break # or this: do this to signal the option has been handled (if exit isn't used)
                ;;
        esac
        # prepare for next option in this key, if any
        [[ "$key" = -? || "$key" == --* ]] && unset key || key="${key/#-?/-}"
    done
    shift # option(s) fully processed, proceed to next input argument
done

Luca Davanzo ,Nov 14, 2016 at 17:56

what's the meaning for "+x" on ${key+x} ? – Luca Davanzo Nov 14 '16 at 17:56

galmok ,Nov 15, 2016 at 9:10

It is a test to see if 'key' is present or not. Further down I unset key and this breaks the inner while loop. – galmok Nov 15 '16 at 9:10

Mark Fox ,Apr 27, 2015 at 2:42

Mixing positional and flag-based arguments

--param=arg (equals delimited)

Freely mixing flags between positional arguments:

./script.sh dumbo 127.0.0.1 --environment=production -q -d
./script.sh dumbo --environment=production 127.0.0.1 --quiet -d

can be accomplished with a fairly concise approach:

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   param=${!pointer}
   if [[ $param != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      case $param in
         # parameter-flags with arguments
         -e=*|--environment=*) environment="${param#*=}";;
                  --another=*) another="${param#*=}";;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + 1)):$#} \
         || set -- ${@:((pointer + 1)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

--param arg (space delimited)

It's usually clearer to not mix --flag=value and --flag value styles.

./script.sh dumbo 127.0.0.1 --environment production -q -d

This is a little dicey to read, but is still valid

./script.sh dumbo --environment production 127.0.0.1 --quiet -d

Source

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   if [[ ${!pointer} != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      param=${!pointer}
      ((pointer_plus = pointer + 1))
      slice_len=1

      case $param in
         # parameter-flags with arguments
         -e|--environment) environment=${!pointer_plus}; ((slice_len++));;
                --another) another=${!pointer_plus}; ((slice_len++));;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + $slice_len)):$#} \
         || set -- ${@:((pointer + $slice_len)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

schily ,Oct 19, 2015 at 13:59

Note that getopt(1) was a short-lived mistake from AT&T.

getopt was created in 1984 but already buried in 1986 because it was not really usable.

Proof that getopt is very outdated is that the getopt(1) man page still mentions "$*" instead of "$@" , which was added to the Bourne Shell in 1986 together with the getopts(1) shell builtin in order to deal with arguments containing spaces.
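
A small demonstration of the difference:

set -- "two words" second
printf '<%s>\n' "$*"   # one argument:  <two words second>
printf '<%s>\n' "$@"   # two arguments: <two words> and <second>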

BTW: if you are interested in parsing long options in shell scripts, it may be of interest to know that the getopt(3) implementation from libc (Solaris) and ksh93 both added a uniform long option implementation that supports long options as aliases for short options. This causes ksh93 and the Bourne Shell to implement a uniform interface for long options via getopts .

An example for long options taken from the Bourne Shell man page:

getopts "f:(file)(input-file)o:(output-file)" OPTX "$@"

shows how long option aliases may be used in both Bourne Shell and ksh93.

See the man page of a recent Bourne Shell:

http://schillix.sourceforge.net/man/man1/bosh.1.html

and the man page for getopt(3) from OpenSolaris:

http://schillix.sourceforge.net/man/man3c/getopt.3c.html

and last, the getopt(1) man page to verify the outdated $*:

http://schillix.sourceforge.net/man/man1/getopt.1.html

Volodymyr M. Lisivka ,Jul 9, 2013 at 16:51

Use module "arguments" from bash-modules

Example:

#!/bin/bash
. import.sh log arguments

NAME="world"

parse_arguments "-n|--name)NAME;S" -- "$@" || {
  error "Cannot parse command line."
  exit 1
}

info "Hello, $NAME!"

Mike Q ,Jun 14, 2014 at 18:01

This also might be useful to know: you can set a default value and, if someone provides input, override the default with that value.

myscript.sh -f ./serverlist.txt or just ./myscript.sh (and it takes defaults)

    #!/bin/bash
    # --- set the value, if there is inputs, override the defaults.

    HOME_FOLDER="${HOME}/owned_id_checker"
    SERVER_FILE_LIST="${HOME_FOLDER}/server_list.txt"

    while [[ $# > 1 ]]
    do
    key="$1"
    shift

    case $key in
        -i|--inputlist)
        SERVER_FILE_LIST="$1"
        shift
        ;;
    esac
    done


    echo "SERVER LIST   = ${SERVER_FILE_LIST}"

phk ,Oct 17, 2015 at 21:17

Another solution without getopt[s], POSIX, old Unix style

Similar to the solution Bruno Bronosky posted, here is one without the usage of getopt(s) .

The main differentiating feature of my solution is that it allows options to be concatenated together, just like tar -xzf foo.tar.gz is equal to tar -x -z -f foo.tar.gz . And just like in tar , ps etc., the leading hyphen is optional for a block of short options (but this can be changed easily). Long options are supported as well (but when a block starts with one, then two leading hyphens are required).

Code with example options
#!/bin/sh

echo
echo "POSIX-compliant getopt(s)-free old-style-supporting option parser from phk@[se.unix]"
echo

print_usage() {
  echo "Usage:

  $0 {a|b|c} [ARG...]

Options:

  --aaa-0-args
  -a
    Option without arguments.

  --bbb-1-args ARG
  -b ARG
    Option with one argument.

  --ccc-2-args ARG1 ARG2
  -c ARG1 ARG2
    Option with two arguments.

" >&2
}

if [ $# -le 0 ]; then
  print_usage
  exit 1
fi

opt=
while :; do

  if [ $# -le 0 ]; then

    # no parameters remaining -> end option parsing
    break

  elif [ ! "$opt" ]; then

    # we are at the beginning of a fresh block
    # remove optional leading hyphen and strip trailing whitespaces
    opt=$(echo "$1" | sed 's/^-\?\([a-zA-Z0-9\?-]*\)/\1/')

  fi

  # get the first character -> check whether long option
  first_chr=$(echo "$opt" | awk '{print substr($1, 1, 1)}')
  [ "$first_chr" = - ] && long_option=T || long_option=F

  # note to write the options here with a leading hyphen less
  # also do not forget to end short options with a star
  case $opt in

    -)

      # end of options
      shift
      break
      ;;

    a*|-aaa-0-args)

      echo "Option AAA activated!"
      ;;

    b*|-bbb-1-args)

      if [ "$2" ]; then
        echo "Option BBB with argument '$2' activated!"
        shift
      else
        echo "BBB parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    c*|-ccc-2-args)

      if [ "$2" ] && [ "$3" ]; then
        echo "Option CCC with arguments '$2' and '$3' activated!"
        shift 2
      else
        echo "CCC parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    h*|\?*|-help)

      print_usage
      exit 0
      ;;

    *)

      if [ "$long_option" = T ]; then
        opt=$(echo "$opt" | awk '{print substr($1, 2)}')
      else
        opt=$first_chr
      fi
      printf 'Error: Unknown option: "%s"\n' "$opt" >&2
      print_usage
      exit 1
      ;;

  esac

  if [ "$long_option" = T ]; then

    # if we had a long option then we are going to get a new block next
    shift
    opt=

  else

    # if we had a short option then just move to the next character
    opt=$(echo "$opt" | awk '{print substr($1, 2)}')

    # if block is now empty then shift to the next one
    [ "$opt" ] || shift

  fi

done

echo "Doing something..."

exit 0

For the example usage please see the examples further below.

Position of options with arguments

For what it's worth, options with arguments don't have to be last (only long options need to be). So while e.g. in tar (at least in some implementations) the f option needs to be last because the file name follows ( tar xzf bar.tar.gz works but tar xfz bar.tar.gz does not), this is not the case here (see the later examples).

Multiple options with arguments

As another bonus, option arguments are consumed in the order of the options, by the options that require them. Just look at the output of my script here with the command line abc X Y Z (or -abc X Y Z ):

Option AAA activated!
Option BBB with argument 'X' activated!
Option CCC with arguments 'Y' and 'Z' activated!

Long options concatenated as well

You can also have long options in an option block, given that they occur last in the block. So the following command lines are all equivalent (including the order in which the options and their arguments are being processed):

All of these lead to:

Option CCC with arguments 'Z' and 'Y' activated!
Option BBB with argument 'X' activated!
Option AAA activated!
Doing something...
Not in this solution

Optional arguments

Options with optional arguments should be possible with a bit of work, e.g. by looking forward to see whether there is a block without a hyphen; the user would then need to put a hyphen in front of every block following a block with a parameter that has an optional parameter. Maybe this is too complicated to communicate to the user, so it's better to just require a leading hyphen altogether in this case.

Things get even more complicated with multiple possible parameters. I would advise against making the options try to be smart by determining whether an argument might be meant for them or not (e.g. an option that just takes a number as an optional argument), because this might break in the future.

I personally favor additional options instead of optional arguments.

Option arguments introduced with an equal sign

Just like with optional arguments, I am not a fan of this (BTW, is there a thread for discussing the pros/cons of different parameter styles?), but if you want this you could probably implement it yourself, just like it is done at http://mywiki.wooledge.org/BashFAQ/035#Manual_loop with a --long-with-arg=?* case statement and then stripping the equal sign. (This is, BTW, the site that says that making parameter concatenation possible is "left as an exercise for the reader", which made me take them at their word, but I started from scratch.)

Other notes

POSIX-compliant, works even on ancient Busybox setups I had to deal with (with e.g. cut , head and getopts missing).

Noah ,Aug 29, 2016 at 3:44

Solution that preserves unhandled arguments. Demos Included.

Here is my solution. It is VERY flexible and, unlike others, it doesn't require external packages and handles leftover arguments cleanly.

Usage is: ./myscript -flag flagvariable -otherflag flagvar2

All you have to do is edit the validflags line. It prepends a hyphen to each flag name and searches all arguments for it. It then assigns the next argument as that flag's value, e.g.

./myscript -flag flagvariable -otherflag flagvar2
echo $flag $otherflag
flagvariable flagvar2

The main code (short version, verbose with examples further down, also a version with erroring out):

#!/usr/bin/env bash
#shebang.io
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
        if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers

The verbose version with built in echo demos:

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
echo "all args
$@"
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
#   argval=$(echo $@ | cut -d ' ' -f$count)
    for flag in $validflags
    do
            sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
        if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done

#Cleanup then restore the leftovers
echo "pre final clear args:
$@"
shift $#
echo "post final clear args:
$@"
set -- $leftovers
echo "all post set args:
$@"
echo arg1: $1 arg2: $2

echo leftovers: $leftovers
echo rate $rate time $time number $number

Final one, this one errors out if an invalid -argument is passed through.

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
validflags="rate time number"
count=1
for arg in $@
do
    argval=$1
    match=0
        if [ "${argval:0:1}" == "-" ]
    then
        for flag in $validflags
        do
                sflag="-"$flag
            if [ "$argval" == "$sflag" ]
            then
                declare $flag=$2
                match=1
            fi
        done
        if [ "$match" == "0" ]
        then
            echo "Bad argument: $argval"
            exit 1
        fi
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers
echo rate $rate time $time number $number
echo leftovers: $leftovers

Pros: it handles what it does very well, and it preserves unused arguments, which many of the other solutions here don't. It also allows flags to be used without being defined by hand in the script, and variables can be pre-populated with defaults in case no corresponding argument is given (see the verbose example).

Cons: it can't parse a single combined argument string; e.g. -xcvf would be processed as one argument. Additional code could fairly easily be added to handle this, though, as sketched below.
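
A rough sketch (not part of the original answer) of one way to add that missing feature: explode a cluster of short options like -xcvf into -x -c -v -f before the main parsing loop runs. Note that this naive split would also break options that glue a value on (e.g. -n5).

#!/usr/bin/env bash
expanded=()
for arg in "$@"; do
    case $arg in
        --*) expanded+=("$arg") ;;              # long option: keep as-is
        -?*) for ((i=1; i<${#arg}; i++)); do    # split each letter
                 expanded+=("-${arg:i:1}")
             done ;;
        *)   expanded+=("$arg") ;;              # positional: keep as-is
    esac
done
set -- "${expanded[@]}"                         # replace the argument list
echo "$@"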

Daniel Bigham, Aug 8, 2016 at 12:42

The top answer to this question seemed a bit buggy when I tried it; here's my solution, which I've found to be more robust:
arg_num=0
boolean_arg=""
arg_with_value=""

while [[ $# -gt 0 ]]
do
key="$1"
case $key in
    -b|--boolean-arg)
    boolean_arg=true
    shift
    ;;
    -a|--arg-with-value)
    arg_with_value="$2"
    shift
    shift
    ;;
    -*)
    echo "Unknown option: $1"
    exit 1
    ;;
    *)
    arg_num=$(( $arg_num + 1 ))
    case $arg_num in
        1)
        first_normal_arg="$1"
        shift
        ;;
        2)
        second_normal_arg="$1"
        shift
        ;;
        *)
        bad_args=TRUE
    esac
    ;;
esac
done

# Handy to have this here when adding arguments to
# see if they're working. Just edit the '0' to be '1'.
if [[ 0 == 1 ]]; then
    echo "first_normal_arg: $first_normal_arg"
    echo "second_normal_arg: $second_normal_arg"
    echo "boolean_arg: $boolean_arg"
    echo "arg_with_value: $arg_with_value"
    exit 0
fi

if [[ $bad_args == TRUE || $arg_num -lt 2 ]]; then
    echo "Usage: $(basename "$0") <first-normal-arg> <second-normal-arg> [--boolean-arg] [--arg-with-value VALUE]"
    exit 1
fi
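
For example, assuming the script above is saved as myscript.sh (a hypothetical name), a valid call might look like this:

./myscript.sh first.txt second.txt --boolean-arg --arg-with-value 42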

phyatt, Sep 7, 2016 at 18:25

This example shows how to use getopt, eval, a heredoc, and shift to handle short and long parameters, with and without a required value following them. The case statement is also concise and easy to follow.
#!/usr/bin/env bash

# usage function
function usage()
{
   cat << HEREDOC

   Usage: $progname [--num NUM] [--time TIME_STR] [--verbose] [--dry-run]

   optional arguments:
     -h, --help           show this help message and exit
     -n, --num NUM        pass in a number
     -t, --time TIME_STR  pass in a time string
     -v, --verbose        increase the verbosity of the bash script
     --dry-run            do a dry run, don't change any files

HEREDOC
}  

# initialize variables
progname=$(basename "$0")
verbose=0
dryrun=0
num_str=
time_str=

# use getopt and store the output into $OPTS
# note the use of -o for the short options, --long for the long name options
# and a : for any option that takes a parameter
OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; usage; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  # uncomment the next line to see how shift is working
  # echo "\$1:\"$1\" \$2:\"$2\""
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

if (( $verbose > 0 )); then

   # print out all the parameters we read in
   cat <<-EOM
   num=$num_str
   time=$time_str
   verbose=$verbose
   dryrun=$dryrun
EOM
fi

# The rest of your script below

The most significant lines of the script above are these:

OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

Short, to the point, readable, and handles just about everything (IMHO).
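
For example, a quick smoke test (assuming the script is saved as parse.sh; the leading spaces in the output come from the heredoc above):

./parse.sh --num 10 -t "1 hour" -v --dry-run
   num=10
   time=1 hour
   verbose=1
   dryrun=1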

Hope that helps someone.

Emeric Verschuur, Feb 20, 2017 at 21:30

I have written a bash helper that makes it easy to write nice bash tools.

project home: https://gitlab.mbedsys.org/mbedsys/bashopts

example:

#!/bin/bash -ei

# load the library
. bashopts.sh

# Enable backtrace display on error
trap 'bashopts_exit_handle' ERR

# Initialize the library
bashopts_setup -n "$0" -d "This is myapp tool description displayed on help message" -s "$HOME/.config/myapprc"

# Declare the options
bashopts_declare -n first_name -l first -o f -d "First name" -t string -i -s -r
bashopts_declare -n last_name -l last -o l -d "Last name" -t string -i -s -r
bashopts_declare -n display_name -l display-name -t string -d "Display name" -e "\$first_name \$last_name"
bashopts_declare -n age -l number -d "Age" -t number
bashopts_declare -n email_list -t string -m add -l email -d "Email address"

# Parse arguments
bashopts_parse_args "$@"

# Process argument
bashopts_process_args

This will give the following help message:

NAME:
    ./example.sh - This is myapp tool description displayed on help message

USAGE:
    [options and commands] [-- [extra args]]

OPTIONS:
    -h,--help                          Display this help
    -n,--non-interactive true          Non interactive mode - [$bashopts_non_interactive] (type:boolean, default:false)
    -f,--first "John"                  First name - [$first_name] (type:string, default:"")
    -l,--last "Smith"                  Last name - [$last_name] (type:string, default:"")
    --display-name "John Smith"        Display name - [$display_name] (type:string, default:"$first_name $last_name")
    --number 0                         Age - [$age] (type:number, default:0)
    --email                            Email address - [$email_list] (type:string, default:"")

enjoy :)

Josh Wulf ,Jun 24, 2017 at 18:07

I get this on Mac OS X:

lib/bashopts.sh: line 138: declare: -A: invalid option
declare: usage: declare [-afFirtx] [-p] [name[=value] ...]
Error in lib/bashopts.sh:138. 'declare -x -A bashopts_optprop_name' exited with status 2
Call tree:
 1: lib/controller.sh:4 source(...)
Exiting with status 1

Josh Wulf, Jun 24, 2017 at 18:17

You need Bash version 4 to use this (associative arrays, i.e. declare -A, don't exist in Bash 3). On Mac OS X the default is still Bash 3, so you will need to install a newer Bash, e.g. via Homebrew.
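
For reference, a quick way to check your version and (assuming Homebrew is installed) upgrade:

bash --version                          # the stock macOS Bash reports 3.2.x
brew install bash                       # install a modern Bash via Homebrew
"$(brew --prefix)/bin/bash" --version   # the newly installed Bash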