Sort Efficiency
The obvious way to sort data requires comparing all pairs of items to see which comes first, and leads to algorithms known as bubble sort and insertion sort. These quick-and-dirty algorithms are fine for small amounts of data, but they certainly are not quick for large amounts, because their work to sort n records grows like n^2. This is quite different from almost all of the filters that we discuss in this book: they read a record, process it, and output it, so their execution time is directly proportional to the number of records, n.
Fortunately, the sorting problem has had lots of attention in the computing community, and good sorting algorithms are known whose average complexity goes like n^(3/2) (shellsort), n log n (heapsort, mergesort, and quicksort), and for restricted kinds of data, n (distribution sort). The Unix sort command implementation has received extensive study and optimization: you can be confident that it will do the job efficiently, and almost certainly better than you can do yourself without learning a lot more about sorting algorithms.
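As a rough check of that claim, you can time sort on a generated test file. In this sketch the filename is made up, and shuf (which randomizes line order) is a GNU coreutils extension, so substitute your own data if it is not available; the actual timings, of course, vary from system to system:
$ seq 1000000 | shuf > million-lines
Make a shuffled million-line test file
$ time sort -n million-lines > /dev/null
Time a numeric sort of it, discarding the output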
Sort Stability
An important question about sorting algorithms is whether or not they are stable: that is, is the input order of equal records preserved in the output? A stable sort may be desirable when records are sorted by multiple keys, or more than once in a pipeline. POSIX does not require that sort be stable, and most implementations are not, as this example shows:
$ sort -t_ -k1,1 -k2,2 << EOF
Sort four lines by first two fields
> one_two
> one_two_three
> one_two_four
> one_two_five
> EOF
one_two
one_two_five
one_two_four
one_two_three
The sort fields are identical in each record, but the output differs from the input, so sort is not stable. Fortunately, the GNU implementation in the coreutils package[1] remedies that deficiency via the --stable option: its output for this example correctly matches the input.
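With a GNU sort, repeating the example with --stable shows the difference: records whose sort fields compare equal now come out in their original input order, so the output matches the input.
$ sort --stable -t_ -k1,1 -k2,2 << EOF
Sort the same four lines, stably
> one_two
> one_two_three
> one_two_four
> one_two_five
> EOF
one_two
one_two_three
one_two_four
one_two_five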
Sort Wrap-Up
sort certainly ranks in the top ten Unix commands: learn it well because you'll use it often. More details on sort are provided in the sidebar near the start of this chapter, but consult the manual pages for sort(1) for the complete story on your system. sort is, of course, standardized by POSIX, so it should be available on every computer that you are likely to use.
* * *
[1] Available at ftp://ftp.gnu.org/gnu/coreutils/.
Removing Duplicates
It is sometimes useful to remove consecutive duplicate records from a data stream. We showed in Section 4.1.2 that sort -u would do that job, but we also saw that the elimination is based on matching keys rather than matching records. The uniq command provides another way to filter data: it is frequently used in a pipeline to eliminate duplicate records downstream from a sort operation:
sort ... | uniq | ...
uniq has three frequently used options. The -c option prefixes each output line with a count of the number of times that it occurred, and we will use it in the word-frequency filter in Example 5-5 in Chapter 5. The -d option shows only lines that are duplicated, and the -u option shows just the nonduplicate lines. Here are some examples:
$ cat latin-numbers
Show the test file
tres
unus
duo
tres
duo
tres
$ sort latin-numbers | uniq
Show unique sorted records
duo
tres
unus
$ sort latin-numbers | uniq -c
Count unique sorted records
2 duo
3 tres
1 unus
$ sort latin-numbers | uniq -d
Show only duplicate records
duo
tres
$ sort latin-numbers | uniq -u
Show only nonduplicate records
unus
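The counts from uniq -c also combine nicely with a second sort pass to rank records by how often they occur; here is a minimal sketch using the same test file (the book develops a fuller word-frequency filter in Example 5-5):
$ sort latin-numbers | uniq -c | sort -nr
Count the records, then rank by descending count
3 tres
2 duo
1 unus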
uniq is sometimes a useful complement to the diff utility for figuring out the differences between two similar data streams: dictionary word lists, pathnames in mirrored directory trees, telephone books, and so on. Most implementations have other options that you can find described in the manual pages for uniq(1), but their use is rare. Like sort, uniq is standardized by POSIX, so you can use it everywhere.
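One handy idiom of that flavor, sketched here with hypothetical filenames and on the assumption that neither list contains internal duplicates: list the first file twice alongside the second, sort the lot, and keep only the nonduplicated lines, which are precisely the lines found in the second file but not the first.
$ sort old-words old-words new-words | uniq -u
Show lines in new-words that are absent from old-words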
Reformatting Paragraphs
Most powerful text editors provide commands that