Classic Shell Scripting - Arnold Robbins
Jon Bentley wrote an interesting column in Communications of the ACM titled Programming Pearls. Some of the columns were later collected, with substantial changes, into two books listed in Chapter 16. In one of the columns, Bentley posed this challenge: write a program to process a text file, and output a list of the n most-frequent words, with counts of their frequency of occurrence, sorted by descending count. Noted computer scientists Donald Knuth and David Hanson responded separately with interesting and clever literate programs,[7] each of which took several hours to write. Bentley's original specification was imprecise, so Hanson rephrased it this way: Given a text file and an integer n, you are to print the words (and their frequencies of occurrence) whose frequencies of occurrence are among the n largest in order of decreasing frequency.

In the first of Bentley's articles, fellow Bell Labs researcher Doug McIlroy reviewed Knuth's program, and offered a six-step Unix solution that took only a couple of minutes to develop and worked correctly the first time. Moreover, unlike the two other programs, McIlroy's is devoid of explicit magic constants that limit the word lengths, the number of unique words, and the input file size. Also, its notion of what constitutes a word is defined entirely by simple patterns given in its first two executable statements, making changes to the word-recognition algorithm easy.

McIlroy's program illustrates the power of the Unix tools approach: break a complex problem into simpler parts that you already know how to handle. To solve the word-frequency problem, McIlroy converted the text file to a list of words, one per line (tr does the job), mapped words to a single lettercase (tr again), sorted the list (sort), reduced it to a list of unique words with counts (uniq), sorted that list by descending counts (sort), and finally, printed the first several entries in the list (sed, though head would work too).
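Each stage's contribution is easy to see by running a small input of our own through the pipeline; the inline comments (an illustration, not part of the book's script) show the intermediate results at each step:

```shell
printf 'To be, or not to be.\n' |
  tr -cs A-Za-z\' '\n' |  # one word per line:   To be or not to be
  tr A-Z a-z |            # lowercased:          to be or not to be
  sort |                  # sorted:              be be not or to to
  uniq -c |               # counted:             2 be, 1 not, 1 or, 2 to
  sort -k1,1nr -k2        # by count, then word: 2 be, 2 to, 1 not, 1 or
```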

The resulting program is worth being given a name (wf, for word frequency) and wrapped in a shell script with a comment header. We also extend McIlroy's original sed command to make the output list-length argument optional, and we modernize the sort options. We show the complete program in Example 5-5.

Example 5-5. Word-frequency filter

#! /bin/sh
# Read a text stream on standard input, and output a list of
# the n (default: 25) most frequently occurring words and
# their frequency counts, in order of descending counts, on
# standard output.
#
# Usage:
#       wf [n]

tr -cs A-Za-z\' '\n' |    # Replace nonletters with newlines
  tr A-Z a-z |            # Map uppercase to lowercase
  sort |                  # Sort the words in ascending order
  uniq -c |               # Eliminate duplicates, showing their counts
  sort -k1,1nr -k2 |      # Sort by descending count, and then by ascending word
  sed ${1:-25}q           # Print only the first n (default: 25) lines; see Chapter 3
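To try the script, save it as wf and make it executable. The fragment below recreates a minimal copy of the pipeline inline so the example is self-contained; the sample sentence is ours, and the width of uniq's count column varies among implementations:

```shell
cat > wf <<'EOF'
#! /bin/sh
tr -cs A-Za-z\' '\n' | tr A-Z a-z | sort | uniq -c |
  sort -k1,1nr -k2 | sed ${1:-25}q
EOF
chmod +x wf

printf 'The cat and the dog and the bird\n' | ./wf 3
# prints (column width may differ):
#       3 the
#       2 and
#       1 bird
```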

POSIX tr supports all of the escape sequences of ISO Standard C. The older X/Open Portability Guide specification only had octal escape sequences, and the original tr had none at all, forcing the newline to be written literally, which was one of the criticisms levied at McIlroy's original program. Fortunately, the tr command on every system that we tested now has the POSIX escape sequences.
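The difference is only in how the newline is spelled; on a tr that supports both notations, these two invocations behave identically:

```shell
# POSIX C-style escape:
printf 'one two three\n' | tr -cs A-Za-z '\n'
# X/Open octal escape, the only escape form older tr versions understood:
printf 'one two three\n' | tr -cs A-Za-z '\012'
# both print:
#   one
#   two
#   three
```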

A shell pipeline isn't the only way to solve this problem with Unix tools: Bentley gave a six-line awk implementation of this program in an earlier column[8] that is roughly equivalent to the first four stages of McIlroy's pipeline.
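We don't reproduce Bentley's code here, but a sketch of such an awk word counter (our own, not his; apostrophes are dropped for brevity) looks like this, with sort and sed supplying the final two stages of McIlroy's pipeline:

```shell
printf 'The cat and the dog and the bird\n' |
  awk '{ $0 = tolower($0)            # map to lowercase
         gsub(/[^a-z]+/, " ")        # replace nonletters with spaces
         for (i = 1; i <= NF; i++)   # count each word
           count[$i]++ }
       END { for (w in count)        # emit count/word pairs
               print count[w], w }' |
  sort -k1,1nr -k2 | sed 3q
# prints:
#   3 the
#   2 and
#   1 bird
```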

Knuth and Hanson discussed the computational complexity of their programs, and Hanson used runtime profiling to investigate several variants of his program to find the fastest one.

The complexity of McIlroy's program is easy to identify. All but the sort stages run in time that is linear in the size of their input, and that size is usually sharply reduced after the uniq stage. Thus, the rate-limiting step is the first sort. A good comparison-based sorting algorithm, like that in Unix sort, can sort n items in a time proportional to n log2 n. The logarithm-to-the-base-2
