Mastering Algorithms With C - Kyle Loudon [26]
* *
**************************************************************************/
while (i >= 0 && compare(&a[i * esize], key) > 0) {
memcpy(&a[(i + 1) * esize], &a[i * esize], esize);
i--;
}
memcpy(&a[(i + 1) * esize], key, esize);
}
/*****************************************************************************
* *
* Free the storage allocated for sorting. *
* *
*****************************************************************************/
free(key);
return 0;
}
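The loop above is the shifting step of insertion sort. For context, here is a self-contained sketch of the whole routine, reconstructed around the fragment; the signature and surrounding declarations are assumptions based on how issort is used in this chapter, not a verbatim quote:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of insertion sort over an untyped array: size elements of
   esize bytes each, ordered by compare. Returns 0 on success, -1 on
   allocation failure. */
int issort(void *data, int size, int esize,
           int (*compare)(const void *key1, const void *key2)) {

   char *a = data;
   void *key;
   int  i, j;

   /* Allocate storage for the element being inserted. */
   if ((key = malloc(esize)) == NULL)
      return -1;

   /* Repeatedly insert a[j] into the sorted region a[0 .. j-1]. */
   for (j = 1; j < size; j++) {
      memcpy(key, &a[j * esize], esize);
      i = j - 1;

      /* Shift larger elements one position to the right. */
      while (i >= 0 && compare(&a[i * esize], key) > 0) {
         memcpy(&a[(i + 1) * esize], &a[i * esize], esize);
         i--;
      }
      memcpy(&a[(i + 1) * esize], key, esize);
   }

   free(key);
   return 0;
}

/* Example comparison function for ints (assumes no overflow). */
int compare_int(const void *key1, const void *key2) {
   return *(const int *)key1 - *(const int *)key2;
}
```

A caller would invoke it as, for example, issort(array, 5, sizeof(int), compare_int). The worst case occurs on reverse-sorted input, where the inner while loop shifts every element of the sorted region, giving O(n²) time.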
Questions and Answers
Q: From lowest to highest, what is the correct order of the complexities O(n²), O(3ⁿ), O(2ⁿ), O(n² lg n), O(1), O(n lg n), O(n³), O(n!), O(lg n), O(n)?
A: From lowest to highest, the correct order of these complexities is O(1), O(lg n), O(n), O(n lg n), O(n²), O(n² lg n), O(n³), O(2ⁿ), O(3ⁿ), O(n!).
Q: What are the complexities of T1(n) = 3n lg n + lg n; T2(n) = 2ⁿ + n³ + 25; and T3(n, k) = k + n, where k ≤ n? From lowest to highest, what is the correct order of the resulting complexities?
A: Using the rules of O-notation, the complexities of T1, T2, and T3 respectively are O(n lg n), O(2ⁿ), and O(n). From lowest to highest, the correct order of these complexities is O(n), O(n lg n), and O(2ⁿ).
Q: Suppose we have written a procedure to add m square matrices of size n × n. If adding two square matrices requires O(n²) running time, what is the complexity of this procedure in terms of m and n?
A: To add m matrices of size n × n, we must perform m - 1 additions, each requiring time O(n²). Therefore, the overall running time of this procedure is:
O(m - 1)O(n²) = O(m)O(n²) = O(mn²)
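The count of m - 1 additions can be checked directly in code. The sketch below (names and storage layout are illustrative, not from the text) performs exactly (m - 1) · n² element additions, matching the O(mn²) bound:

```c
#include <stddef.h>

/* Sum m square matrices of size n x n into result. Each matrix is
   stored contiguously in row-major order, one after another in
   matrices. Performs m - 1 matrix additions, each touching n * n
   elements: O(mn^2) overall. */
void sum_matrices(const double *matrices, size_t m, size_t n,
                  double *result) {
   size_t k, i, j;

   /* Start with a copy of the first matrix: O(n^2). */
   for (i = 0; i < n * n; i++)
      result[i] = matrices[i];

   /* Add the remaining m - 1 matrices: (m - 1) * n^2 additions. */
   for (k = 1; k < m; k++)
      for (i = 0; i < n; i++)
         for (j = 0; j < n; j++)
            result[i * n + j] += matrices[k * n * n + i * n + j];
}
```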
Q: Suppose we have two algorithms to solve the same problem. One runs in time T1(n) = 400n, whereas the other runs in time T2(n) = n². What are the complexities of these two algorithms? For what values of n might we consider using the algorithm with the higher complexity?
A: The complexity of T1 is O(n), and the complexity of T2 is O(n²). However, the algorithm described by T1 involves such a large constant coefficient for n that when n < 400, the algorithm described by T2 would be preferable. This is a good example of why we sometimes consider other factors besides the complexity of an algorithm alone.
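The crossover point can be made concrete with the two running-time formulas from the question written as cost functions (illustrative only; these are formulas, not real algorithms):

```c
/* T1 grows as O(n) but with a large constant factor; T2 grows as
   O(n^2). T2(n) < T1(n) exactly when n < 400. */
long t1(long n) { return 400 * n; }   /* O(n), constant factor 400 */
long t2(long n) { return n * n; }     /* O(n^2) */

/* Returns 1 if the O(n^2) algorithm is the better choice for this n. */
int prefer_quadratic(long n) {
   return t2(n) < t1(n);
}
```

For n = 100, T1 = 40,000 while T2 = 10,000, so the asymptotically worse algorithm wins; the two curves cross at n = 400, after which the O(n) algorithm is faster.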
Q: How do we account for calls such as memcpy and malloc in analyzing real code? Although these calls often depend on the size of the data processed by an algorithm, they are really more of an implementation detail than part of an algorithm itself.
A: Usually calls such as memcpy and malloc are regarded as executing in a constant amount of time. Generally, they can be expected to execute very efficiently at the machine level regardless of how much data they are copying or allocating. Of course, their exact efficiency may depend on the computer on which they execute as well as other factors (particularly in the case of malloc, which depends on the state of the system at the moment it is called).
Related Topics
Recurrences
Functions frequently used in the formal analysis of recursive algorithms. A recurrence is itself a recursive function: a function defined in terms of itself (see Chapter 3). Recurrences are useful in describing the performance of recursive algorithms because they let us express an algorithm's running time in terms of invoking the algorithm on successively more refined sets of inputs, bringing us closer and closer to a solution.
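As an illustration of a recurrence, consider recursive binary search (a sketch for this discussion, not a listing from the book). Each call works on half the previous input, so its running time satisfies T(n) = T(n/2) + O(1), which solves to O(lg n):

```c
/* Recursive binary search in a sorted int array between indices lo
   and hi. Returns the index of target, or -1 if absent. Running time
   satisfies the recurrence T(n) = T(n/2) + O(1), i.e., O(lg n). */
int bsearch_rec(const int *a, int lo, int hi, int target) {
   int mid;

   if (lo > hi)
      return -1;                                   /* base case: O(1) */

   mid = lo + (hi - lo) / 2;

   if (a[mid] == target)
      return mid;
   else if (a[mid] < target)
      return bsearch_rec(a, mid + 1, hi, target);  /* T(n/2) */
   else
      return bsearch_rec(a, lo, mid - 1, target);  /* T(n/2) */
}
```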
Summation formulas
Mathematical formulas useful in simplifying summations that describe the running times of algorithms. Summations occur frequently as the result of analyzing iterative control structures.
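For example, the body of the triangular nested loop below executes 1 + 2 + ... + n times, a summation that the closed form n(n + 1)/2 simplifies, giving O(n²). This counting function (illustrative, not from the text) verifies the formula:

```c
/* Count how many times the body of a triangular nested loop runs.
   The total is the summation of i for i = 1 to n, whose closed form
   is n(n + 1)/2; hence loops of this shape run in O(n^2) time. */
long triangular_count(long n) {
   long i, j, count = 0;

   for (i = 1; i <= n; i++)
      for (j = 1; j <= i; j++)
         count++;          /* executes n(n + 1)/2 times in total */

   return count;
}
```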
Θ-notation, Ω-notation, o-notation, and ω-notation
Additional notations used to represent information about an algorithm's performance. Whereas O-notation expresses the upper bound of a function within a constant factor, Θ-notation expresses a bound from above and below. Ω-notation expresses strictly a lower bound within