Professional C++ - Marc Gregoire [31]
What are all the return values (by value or reference) from a call? What are all the possible exceptions thrown?
Here are some points to keep in mind for a framework:
If you inherit from a class, which constructor should you call on it? Which virtual methods should you override?
What memory are you responsible for freeing, and what memory is the framework responsible for freeing?
Understand the Performance
It is important to know the performance guarantees that a library or other code provides. Even if your program is not performance sensitive, you should make sure that the code you use doesn't have terrible performance for your particular use case.
Big-O Notation
Programmers generally discuss and document algorithm and library performance using big-O notation. This section explains the general concepts of algorithm complexity analysis and big-O notation without a lot of unnecessary mathematics. If you are already familiar with these concepts, you may skip this section.
Big-O notation specifies relative, rather than absolute, performance. For example, instead of saying that an algorithm runs in a specific amount of time, such as 300 milliseconds, big-O notation specifies how an algorithm performs as its input size increases. Examples of input sizes include the number of items to be sorted by a sorting algorithm, the number of elements in a hash table during a key lookup, and the size of a file to be copied between disks.
Note that big-O notation applies only to algorithms whose speed depends on their inputs. It does not apply to algorithms that take no input or whose running time is random. In practice, you will find that the running times of most algorithms of interest depend on their input, so this limitation is not significant.
To be more formal: Big-O notation specifies algorithm run time as a function of its input size, also known as the complexity of the algorithm. However, that's not as complicated as it sounds. For example, suppose that a sorting algorithm takes 50 milliseconds to sort 500 elements and 100 milliseconds to sort 1,000 elements. Because it takes twice as long to sort twice as many elements, its performance is linear as a function of its input. That is, you could graph the performance versus input size as a straight line. Big-O notation summarizes the sorting algorithm's performance like this: O(n). The O just means that you're using big-O notation, while the n represents the input size. O(n) specifies that the sorting algorithm's speed is a direct linear function of the input size.
Unfortunately, not all algorithms have performance that is linear with respect to the input size. Computer programs would run a lot faster if that were true. The following table summarizes the common categories of functions, in order of their performance from best to worst:

Constant             O(1)
Logarithmic          O(log n)
Linear               O(n)
Linear-Logarithmic   O(n log n)
Quadratic            O(n^2)
Exponential          O(2^n)
There are two advantages to specifying performance as a function of the input size instead of in absolute numbers:
1. It is platform independent. Specifying that a piece of code runs in 200 milliseconds on one computer says nothing about its speed on a second computer. It is also difficult to compare two different algorithms without running them on the same computer with the exact same load. On the other hand, performance specified as a function of the input size is applicable to any platform.
2. Performance as a function of input size covers all possible inputs to the algorithm with one specification. The specific time in seconds that an algorithm takes to run covers only one specific input, and says nothing about any other input.
Sometimes, statistical expectations are taken into account when working with big-O notation, in which case big-O represents the expected running time. For example, a linear search is often said to be O(n/2) because, statistically, about half of the elements need to be examined on each search. The number of cases that are found in fewer than n/2 comparisons is compensated for by the number of cases that require