printf()
But before we get to numbers, let’s take a closer look at the printf() function you’ve been using. printf() prints a string to the log. A string is a sequence of characters. Basically, it’s text.
Reopen your ClassCertificates project. In main.c, find congratulateStudent().
void congratulateStudent(char *student, char *course, int numDays)
{
    printf("%s has done as much %s Programming as I could fit into %d days.\n",
           student, course, numDays);
}
What does this call to printf() do? Well, you’ve seen the output; you know what it does. Now let’s figure out how.
printf() is a function that accepts a string as an argument. You can make a literal string (as opposed to a string that’s stored in a variable) by surrounding text in double quotes.
The string that printf() takes as an argument is known as the format string, and the format string can have tokens. The three tokens in this string are %s, %s, and %d. When the program is run, the tokens are replaced with the values of the variables that follow the string. In this case, those variables are student, course, and numDays. Notice that they are replaced in order in the output. If you swapped student and course in the list of variables, you would see
Cocoa has done as much Mark Programming as I could fit into 5 days.
However, tokens and variables are not completely interchangeable. The %s token expects a string. The %d expects an integer. (Try swapping them and see what happens.)
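Here is a minimal standalone sketch you can try in its own main() (the name and number below are placeholders, not values from the ClassCertificates project):

#include <stdio.h>

int main(void)
{
    char *student = "Mark";
    int numDays = 5;

    // %s is replaced by the string, %d by the integer, in order.
    printf("%s attended for %d days.\n", student, numDays);

    return 0;
}

If you swap the arguments so that a number lands on a %s token, most compilers will warn you that the tokens and arguments no longer match.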
Notice that student and course are declared as type char *. For now, just read char * as a type that is a string. We’ll come back to strings in Objective-C in Chapter 14 and back to char * in Chapter 34.
Finally, what’s with the \n? In printf() statements, you have to include an explicit new-line character or all the log output will run together on one line. \n represents the new-line character.
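You can see the difference with a small sketch (not code from the project):

#include <stdio.h>

int main(void)
{
    // Without \n, the two messages run together on one line: "firstsecond"
    printf("first");
    printf("second\n");

    // With \n, each message gets its own line.
    printf("first\n");
    printf("second\n");

    return 0;
}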
Now let’s get back to numbers.
Integers
An integer is a number without a decimal point – a whole number. Integers are good for problems like counting. Some problems, like counting every person on the planet, require really large numbers. Other problems, like counting the number of children in a classroom, require numbers that aren’t as large.
To address these different problems, integer variables come in different sizes. An integer variable has a certain number of bits in which it can encode a number, and the more bits the variable has, the larger the number it can hold. Typical sizes are: 8-bit, 16-bit, 32-bit, and 64-bit.
Similarly, some problems require negative numbers, while others do not. So, integer types come in signed and unsigned varieties.
An unsigned 8-bit number can hold any integer from 0 to 255. How did I get that? 2^8 = 256 possible numbers. And we choose to start at 0.
A signed 64-bit number can hold any integer from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. One of the 64 bits is spent on the sign (+ or -), which leaves 2^63 = 9,223,372,036,854,775,808 values on each side of zero; the negative side gets one extra because zero sits on the non-negative side.
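If you want to see these limits on your own machine, the C99 headers stdint.h and inttypes.h provide exact-width types and their ranges. A minimal sketch (the values printed should match the numbers above):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    // 2^8 = 256 possible values; unsigned starts at 0, so 0 through 255.
    printf("unsigned 8-bit: 0 to %d\n", UINT8_MAX);

    // Signed 64-bit: one bit goes to the sign, leaving 2^63 values
    // on each side of zero.
    printf("signed 64-bit: %" PRId64 " to %" PRId64 "\n", INT64_MIN, INT64_MAX);

    return 0;
}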
When you declare an integer, you can be very specific:
UInt32 x; // An unsigned 32-bit integer
SInt16 y; // A signed 16-bit integer
However, it is more common for programmers just to use the descriptive types that you learned in Chapter 3.
char a; // 8 bits
short b; // Usually 16 bits (depending on the platform)
int c; // Usually 32 bits (depending on the platform)
long d; // 32 or 64 bits (depending on the platform)
long long e; // 64 bits
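If you are curious what these sizes actually are on your platform, a quick sketch using sizeof will tell you (the numbers it prints depend on your compiler and machine):

#include <stdio.h>

int main(void)
{
    // sizeof reports the width of each type in bytes (8 bits each).
    printf("char:      %zu bytes\n", sizeof(char));
    printf("short:     %zu bytes\n", sizeof(short));
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));

    return 0;
}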
Why is char a number? Any character can be described as an 8-bit number, and computers prefer to think in numbers. What about sign? char, short, int, long, and long long are signed by default, but you can prefix them with unsigned to create the unsigned equivalent.
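To see a char act as a number, print the same variable with both %c and %d (a small sketch; the letter chosen is arbitrary):

#include <stdio.h>

int main(void)
{
    char a = 'A';

    // %c prints the character; %d prints the number that encodes it (65 here).
    printf("%c is stored as %d\n", a, a);

    // Prefixing the type with unsigned gives the 0-to-255 variety.
    unsigned char b = 200;
    printf("an unsigned char can hold %d\n", b);

    return 0;
}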
Also, the sizes of integers depend on the platform. (A platform is a combination of an operating system and a particular computer or mobile device.) Some platforms are 32-bit and others are 64-bit. The difference is in the size of the memory address, and we’ll talk more about that in