CompTIA A+ Certification All-In-One Exam Guide, Seventh Edition - Michael Meyers [122]
Table 8-1 COM and LPT Assignments

Port   I/O Base Address   IRQ
COM1   03F8h              4
COM2   02F8h              3
COM3   03E8h              4
COM4   02E8h              3
LPT1   0378h              7
LPT2   0278h              5
Notice that the four COM ports share two IRQs. In the old days, if two devices shared an IRQ, the system instantly locked up. The shortage of available IRQs in early systems led IBM to double up the IRQs for the serial devices, creating one of the few exceptions to the rule that no two devices could share an IRQ. You could share an IRQ between two devices, but only if one of the devices never actually used it. You'd see this with a dedicated fax/modem card, for example: a single phone line connected to one card that performed two different functions. The CPU needed distinct sets of I/O addresses for fax commands and modem commands, but because one modem did both jobs, the card needed only a single IRQ.
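The sharing pattern in Table 8-1 can be sketched in a few lines of code. The following is a conceptual Python model using the standard legacy I/O bases and IRQs; it is purely an illustration of the "distinct I/O addresses, shared IRQ" rule, not anything a real driver would use:

```python
# Conceptual model of the legacy PC serial-port resource map.
# The values are the standard defaults; real hardware could be remapped.
LEGACY_PORTS = {
    "COM1": {"io_base": 0x3F8, "irq": 4},
    "COM2": {"io_base": 0x2F8, "irq": 3},
    "COM3": {"io_base": 0x3E8, "irq": 4},
    "COM4": {"io_base": 0x2E8, "irq": 3},
}

def ports_sharing_irq(ports):
    """Group port names by IRQ to show which devices share a line."""
    shared = {}
    for name, resources in ports.items():
        shared.setdefault(resources["irq"], []).append(name)
    return {irq: sorted(names) for irq, names in shared.items()}

# Four COM ports, two IRQs -- yet every port keeps a distinct I/O base.
print(ports_sharing_irq(LEGACY_PORTS))  # {4: ['COM1', 'COM3'], 3: ['COM2', 'COM4']}
```

Grouping by IRQ makes the exception visible: COM1 and COM3 sit on IRQ 4, COM2 and COM4 on IRQ 3, and the distinct I/O bases are what let the CPU still address each port individually.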
Direct Memory Access
CPUs do a lot of work. They run the BIOS, operating system, and applications. CPUs handle interrupts and I/O addresses. CPUs also deal with one other item: data. CPUs constantly move data between devices and RAM. CPUs move files from the hard drive to RAM. They move print jobs from RAM to laser printers, and they move images from scanners to RAM, to name just a few examples of this RAM-to-device-and-back traffic.
Moving all this data is obviously necessary, but it is simple work, and the CPU has better things to do with its power and time. Moreover, with all of the caches on today's CPUs, the external data bus sits idle much of the time while the CPU handles internal calculations. Add these facts together and the question arises: Why not make devices that access memory directly, without involving the CPU (Figure 8-23)? The process of accessing memory without using the CPU is called direct memory access (DMA).
DMA is very common and is excellent for creating background sounds in games and for moving data from floppy and hard drives into RAM (Figure 8-24).
Nice as it may sound, the concept of DMA as just described has a problem—there’s only one expansion bus. What if more than one device wants to use DMA? What keeps these devices from stomping on the external data bus all at the same time? Plus, what if the CPU suddenly needs the data bus? How can you stop the device using DMA so the CPU, which should have priority, can access the bus? To deal with this, IBM added another traffic cop.
Figure 8-23 Why not talk to the chipset directly?
Figure 8-24 DMA in action
The DMA controller, which seasoned techs often call the 8237 after its old chip name, controls all DMA functions. DMA resembles IRQ handling in that the DMA controller assigns numbers, called DMA channels, by which devices request its services. The DMA controller also handles the actual data passing from peripherals to RAM and vice versa, taking necessary but simple work away from the CPU so the CPU can spend its time on more productive work.
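A toy model can make the channel idea concrete. This is a conceptual sketch only: the real 8237 is programmed through I/O ports and registers, and the class and method names below are invented for illustration.

```python
# Toy model of a DMA controller: devices claim numbered channels, and the
# controller copies data between a device and "RAM" without the CPU's help.
class DMAController:
    def __init__(self, num_channels=4):
        # Each channel is either free (None) or owned by one device.
        self.channels = {n: None for n in range(num_channels)}

    def assign_channel(self, channel, device_name):
        if self.channels[channel] is not None:
            raise ValueError(f"DMA channel {channel} already in use")
        self.channels[channel] = device_name

    def transfer_to_ram(self, channel, device_buffer, ram, address):
        """Copy a device's buffer into RAM on behalf of the channel's owner."""
        assert self.channels[channel] is not None, "channel not assigned"
        ram[address:address + len(device_buffer)] = device_buffer

ram = bytearray(64)
dma = DMAController()
dma.assign_channel(2, "floppy")            # channel 2: the classic floppy channel
dma.transfer_to_ram(2, b"SECTOR", ram, 0)  # data lands in RAM, CPU uninvolved
print(bytes(ram[:6]))                      # b'SECTOR'
```

The point of the sketch is the division of labor: the device asks for a channel once, and thereafter the controller, not the CPU, shovels the bytes.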
The DMA chip sends data along the external data bus when the CPU is busy with internal calculations and not using the external data bus. This is perfectly acceptable, because the CPU accesses the external data bus only about five percent of the time on a modern CPU.
The DMA just described is called classic DMA; it was the first and, for a long time, the only way to do DMA. Classic DMA is dying out because it is very slow and moves data in only 8- or 16-bit chunks, a silly waste in a world of much wider buses. On most systems, only floppy drives still use classic DMA.
All systems still support classic DMA, but most devices today that use DMA do so without going through the DMA controller. These devices are known as bus masters. Bus mastering devices have circuitry that enables them to watch for other devices accessing the external data bus; they can detect a potential conflict and get out of the way on their own. Bus mastering has become extremely popular in hard drives. All modern hard drives take advantage of bus mastering. Hard drive bus mastering is hidden under terms such as Ultra DMA, and for the most part is totally automatic and invisible. See Chapter 12, “Implementing Hard Drives,” for