CompTIA A+ Certification All-In-One Exam Guide, Seventh Edition - Michael Meyers
And boy howdy, is PCIe ever fast! A PCIe connection uses one wire for sending and one for receiving. Each of these pairs of wires between a PCIe controller and a device is called a lane. Each direction of a lane runs at 2.5 Gbps, or 5 Gbps with PCIe 2.0. Better yet, each point-to-point connection can use 1, 2, 4, 8, 12, 16, or 32 lanes to achieve a maximum theoretical bandwidth of 320 Gbps. The effective data rate drops a little bit because of the encoding scheme (the way the data is broken down and reassembled), but full-duplex data throughput can go up to a whopping 16 GBps on a ×16 connection. The most common PCIe slot is the 16-lane (×16) version, most often used for video cards, as shown in Figure 8-12. The first PCIe motherboards used a combination of a single PCIe ×16 slot and a number of standard PCI slots. (Remember, PCIe is designed to work with other expansion slots, even other types of PCIe.) There is also a small form factor version of PCI Express for mobile computers called PCI Express Mini Card.
* * *
NOTE When you talk about the lanes, such as ×1 or ×8, use “by” rather than “ex” for the multiplication mark. So “by 1” and “by 8” are the correct pronunciations. You’ll of course hear it spoken as both “by 8” and “8 ex” for the next few years until the technology becomes a household term.
Figure 8-12 PCIe ×16 slot (black) with PCI slots (white)
The bandwidth generated by a ×16 slot is far more than anything other than a video card would need, so most PCIe motherboards also contain slots with fewer lanes. Currently ×1 and ×4 are the most common general-purpose PCIe slots, but PCIe is still pretty new, so expect things to change as PCIe matures (see Figure 8-13).
Figure 8-13 PCIe ×1 slots
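The bandwidth figures above are easy to reproduce with a little arithmetic. The sketch below (my illustration; the function names are mine, and it assumes the 8b/10b encoding that PCIe 1.x and 2.0 use, where only 8 of every 10 bits on the wire carry data) shows where the 320 Gbps theoretical maximum and the ×16 full-duplex throughput come from:

```python
# Rough PCIe 1.x / 2.0 bandwidth arithmetic (illustrative sketch).

def lane_rate_gbps(generation):
    """Raw signaling rate per lane, per direction, in Gbps."""
    return {1: 2.5, 2: 5.0}[generation]

def effective_gbps(generation, lanes, both_directions=False):
    """Data throughput after 8b/10b encoding overhead (8 data bits
    per 10 wire bits), in Gbps."""
    rate = lane_rate_gbps(generation) * lanes * (8 / 10)
    return rate * 2 if both_directions else rate

# Maximum theoretical (raw) bandwidth: 32 lanes of PCIe 2.0,
# counting both directions of each lane.
print(lane_rate_gbps(2) * 32 * 2)   # 320.0 Gbps

# Full-duplex effective throughput of a ×16 PCIe 2.0 link,
# converted from gigabits to gigabytes per second.
print(effective_gbps(2, 16, both_directions=True) / 8)   # 16.0 GBps
```

Note that published totals vary depending on whether a source counts one direction or both, and raw versus post-encoding bits; the arithmetic itself is the same either way.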
System Resources
All devices on your computer, including your expansion cards, need to communicate with the CPU. Unfortunately, just using the word communication is too simplistic, because communication between the CPU and devices isn’t like a human conversation. In the PC, only the CPU “talks” in the form of BIOS or driver commands—devices only react to the CPU’s commands. You can divide communication into four aspects called system resources: I/O addresses, IRQs, DMA channels, and memory addresses.
Not all devices use all four system resources. All devices use I/O addressing and most use IRQs, but very few use DMA or memory. System resources are not new; they’ve been with PCs since the first IBM PC.
New devices must have their system resources configured. Configuration happens more or less automatically now through the plug and play process, but in the old days it was a painstaking manual process. (You kids don’t know how good you have it. Oops! Sorry, old-man voice.) Even though system resource configuration is now automated, you still might run into these resources in a few places on a modern PC. On those rare occasions, you’ll need to understand I/O addresses, IRQs, DMA channels, and memory addresses to make changes as needed. Let’s look at each system resource in detail to understand what they are and how they work.
I/O Addresses
The CPU gives a command to a device by using a pattern of ones and zeroes called an I/O address. Every device responds to at least four I/O addresses, meaning the CPU can give at least four different commands to each device. The process of communicating through I/O addresses is called, quite logically, I/O addressing. Here’s how it works.
The chipset extends the address bus to the expansion slots, which makes two interesting things happen. First, you can place RAM on a card, and the CPU can address it just as it can your regular RAM. Devices such as video cards come with their own RAM. The CPU draws the screen by writing directly to the RAM on the video card. Second, the CPU can use the address bus to talk to all of the devices on your computer through I/O addressing.
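I/O addressing boils down to a lookup: the CPU puts an address pattern on the bus, and whichever device claims that address reacts to the command. The toy model below is my illustration, not real firmware; the base addresses (0x060 for the keyboard controller, 0x3F8 for COM1) are classic PC examples, but the device behavior is invented:

```python
# Toy model of I/O addressing: each device claims a small range of
# I/O addresses, and the "CPU" selects a device by the address it
# puts on the bus. Purely illustrative; no real hardware is touched.

class Device:
    def __init__(self, name, base, count=4):
        # Per the text, a device responds to at least four addresses.
        self.name = name
        self.ports = range(base, base + count)

# Build the I/O map: address -> owning device.
io_map = {}
for dev in (Device("keyboard controller", 0x060, 8),
            Device("COM1 serial port", 0x3F8, 8)):
    for port in dev.ports:
        io_map[port] = dev

def cpu_out(port, value):
    """CPU writes a command byte to an I/O address."""
    dev = io_map.get(port)
    if dev is None:
        return None  # no device claims this address
    return f"{dev.name} saw command {value:#04x} at port {port:#05x}"

print(cpu_out(0x064, 0xAE))  # keyboard controller reacts
print(cpu_out(0x3F8, 0x41))  # serial port reacts
print(cpu_out(0x999, 0x00))  # unclaimed address: None
```

The point of the model is the one-way nature of the exchange described above: the CPU issues the command, and only the device that owns the address responds.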
Normally the address bus on an expansion bus works