Opinion » Cybernaut


Hardware revolutions



It’s interesting how the discovery or refinement of one technology snowballs, creating an avalanche of new tech breakthroughs that change the way things work.

For example, it was only a few years ago that Intel and AMD were locked in a battle of transistors to see how many they could cram onto a single chip. It was all about nanometers, or how finely they could etch lines of copper onto the silicon without the circuits breaking down.

Then something happened. In 2003 AMD came out with the first mass-produced 64-bit CPU (central processing unit) for businesses and home consumers, followed closely by Intel. A 64-bit architecture basically lets your computer process data in bigger chunks and address far more memory than a 32-bit chip, which opens the door for burlier programs, larger files and applications, and all-around better performance.
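To put rough numbers to that memory ceiling: a chip with n-bit addresses can reach 2^n bytes. A quick back-of-the-envelope calculation (my own arithmetic, not anything from the chip makers):

```python
# How much memory an n-bit address can reach; each address names one byte.
addressable_32 = 2 ** 32          # 32-bit chip: 4,294,967,296 bytes
addressable_64 = 2 ** 64          # 64-bit chip: 18,446,744,073,709,551,616 bytes

print(addressable_32 // 2 ** 30)  # 4  -- gigabytes, the 32-bit ceiling
print(addressable_64 // 2 ** 60)  # 16 -- exabytes, effectively no ceiling
```

In other words, a 32-bit processor tops out at 4 GB of addressable memory, while a 64-bit processor's limit is billions of times higher.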

Then companies started to give some thought to chip architecture. Rather than adding more physical processors to a machine, as animators and film editors had been doing for years, the big chip companies started to create dual-core processors where two CPUs share the same piece of silicon, dividing up tasks and essentially allowing computers to do more things at once.
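One way to picture that division of labour is a pool of two workers splitting a batch of jobs between them. A minimal sketch in Python (the worker function and task list are my own illustration, not anything from the chip makers):

```python
# Toy sketch: divide a batch of tasks between two worker processes,
# roughly as an operating system spreads work across two cores.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=2) as pool:           # two workers, one per core
        results = pool.map(square, range(8))  # tasks split between them
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Each worker handles its share of the tasks at the same time, and the results come back in order, which is the whole appeal of putting two cores on one chip.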

The chip makers are now looking at ways to double and even quadruple that capability.

At the same time motherboard manufacturers are doing their best to design boards that can accommodate several chips while increasing the speed and volume of data transferred between the main system components — hard drive, memory, CPU, graphics processor, cache, and I/O systems (USB and FireWire ports, Internet and wireless connections, etc.).

The Cell processor represents another new direction in architecture. Created by Sony, Toshiba and IBM at a cost of about $400 million, the chip includes a Power Architecture core processor connected to seven sub-processors that handle the tasks assigned to them. The first model, which will be included in the Sony PS3 game console, is capable of performing three to 12 times faster than contemporary desktop processors, depending on the nature of the application and how the software is written.

Last week another breakthrough in architecture was announced. Nvidia (www.nvidia.com) showed off a new graphics chip with built-in general-purpose processors that can take on some of the computer's processing duties. Since almost all applications are graphics-based these days, even if it's just in terms of the interface, it can be more efficient to integrate graphics and general processing on the same chip.

One of the issues with graphics processors is that they are seldom used to their potential because of other bottlenecks in a computer as information travels from media to CPU to graphics card and out. Now, with the GeForce 8800 GTX, the same information can be processed on the graphics card itself rather than being packaged and routed through the motherboard. The new graphics chip should also reduce power consumption, something that typically brings additional performance benefits.