MSI FX 5900 - Background & under the skin
The shortcomings of the NV30 and the products based on it have been well documented, and whilst it was a source of great embarrassment for NVIDIA, like any company they have moved on. Last month's unveiling of the NV35 was greeted with much optimism, as no one likes to see a single company run away with the spoils. NVIDIA knew they had to pull out all the stops, and whilst many were questioning some of the Doom III benchmarks, from a PR point of view it was a great coup for NVIDIA.

Though both ATI and NVIDIA agree that the bulk of their sales comes from their mainstream products, the high-end has much to offer in terms of kudos and prestige. Thanks to a cascading effect in both companies' product lines, what was high-end a year ago filters down to today's mainstream boards, so the flagship parts also give the "big buyers" an interesting insight into where their next order will go.

Whilst there are varying reports as to how much market share ATI has managed to regain (note the use of "regain" as a sign of ATI's current status), it's clear that the Markham-based company has done a lot in the past 10 months to win the hearts and minds of gamers and power users over to its camp. NVIDIA, on the other hand, has done exactly the opposite. Products based on the NV30 were delayed to such a point that by the time we started seeing them on the shelves, the NV35 was only days from launch. Not only that, but image quality suffered due to various driver issues. The running soap opera with FutureMark and NVIDIA in centre-stage roles (not forgetting a lesser role played by ATI) leaves a large gash that NVIDIA needs to heal with more than just gaffer tape.

Apart from the delays, there were other problems with the 5800. All 5800 Ultras were made by NVIDIA, which angered a number of the large board partners. Companies like Asus and MSI, recognised component manufacturers in their own right, were told that the 12-layer PCB was too complicated for them to manufacture. Couple that with the low yields NVIDIA got from the NV30 (which was fabricated by TSMC) and many large board partners were looking at a loss on the NV30. Sadly for NVIDIA, it may have been a case of expecting too much, too quickly from TSMC. Low yields don't just mean fewer parts; they also mean more expensive parts, as companies try to reclaim the lost dollars in manufacture.

The real kick in the teeth, however, was the architecture. The two points that instantly stand out are the four pipelines and the 128-bit memory bus. ATI, with their high-end R3xx GPUs, had 8 pipelines. Now, many users don't care how many pipelines their GPU has, how many bits wide its memory bus is, or what fabrication process it is made on, as long as they get high frame rates without a degradation in quality compared to a different manufacturer's product. Sadly for NVIDIA, those users are generally found at the lower end of the market, and certainly aren't the type of people who would spend in excess of 300 on a graphics card. So there was much anger directed at the NV30's design, and without doubt ATI did a fair amount to fuel the flames around the NV30 fire.

There is a huge amount of information thrown about regarding the technical details of graphics cards, most of which, sadly, is marketing babble. If you want to know what pipelining really is, and how it helps computers in general, there are many, many books on the subject. We will try to keep it brief and use a real-world example of what a pipeline really is; those of you doing (or who have done) a computer science degree will no doubt have met this in a somewhat more rigorous form. Why do we mention pipelining? Because it's a buzzword that is often used in press releases and product launches.

The aim of pipelining is to increase the amount of work the processor (be it the GPU or the CPU) gets done in a given time by doing tasks in parallel. The irony is that the length of an individual task doesn't shorten: if a set of instructions takes 5ms on a single-cycle processor (that is, one that doesn't have any pipelining), it will still take 5ms on a processor that has 2, 5 or 50 million pipeline stages. So how does it speed things up?

It's probably best explained by a real-world analogy. A popular one used in many courses is doing laundry (probably because lecturers know that students love doing it). Let's say you are lucky enough to have a washing machine and a dryer. Your tasks are:
1) Put the dirties into the washing machine.
2) When the washing machine has finished cleaning your clothes, you load them into the dryer.
3) When the dryer has finished shrinking your clothes to a crisp, you put them in a pile (ready to put in storage).
4) Get someone to put your clothes into storage.
The non-pipelined way of doing this (for this example we assume there is more than one load of laundry, and that every stage takes a fixed amount of time, X) would be to wait until you finish stage 4, then go back to stage 1. The pipelined way would be to load the washing machine with dirty clothes as soon as you take the cleaned load out, so whilst the dryer is doing its thing, the washing machine is cleaning your second load of clothes. In this little example pipelined laundry is up to 4 times faster than non-pipelined laundry.
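The laundry schedule above can be written down in a few lines of Python. This is a hypothetical sketch for illustration (the names and the one-unit stage time are our own assumptions, not from the article):

```python
# A toy model of the laundry pipeline: four stages -- wash, dry, pile,
# store -- each taking one fixed unit of time, X.

STAGES = 4
STAGE_TIME = 1  # the fixed time X per stage

def sequential_time(loads):
    # Non-pipelined: finish all four stages of one load before starting the next.
    return loads * STAGES * STAGE_TIME

def pipelined_time(loads):
    # Pipelined: the first load passes through all four stages; after that,
    # one load completes every stage-time, because the stages run in parallel.
    return (STAGES + loads - 1) * STAGE_TIME

print(sequential_time(4))  # 16 units of time
print(pipelined_time(4))   # 7 units of time
```

Note that the pipelined figure is not simply 16 divided by 4: the pipeline first has to fill up, which is why the first load still costs the full four stages.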

So whilst you may see little or no correlation between laundry and your graphics card, the basics of pipelining are present in both. In general, the advantage of pipelining a set of instructions can be found through this formula:

Time taken (pipelined) ≈ Time taken (non-pipelined) / number of pipeline stages

The essence of pipelining

So what does that equation mean? Take the time the task takes without pipelining and divide it by the number of stages required to carry out the task, and you get the time taken when doing it in a pipelined manner. Strictly speaking, that is an ideal figure that is only approached as the number of loads grows, because the pipeline first has to fill up: our laundry service with 4 sets of laundry takes 16 units of time done sequentially, but 7 units, not 4, done in a pipelined manner.
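The gap between the ideal figure and the real one can be checked with a short sketch (hypothetical names, same four-stage laundry assumptions as above). The speed-up only approaches the four-fold ideal as the number of loads grows:

```python
STAGES = 4  # wash, dry, pile, store

def speedup(loads, stages=STAGES):
    # Ratio of non-pipelined time (loads * stages) to pipelined time
    # (stages to fill the pipe, then one load per stage-time thereafter).
    return (loads * stages) / (stages + loads - 1)

print(round(speedup(4), 2))     # 2.29: the 16 units vs 7 units above
print(round(speedup(1000), 2))  # 3.99: approaching the ideal 4x
```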