The Whitepaper
Until we convince ourselves that Vista is actually worth spending hard-earned money on, you'll have to check out the many other Vantage reviews on the 'Net for benchmarks. For now, here's a look at the features and thinking behind its operation, reproduced verbatim from the FM guys and gals.

Aims
3DMark Vantage is a gamers' benchmark for the DX10 platform. Its primary purpose is to help gamers evaluate their system performance for gaming use, and through online services relate the tested system to other available hardware. This should provide true value to gamers by enabling them to make better purchasing decisions, and to compete against each other in system performance. In order to meet its primary purpose, the benchmark should provide results that remain relevant for at least one year after launch, and optimally two or more. This means anticipating both upcoming hardware capabilities and performance-related trends in future games. A key benefit of the Benchmark Development Program (BDP) is mitigating uncertainty related to upcoming hardware. On game software performance trends, we rely on our own research and judgment, and our partners both inside and outside the BDP.

Principles
There are three guiding principles we follow in determining the benchmark test mix, architecture, content and scoring. These principles help the benchmark to serve its primary purpose:
• Prefer game-like content,
• Represent technology fairly and accurately, and
• Exercise technology with a view to the future.

Test Mix

The Vista PC is a target platform with huge performance disparities in several hardware areas. 3DMark Vantage focuses on the two areas most critical to gaming performance: the CPU and the GPU. With the emergence of multi-package and multi-core configurations on both the CPU and GPU side, the performance scale of these areas has widened, and the visual and game-play effects made possible by these configurations are accordingly wide-ranging. This makes covering the entire spectrum of 3D gaming a difficult task. 3DMark Vantage solves this problem in three ways:

1. Isolate GPU and CPU performance benchmarking into separate tests,
2. Cover several visual and game-play effects and techniques in four different tests, and
3. Introduce visual quality presets to scale the graphics test load up through the highest-end hardware.

To this end, 3DMark Vantage has two GPU tests, each with a different emphasis on various visual techniques, and two CPU tests, which cover the two most common CPU-side tasks: Physics Simulation and AI. It also has four visual quality presets (Entry, Performance, High, and Extreme) available in the Advanced and Professional versions, which increase the graphics load successively for even more visual quality. Each preset will produce a separate, official 3DMark Score, tagged with the preset in question.

Rendering Engine
The 3DMark Vantage Rendering Engine ("Engine" for short) supports multiple dynamic lights in a single pass, complex character rigs, GPU-simulated content and complex, custom lighting models for surface materials and light-surface interaction.

3D Engine architecture
Our engine performs the following rendering passes and sub-passes per frame:

1. GPU simulation update loop
a. Rendering of simulation inputs (depth views)
2. Shadow map generation
3. Pre-depth for appropriate materials (speed optimization)
4. Opaque illumination
a. Possible hierarchical rendering steps (reflection, refraction)
5. Translucent illumination
6. Post-processing
a. Possible custom scene rendering passes

GPU simulations are performed as fixed-step loops. Texture and vertex simulation steps are repeated until the simulations reach parity with wall-clock time. Some simulations use depth views rendered from the scene as collision or pressure inputs. We generate all required shadow maps before the main scene rendering. The main scene render pass consists of steps 3, 4 and 5. The pre-depth pass is performed for selected materials to reduce overdraw of materials with heavy pixel shaders in the main opaque pass. The translucent illumination pass supports soft clipping, utilizing the results of the opaque illumination pass. We perform a CopyResource() call (or ResolveSubresource() when MSAA is enabled) on the depth buffer to make it available for particle rendering. The post-processing step takes the scene render output, and applies a variety of image-space filters and effects. Some post-processing steps may require special scene rendering passes, like the velocity pass needed by the motion blur effect. We perform automatic run-time instancing of objects in the frustum based on pre-calculated model comparison. Frustum culling is performed per item.
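
The fixed-step update described above is the familiar accumulator pattern: constant-length simulation steps are issued until the simulation clock catches up with wall-clock time. A minimal CPU-side sketch of that loop, where the step length and the step function are placeholders rather than engine code:

    // Sketch of a fixed-step simulation driver. kStepSeconds and
    // RunSimulationStep() are illustrative placeholders.
    constexpr double kStepSeconds = 1.0 / 60.0;   // assumed step length

    void RunSimulationStep() { /* would issue one texture/vertex simulation pass */ }

    void UpdateSimulations(double& simTime, double wallTime)
    {
        // Repeat fixed-length steps until the simulation reaches parity
        // with wall-clock time, as described above.
        while (simTime + kStepSeconds <= wallTime)
        {
            RunSimulationStep();
            simTime += kStepSeconds;
        }
    }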

Lighting model

High Dynamic Range
Our Engine renders everything in high dynamic range (HDR), using 16-bit-per-component floating-point render targets. The HDR render target is passed along to post-processing, and finally tone-mapping for display. Cube map textures are stored in HDR format, but most standard 2D color textures are not. They are instead stored using 8 bits / channel in linear space (conversion from the sRGB space used by most image editing software is done as part of our asset creation pipeline).
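
For reference, the sRGB-to-linear conversion such an asset pipeline performs is the standard transfer function; a generic implementation (not taken from the whitepaper) looks like this:

    #include <cmath>
    #include <cstdint>

    // Standard sRGB-to-linear transfer function for one 8-bit channel.
    // This is the generic conversion a content pipeline would apply.
    float SrgbToLinear(std::uint8_t srgb8)
    {
        float c = srgb8 / 255.0f;
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }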

Surface shading and shader composition
Our Engine composes shaders from three types of parts: material shaders, light shaders and transformation shaders. Material shaders describe how the surface reflects and emits light. Light shaders describe how light from light sources reaches the material surface. Transformation shaders perform vertex transformation. The surface and light shaders are stored in HLSL fragments. When rendering a surface that is affected by one or more lights, the Engine first combines the appropriate surface shader fragment with all necessary light shader snippets using text pre-processing, and then compiles the shader. For performance reasons, needed shader combinations are pre-generated and cached for fast access in real-time, as part of a warm-up phase before test rendering. The combined shader loops over all the affecting lights, and calls the surface shading function to apply the light intensity from a certain direction for each light. Multiple lights of different kinds can be handled by a combined shader in a single pass. The light parameters are packed into shader resource buffers by the Engine.
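
The text pre-processing amounts to splicing the relevant HLSL fragments into a single source string per material/light combination, which is then compiled and cached during the warm-up phase. A rough sketch of the splicing step; the fragment layout is an assumption for illustration:

    #include <string>
    #include <vector>

    // Illustrative composition of a combined shader from HLSL fragments.
    // A real engine would also append a generated entry point that loops
    // over the lights and calls the surface shading function for each,
    // and would cache the compiled result per combination.
    std::string ComposeShaderSource(const std::string& materialFragment,
                                    const std::vector<std::string>& lightFragments)
    {
        std::string source;
        for (const std::string& light : lightFragments)
            source += light + "\n";     // light shaders: how light reaches the surface
        source += materialFragment;     // material shader: how the surface reflects/emits
        return source;
    }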

Shadows
We implemented Variance Shadow Maps and PCF-filtered cascaded shadow maps. The PCF shadows have penumbra size estimation and an option to adjust shadow shader quality in terms of sample tap count. We are not using penumbra dithering. The VSM uses light bleeding reduction.
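
Variance shadow maps store depth and squared depth, and the shadow test bounds the probability that a receiver is lit using Chebyshev's inequality; light bleeding reduction is commonly a remap of that bound. A generic sketch of the lookup math (not the engine's shader; the constants are placeholders):

    #include <algorithm>

    // Generic VSM shadow test (Chebyshev upper bound) with a simple
    // light-bleeding reduction remap. Constants are illustrative.
    float VsmShadowFactor(float meanDepth, float meanDepthSq,
                          float receiverDepth, float bleedReduction = 0.2f)
    {
        if (receiverDepth <= meanDepth)
            return 1.0f;                                    // fully lit

        float variance = std::max(meanDepthSq - meanDepth * meanDepth, 1e-5f);
        float d        = receiverDepth - meanDepth;
        float pMax     = variance / (variance + d * d);     // Chebyshev bound

        // Light bleeding reduction: cut away the low end of the bound.
        return std::clamp((pMax - bleedReduction) / (1.0f - bleedReduction), 0.0f, 1.0f);
    }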

Post-processing
Post-processing effects are performed in a separate step after main scene rendering, but may trigger recursive rendering passes, as in the case of motion blur, which requires a special velocity render of the scene, apart from the main color output. We support a set of per-camera animated post-processing effects, including the following:

• Bloom
• Streaks
• Anamorphic flare
• Lens flare (ghosting)
• Lenticular halo
• Depth-of-field
• Motion blur
• Depth fog
• Film grain noise
• Volumetric fog
• Tone-mapping w/ gamma correction and vignette

Bloom
To create the bloom effect, we render the scene to a texture, and then halve it progressively. Each down-sized frame is then blurred using a Gaussian filter. The final effect is the result of a weighted sum of these blurred frames, and is blended back onto the original texture. This tends to bleed bright areas of the picture into their surroundings. An artist-controllable blurring threshold excludes low intensities from the blurring pass, helping maintain image sharpness in moderately lit areas of the screen.
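
Each half-resolution level is blurred with a separable Gaussian, so per level the work reduces to applying a normalised weight set horizontally and vertically (after the bright-pass threshold has been subtracted), then summing the levels with artist-set weights. A small sketch of the weight computation, with radius and sigma as placeholders:

    #include <cmath>
    #include <vector>

    // Normalised 1D Gaussian weights for the separable blur applied to
    // each half-resolution bloom level. Radius and sigma are placeholders.
    std::vector<float> GaussianWeights(int radius, float sigma)
    {
        std::vector<float> w(2 * radius + 1);
        float sum = 0.0f;
        for (int i = -radius; i <= radius; ++i)
        {
            w[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
            sum += w[i + radius];
        }
        for (float& v : w) v /= sum;   // normalise so the blur preserves energy
        return w;
    }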

Streaks
To create the streak effect, we again progressively halve the original scene image down to a smaller resolution, and then apply a six-line convolution with 32 samples per line to the reduced-resolution image. The convolution result is blended back onto the original image. Streaking is a camera artifact caused by the hexagonal aperture blade arrangement in the camera lens.
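
Six lines with 32 samples each is 192 taps laid out along directions spaced 60 degrees apart, matching the hexagonal aperture. A sketch of how such tap offsets could be generated; the spacing and the exact direction set are assumptions:

    #include <cmath>
    #include <utility>
    #include <vector>

    // Sample offsets for a streak filter: six lines through the pixel,
    // 32 taps per line. The 60-degree arrangement and the per-tap
    // spacing are illustrative assumptions.
    std::vector<std::pair<float, float>> StreakTapOffsets(float texelStep)
    {
        const float kPi = 3.14159265f;
        std::vector<std::pair<float, float>> taps;
        for (int line = 0; line < 6; ++line)
        {
            float angle = line * (kPi / 3.0f);   // 6 directions, 60 degrees apart
            for (int s = 1; s <= 32; ++s)
            {
                float d = s * texelStep;
                taps.emplace_back(d * std::cos(angle), d * std::sin(angle));
            }
        }
        return taps;
    }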

Anamorphic flare
The flare is implemented using two horizontal streaks with colorization.

Lens flare (ghosting)
We create the lens flare effect by scaling the input image around the center using several different scaling factors, including negative ones. A vignette effect is applied to the input image to avoid hard edges at image borders. All scale factors are sampled and composited in a single pass by a single shader.
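
Each ghost is the source image rescaled about the screen centre, and a negative factor mirrors it through the centre. A sketch of the coordinate mapping; the factor list itself is an illustrative assumption:

    #include <array>

    // Ghost sampling coordinates: each ghost samples the input image
    // scaled about the screen centre; negative factors flip it through
    // the centre. The factor list is an assumption for illustration.
    struct Float2 { float x, y; };

    constexpr std::array<float, 5> kGhostScales = { 0.5f, -0.3f, 0.25f, -0.15f, 0.1f };

    Float2 GhostUv(Float2 uv, float scale)
    {
        // Scale the coordinate about the image centre (0.5, 0.5).
        return { 0.5f + (uv.x - 0.5f) * scale,
                 0.5f + (uv.y - 0.5f) * scale };
    }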

Lenticular halo
The halo effect is created in three stages (a sketch of the spherical filter kernel follows the list):
1. Reduce the original image to one quarter horizontal and vertical resolution
2. Apply a spherical filter
   a. 32 point samples spread along the circumference of a circle
   b. The samples are offset randomly towards or away from the center
   c. The samples are colorized based on distance to center to create a spectrum separation effect
   d. The kernel is rotated pseudo-randomly per pixel
3. Blur the filtered texture using a separable Gaussian kernel to reduce halo graininess caused by the limited number of samples
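
A sketch of the spherical filter kernel from stage 2: 32 taps on a circle, each pushed randomly toward or away from the centre, with the whole kernel rotated by a per-pixel pseudo-random angle. The radius, jitter amount and the colorization step are placeholders:

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    // Illustrative halo kernel. The per-tap distance would drive the
    // spectrum-separation colouring described above.
    struct Tap { float dx, dy, distance; };

    std::vector<Tap> HaloKernel(float radius, float jitter, float perPixelRotation)
    {
        const float kTwoPi = 6.28318531f;
        std::vector<Tap> taps;
        for (int i = 0; i < 32; ++i)
        {
            float angle = perPixelRotation + kTwoPi * i / 32.0f;
            float r     = radius + jitter * ((std::rand() / float(RAND_MAX)) - 0.5f);
            taps.push_back({ r * std::cos(angle), r * std::sin(angle), r });
        }
        return taps;
    }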
Depth-of-field
We first combine the scene color and focus information (calculated from scene depth) into an RGBA16 texture. Then we take a scaled copy of the result and blur it using a Gaussian kernel. The final scene with DOF is then mixed from these two images.
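
In effect the extra channel carries a focus factor derived from depth, which then drives the blend between the sharp scene and its blurred copy. A sketch of that per-pixel mix; the focus distance and range parameters are placeholders:

    #include <algorithm>
    #include <cmath>

    // Illustrative depth-of-field mix: a focus factor derived from depth
    // selects between the sharp scene and the blurred copy.
    float FocusFactor(float depth, float focusDistance, float focusRange)
    {
        // 0 = in focus, 1 = fully blurred.
        return std::clamp(std::abs(depth - focusDistance) / focusRange, 0.0f, 1.0f);
    }

    float MixDof(float sharp, float blurred, float focus)
    {
        return sharp + (blurred - sharp) * focus;   // lerp per channel
    }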

Motion blur
We first render the view space velocity information of the scene into an RGBA16 texture. Using the velocity information, we then blur the original rendered scene towards the direction of movement.
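
The blur along the movement direction is a straightforward gather: sample positions are spread along the stored velocity vector, centred on the pixel, and the fetched colours averaged. A sketch of the tap placement; the sample count and blur scale are assumptions:

    #include <utility>
    #include <vector>

    // Illustrative motion blur taps: positions spread along the per-pixel
    // velocity, centred on the pixel, whose samples would be averaged.
    std::vector<std::pair<float, float>> MotionBlurTaps(float u, float v,
                                                        float velU, float velV,
                                                        int sampleCount, float scale)
    {
        std::vector<std::pair<float, float>> taps;
        for (int i = 0; i < sampleCount; ++i)
        {
            float t = i / float(sampleCount - 1) - 0.5f;   // -0.5 .. +0.5 around the pixel
            taps.emplace_back(u + velU * t * scale, v + velV * t * scale);
        }
        return taps;
    }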

Depth fog
We render depth fog as a post-processing effect by simply adding depth-dependent color over the scene.
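
A common way to realise such depth-dependent colouring is an exponential fog amount blended over the scene; the whitepaper does not state the exact falloff, so the functions below are only a plausible stand-in:

    #include <cmath>

    // Illustrative depth fog: fog amount grows with view depth and the
    // fog colour is layered over the scene. The exponential falloff and
    // the density parameter are assumptions.
    float FogAmount(float viewDepth, float density)
    {
        return 1.0f - std::exp(-density * viewDepth);
    }

    float ApplyFog(float sceneChannel, float fogChannel, float fogAmount)
    {
        return sceneChannel + (fogChannel - sceneChannel) * fogAmount;
    }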

Volumetric fog
The volumetric fog samples fog density and color functions and shadow maps along the viewing ray, accumulating color and alpha. Sampling positions are jittered to trade aliasing for noise. To create fog that covers a large volume while remaining detailed, we use a density function that non-linearly combines many samples from volumetric textures at different scales. This also helps in animating the fog. Sampling shadow maps yields volumetric shadows. The fog is a very expensive effect due to the high complexity of the calculations and the large number of samples required to keep visual quality acceptable. Therefore we render the fog at a resolution reduced 4x4 times. Intersection with the fog bounding volume, which is a cylinder, is calculated analytically in the pixel shader.
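
The core of the effect is a jittered front-to-back march along the viewing ray inside the cylinder, accumulating density into opacity. A simplified sketch of the accumulation, where the density function, step count and jitter source stand in for the engine's multi-scale volume textures and shadow map sampling:

    #include <cstdlib>

    // Placeholder density function standing in for the engine's combined
    // volume texture samples; a constant is used just so the sketch links.
    float Density(float, float, float) { return 0.02f; }

    float MarchFogAlpha(float ox, float oy, float oz,      // ray origin
                        float dx, float dy, float dz,      // ray direction (unit length)
                        float rayLength, int steps)
    {
        float stepLen = rayLength / steps;
        float jitter  = (std::rand() / float(RAND_MAX)) * stepLen;  // trades aliasing for noise
        float alpha   = 0.0f;
        for (int i = 0; i < steps && alpha < 0.99f; ++i)
        {
            float t = jitter + i * stepLen;
            float d = Density(ox + dx * t, oy + dy * t, oz + dz * t);
            alpha  += (1.0f - alpha) * d * stepLen;        // front-to-back accumulation
        }
        return alpha;
    }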

Tone mapping
We use a simple tone mapping operator with animated exposure and gamma correction, and an added vignette effect that can be used to colorize image corners.
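
The whitepaper does not spell out the operator, so the sketch below uses a generic exposure and compression curve followed by gamma correction and a quadratic vignette, purely to illustrate the stages mentioned:

    #include <cmath>

    // Illustrative tone mapping: exposure, a simple compression curve and
    // gamma correction. The exact operator is a stand-in, not engine code.
    float ToneMapChannel(float hdr, float exposure, float gamma)
    {
        float exposed = hdr * exposure;
        float mapped  = exposed / (1.0f + exposed);   // compress to [0, 1)
        return std::pow(mapped, 1.0f / gamma);        // gamma correction
    }

    // Vignette: darken (or tint) toward the corners with a quadratic falloff.
    float Vignette(float value, float distFromCentre, float strength)
    {
        return value * (1.0f - strength * distFromCentre * distFromCentre);
    }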

GPU physics simulations
The Engine supports both texture simulations and vertex simulations. In texture simulations, one or more simulation state buffers are rendered back and forth, using a full-screen quad to cover the entire output texture. The calculations are done per pixel using pixel shaders. Vertex simulations use stream-out to cycle vertex buffers through the simulation pass. We do not use draw auto or auto offset, because in our case we can keep the number of simulated vertices up to date on the CPU side. Draw auto is an interesting direction for future research, which might be useful in, for example, hierarchical particle effects.
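
The "back and forth" rendering of state buffers is the usual ping-pong arrangement: two textures alternate as input and output on successive steps. A CPU-side sketch of the sequencing; the Buffer type and the step dispatch are placeholders:

    #include <utility>

    // Illustrative ping-pong sequencing for a texture simulation.
    struct Buffer { /* simulation state texture */ };

    void SimulateStep(const Buffer&, Buffer&) { /* would issue one full-screen quad pass */ }

    void RunTextureSimulation(Buffer& stateA, Buffer& stateB, int steps)
    {
        Buffer* read  = &stateA;
        Buffer* write = &stateB;
        for (int i = 0; i < steps; ++i)
        {
            SimulateStep(*read, *write);   // pixel shader updates every texel
            std::swap(read, write);        // buffers rendered "back and forth"
        }
        // After the loop, *read points at the most recently written state.
    }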

Particle system rendering
The engine renders particles using soft clipping. The geometry shader is used to expand particle points into billboards.
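
Conceptually the geometry shader turns each particle position into four quad corners offset along the camera's right and up axes. A CPU-side sketch of that expansion; the types and the size parameter are placeholders:

    #include <array>

    // Illustrative point-to-billboard expansion, as a geometry shader
    // would perform it per particle.
    struct Vec3 { float x, y, z; };

    std::array<Vec3, 4> ExpandBillboard(Vec3 centre, Vec3 camRight, Vec3 camUp, float halfSize)
    {
        auto offset = [&](float sx, float sy) {
            return Vec3{ centre.x + (camRight.x * sx + camUp.x * sy) * halfSize,
                         centre.y + (camRight.y * sx + camUp.y * sy) * halfSize,
                         centre.z + (camRight.z * sx + camUp.z * sy) * halfSize };
        };
        return {{ offset(-1, -1), offset(1, -1), offset(-1, 1), offset(1, 1) }};
    }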

Rendering options
Our engine supports the following options.

Screen resolution
We support all Direct3D-supported target resolutions.

Multisample count
We support all available MSAA sample counts for the chosen render target.

Texture filtering and Maximum anisotropy
We support optimal and anisotropic texture filtering.

Texture quality
This is a content-related option that controls source texture resolution as specified in each test scene. Lower settings indicate lower texture resolutions.

Shadow shader quality
The shadow shader quality affects the number of shadow map samples taken in the PCF shadows.

Shadow resolution quality
The shadow resolution quality affects shadow map size.

Shader quality
The shader quality affects the techniques used and their quality in various shaders.

Post-processing scale
The scale of post-processing effects relative to screen resolution.

Post-processing disable per effect
We support selectively disabling each post-processing effect listed in the Post-processing section above, except for the tone-mapping operator, which is required for HDR rendering.