
Implementing Render

The easiest way to think of visualization programming is that you are creating a handler for a timed event. At specific intervals, Windows Media Player takes a snapshot of the audio data it is playing, and provides the snapshot data to the currently loaded visualization. This is similar to event-driven programming and is part of the programming model of Microsoft Windows. You write code and wait for it to be called by a particular event.

Your code is an implementation of the Effects::Render function, which takes the following parameters:

TimedLevel

This structure defines the audio data your code will analyze. It is composed of two arrays. The first contains frequency information: a snapshot of the audio spectrum divided into 1024 cells, each holding a value from 0 to 255. The first cell starts at 20 Hz and the last ends at 22050 Hz. The array is two-dimensional to represent the two stereo channels. The second array contains waveform information and corresponds to audio power: the stronger the wave, the larger the value. It is a granular snapshot of the last 1024 slices of audio power, taken at very small time intervals, and is also two-dimensional to represent stereo audio.

HDC

This is a Microsoft Windows handle to a device context, which identifies the drawing surface to Windows. You do not need to create it; you just pass it to specific drawing function calls.

RECT

This is a Microsoft Windows rectangle that defines the size of the drawing surface. It is a simple rectangle with four properties: left, right, top, and bottom. The values are supplied by Windows Media Player so that you can determine how the user or skin developer has sized the window you will draw on. The rectangle also determines where on the HDC your effect should render.



© 2000-2001 Microsoft Corporation. All rights reserved.