Chapter 6. The CGL API for OpenGL Configuration


[Figure 6-1 CGL Renderer Selection: an OpenGL application dispatches through CGL to the ATI driver, the NV driver, or the software renderer; the hardware drivers in turn drive the ATI and NV hardware.]

Because CGL lies beneath both AGL and the Cocoa interface to OpenGL, you can freely use CGL in combination with either an AGL application or a Cocoa application. CGL may also be freely used with GLUT applications because the OS X GLUT implementation relies on Cocoa, which in turn relies on CGL. The only invalid combination of these four interfaces is to use the AGL or Carbon interface to OpenGL in combination with the Cocoa interface to OpenGL.


Generally speaking, you will find that both AGL and Cocoa provide enough

flexibility that you may not need any of the additional control that CGL allows.

If you’re doing simple prototyping of graphics techniques, you’ll probably find

GLUT to be the easiest API with which to get up and running.

CGL shares many things with other windowing interfaces to OpenGL: pixel

format selection, context creation and manipulation, and a pbuffer interface,

to name a few. On OS X, CGL also has to shoulder the burden of requirements that arise out of having a plug-in renderer architecture that supports a

heterogeneous set of installed graphics devices (Figure 6-1). Dragging a window from one display to another is a simple matter if both displays are being

driven by a single graphics device; it’s quite another matter if the displays are on

different devices with widely varying capabilities. On any OpenGL implementation, the context state that is maintained internally is a reflection of the capabilities of the underlying hardware. Imagine how that context state varies when

you drag a window from a display supported by a high-end graphics device

built by one graphics hardware vendor to a display supported by a low-end

device built from another!

Linking with CGL is easy; it’s part of OpenGL.framework, which is typically

found in /System/Library/Frameworks but may also be in a path specific

to your SDK installation. Because CGL is part of the OpenGL framework, its



headers are found in the Headers directory of the OpenGL.framework directory. Commonly used CGL headers include CGLTypes.h, CGLRenderers.h,

and CGLMacros.h. We’ll talk more about these headers in this chapter.

Error Handling

CGL error handling is based on the values returned from each of the CGL functions. All CGL functions return 0 when successful. Upon failure, a number of

different return values may be returned that describe the nature of the failure.

The full list of possible error values is part of the CGLError enum and can be found in /System/Library/Frameworks/OpenGL.framework/Headers/CGLTypes.h:


/*
** Error return values from CGL functions.
*/
typedef enum _CGLError {
    kCGLNoError            = 0,     /* no error                        */
    kCGLBadAttribute       = 10000, /* invalid pixel format attribute  */
    kCGLBadProperty        = 10001, /* invalid renderer property       */
    kCGLBadPixelFormat     = 10002, /* invalid pixel format            */
    kCGLBadRendererInfo    = 10003, /* invalid renderer info           */
    kCGLBadContext         = 10004, /* invalid context                 */
    kCGLBadDrawable        = 10005, /* invalid drawable                */
    kCGLBadDisplay         = 10006, /* invalid graphics device         */
    kCGLBadState           = 10007, /* invalid context state           */
    kCGLBadValue           = 10008, /* invalid numerical value         */
    kCGLBadMatch           = 10009, /* invalid share context           */
    kCGLBadEnumeration     = 10010, /* invalid enumerant               */
    kCGLBadOffScreen       = 10011, /* invalid offscreen drawable      */
    kCGLBadFullScreen      = 10012, /* invalid offscreen drawable      */
    kCGLBadWindow          = 10013, /* invalid window                  */
    kCGLBadAddress         = 10014, /* invalid pointer                 */
    kCGLBadCodeModule      = 10015, /* invalid code module             */
    kCGLBadAlloc           = 10016, /* invalid memory allocation       */
    kCGLBadConnection      = 10017  /* invalid CoreGraphics connection */
} CGLError;

Pixel Format Selection

A pixel format is simply a set of attribute–value pairs that describe the desired configuration for the framebuffer. All graphics hardware has limitations

in terms of the allowable framebuffer configurations it supports. For instance,

a specific video card may support an RGBA, 8 bits per component, double-buffered pixel format, but it may not support an RGBA, 12 bits per component,

double-buffered pixel format. Because of these differences, pixel format APIs



such as CGL provide a selection mechanism that attempts to match a set of requested attributes as closely as the underlying renderer can support.

The CGL pixel format API consists of three entry points for creating, querying, and destroying pixel formats:

CGLError CGLChoosePixelFormat(const CGLPixelFormatAttribute *attribs,
                              CGLPixelFormatObj *pix, long *npix);
CGLError CGLDescribePixelFormat(CGLPixelFormatObj pix, long pixNum,
                                CGLPixelFormatAttribute attrib, long *value);
CGLError CGLDestroyPixelFormat(CGLPixelFormatObj pix);
CGLChoosePixelFormat creates a pixel format using a NULL-terminated

mixed array of attributes and, if applicable, the attribute’s value. Let’s look at

Example 6-1, which shows how to create a simple pixel format, before we dive

into the details of all the possible pixel format attributes.

Example 6-1 CGLChoosePixelFormat Usage



CGLPixelFormatAttribute attribs[] =
{
    kCGLPFADoubleBuffer,
    kCGLPFAMinimumPolicy,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
    (CGLPixelFormatAttribute)0
};
CGLPixelFormatObj pixelFormatObj;
long numPixelFormats;
CGLError cglError;

cglError = CGLChoosePixelFormat( attribs, &pixelFormatObj,
                                 &numPixelFormats );
if( cglError != kCGLNoError )
    printf( "Unable to create pixel format. "
            "Error is: 0x%x\n", cglError );


Notice that the pixel format attribute constants are prefaced with “kCGLPFA.”

The “k” specifies that the value is a global constant. “CGL,” well, that just means

CGL. “PFA” stands for pixel format attribute.

Pixel format attributes are scalars that may be integer quantities or Boolean values. In our example, kCGLPFADoubleBuffer and kCGLPFAMinimumPolicy

are Boolean values. There’s a certain asymmetry in specifying Boolean versus

non-Boolean values, but you can’t argue that it doesn’t save on typing: Rather



than having a value of true or false explicitly listed for Boolean attributes,

they are simply specified when true and not specified when false.

The design of Apple’s pixel format API is subtractive in nature. That is, you can

think of your attribute array as a set of constraints for the list of possible pixel

formats that will match, rather than trying to build up a pixel format containing

only these features.

typedef enum _CGLPixelFormatAttribute {
    kCGLPFAAllRenderers       =   1,
    kCGLPFADoubleBuffer       =   5,
    kCGLPFAStereo             =   6,
    kCGLPFAAuxBuffers         =   7,
    kCGLPFAColorSize          =   8,
    kCGLPFAAlphaSize          =  11,
    kCGLPFADepthSize          =  12,
    kCGLPFAStencilSize        =  13,
    kCGLPFAAccumSize          =  14,
    kCGLPFAMinimumPolicy      =  51,
    kCGLPFAMaximumPolicy      =  52,
    kCGLPFAOffScreen          =  53,
    kCGLPFAFullScreen         =  54,
    kCGLPFASampleBuffers      =  55,
    kCGLPFASamples            =  56,
    kCGLPFAAuxDepthStencil    =  57,
    kCGLPFAColorFloat         =  58,
    kCGLPFAMultisample        =  59,
    kCGLPFASupersample        =  60,
    kCGLPFASampleAlpha        =  61,
    kCGLPFARendererID         =  70,
    kCGLPFASingleRenderer     =  71,
    kCGLPFANoRecovery         =  72,
    kCGLPFAAccelerated        =  73,
    kCGLPFAClosestPolicy      =  74,
    kCGLPFARobust             =  75,
    kCGLPFABackingStore       =  76,
    kCGLPFAMPSafe             =  78,
    kCGLPFAWindow             =  80,
    kCGLPFAMultiScreen        =  81,
    kCGLPFACompliant          =  83,
    kCGLPFADisplayMask        =  84,
    kCGLPFAPBuffer            =  90,
    kCGLPFARemotePBuffer      =  91,
    kCGLPFAVirtualScreenCount = 128
} CGLPixelFormatAttribute;

Policies and Buffer Sizing

Each of the policies used in pixel format selection is a scoring system to nominate matching pixel format candidates.

The policy attributes kCGLPFAMinimumPolicy and kCGLPFAMaximumPolicy

are applicable only to the color, depth, and accumulation buffer sizes. If you

specify the minimum policy, then these attributes must have at least the value

specified with the attribute. In our example, we’ve requested that the pixel format be chosen using only pixel formats that are double buffered, have at least

24 bits for the R/G/B color channels, and that there be at least 16 bits for the

depth buffer.

Here is the set of policy attributes:

kCGLPFAMinimumPolicy = 51,
kCGLPFAMaximumPolicy = 52,
kCGLPFAClosestPolicy = 74,


The minimum policy sets the low bar for acceptance, but there is another

asymmetry here: kCGLPFAMaximumPolicy doesn’t set the high bar for acceptance. Instead, it means that if kCGLPFAColorSize, kCGLPFADepthSize,

or kCGLPFAAccumSize is specified with a non-zero value, then the largest

possible corresponding buffer size will be chosen for your pixel format.


kCGLPFAClosestPolicy is applicable to only the color buffer size attribute

kCGLPFAColorSize; it does not consider the size specified for the depth or

accumulation buffers. With this attribute, the color buffer size of the returned

pixel format object will most closely match the requested size. This policy is

most similar to the behavior that the X11 window system uses when choosing a visual.


As you may have gathered by now, kCGLPFAMinimumPolicy is the default policy for buffer sizing. Also, notice that neither of the nondefault policies kCGLPFAMaximumPolicy and kCGLPFAClosestPolicy is applicable to

the kCGLPFAAlphaSize or kCGLPFAStencilSize attribute. Apply a little

deductive reasoning and we have a new rule: The pixel format matching

semantics for kCGLPFAAlphaSize and kCGLPFAStencilSize follow the

kCGLPFAMinimumPolicy behavior only.
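To make the policy distinctions concrete, here is a sketch of two attribute arrays; the attribute names come from CGLTypes.h, while the specific sizes requested are illustrative assumptions only.

```c
#include <OpenGL/OpenGL.h>

/* Minimum policy: any matching format must have at least
   24 color bits and 16 depth bits. */
CGLPixelFormatAttribute minAttribs[] = {
    kCGLPFADoubleBuffer,
    kCGLPFAMinimumPolicy,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
    (CGLPixelFormatAttribute)0
};

/* Maximum policy: any non-zero color/depth/accum request is
   promoted to the largest size the renderer supports. */
CGLPixelFormatAttribute maxAttribs[] = {
    kCGLPFADoubleBuffer,
    kCGLPFAMaximumPolicy,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)1,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)1,
    (CGLPixelFormatAttribute)0
};
```

Either array would then be handed to CGLChoosePixelFormat exactly as in Example 6-1.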

Render Targets

You may have noticed when running a game or other full-screen application

that only the primary display (the display with the Apple menu bar) is captured for the full-screen application. The reason for this is that full-screen rendering is supported only with a single renderer on the Mac OS. Therefore, if you

include the kCGLPFAFullScreen attribute to qualify your pixel format, only

renderers capable of supporting full-screen rendering will be considered and

kCGLPFASingleRenderer is implied.
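As a sketch, a full-screen-only pixel format request might look like the following; the specific sizes are illustrative, and a real application would typically also constrain the target display with kCGLPFADisplayMask.

```c
#include <OpenGL/OpenGL.h>

/* Only renderers capable of full-screen rendering will match;
   kCGLPFASingleRenderer is implied by kCGLPFAFullScreen. */
CGLPixelFormatAttribute fullScreenAttribs[] = {
    kCGLPFAFullScreen,
    kCGLPFADoubleBuffer,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    (CGLPixelFormatAttribute)0
};
```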

Mutually exclusive to kCGLPFAFullScreen are the kCGLPFAOffScreen and

kCGLPFAWindow attributes. On some platforms, the term “hidden window”

or “invisible window” is used when describing an off-screen destination. On

the Mac OS, if you’re rendering off screen, according to this mutual exclusivity

you’re not rendering to a window.

If you wish to restrict the list of renderers that will match your format to those

that can render off screen, specify the kCGLPFAOffScreen attribute. However,

be wary of this attribute if you are at all concerned about the performance of

your off-screen rendering. There are three ways to do off-screen rendering on

the Mac OS. If that’s not confusing enough, with the introduction of the framebuffer object specification in OpenGL, there are now four. See Chapter 5 for

more information.


Finally, if you wish to restrict the renderer list to only those renderers that can

render on screen in a window, specify the kCGLPFAWindow attribute in your

format array.


If multisampling is desired, set the kCGLPFASampleBuffers attribute to 1

to indicate a preference for a multisample buffer. Set the kCGLPFASamples

attribute to the number of samples desired for each pixel. The policy attributes

are not applicable to these two multisampling attributes.
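For example, here is a sketch of a multisampled format request; four samples per pixel is an illustrative choice, not a requirement.

```c
#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute msaaAttribs[] = {
    kCGLPFADoubleBuffer,
    kCGLPFASampleBuffers, (CGLPixelFormatAttribute)1, /* one multisample buffer */
    kCGLPFASamples,       (CGLPixelFormatAttribute)4, /* samples per pixel */
    (CGLPixelFormatAttribute)0
};
```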


For stereo rendering, also known as quad-buffering, the token kCGLPFAStereo

is used. This option produces a pixel format that contains two double-buffered

drawables, logically a left and a right, with a stereo offset to produce a 3D

projected rendering. If you haven’t experienced the LCD shutter glass type

of stereo rendering, it is as if the scene is floating in space in front of the

physical surface of the display. The stereo effect is achieved by providing two

buffers (left and right), each with a separate perspective frustum. Each frustum is offset by the inter-ocular distance or, in English, the distance between

your eyes.

The NVIDIA Quadro FX 4500, which was introduced in 2005, was the first

hardware introduced on the Mac platform to support stereo in a window. The

alternative to stereo in a window is full-screen stereo. For any Mac configured

with hardware released prior to the FX 4500, kCGLPFAStereo implies

kCGLPFAFullScreen, which in turn implies kCGLPFASingleRenderer.
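A stereo pixel format request is a small variation on the earlier examples; on hardware older than the FX 4500 this request would also implicitly demand a full-screen, single-renderer configuration.

```c
#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute stereoAttribs[] = {
    kCGLPFAStereo,        /* left and right double-buffered drawables */
    kCGLPFADoubleBuffer,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    (CGLPixelFormatAttribute)0
};
```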

Selecting a Renderer

Another way of selecting a renderer is by explicitly choosing one for your

application. There are a number of possibilities when it comes to selecting

a renderer by ID, enumerated in the file CGLRenderers.h. You use the

kCGLPFARendererID attribute to select your own renderer ID. Here is a

snapshot of the evolving list of possible renderer IDs:










kCGLRendererGenericID      = 0x00020200,
kCGLRendererGenericFloatID = 0x00020400,
kCGLRendererAppleSWID      = 0x00020600,







The star of this show is the Apple software renderer, which was released as

part of Tiger. If you wish to use or test certain functionality that your hardware doesn’t support, you can use this renderer. The new software renderer

is specified using kCGLRendererGenericFloatID. You may hear this renderer described as “the float renderer” because of its support for floating-point

framebuffers and pixel formats. This software renderer is highly tuned for the

Mac platform. It uses a great deal of hand-tuned and hand-scheduled PowerPC

and Intel assembly. The performance of this renderer, though not comparable to

that of a dedicated hardware renderer, is quite astonishing.

The software renderer is a great tool to use when you are debugging your application. If, for instance, you believe your OpenGL logic is correct yet the rendering doesn’t appear correct, try changing your renderer to the software renderer.

The software renderer allows you to cross-check the vendor-specific renderers

to determine whether your application has a bug that is specific to a certain

graphics card. If you see correct results in the software renderer but not in

the vendor-specific renderer, or vice versa, it’s time to file a bug report and let

Apple have a look at it. Keep in mind, however, that OpenGL is not a pixel-exact specification, and minor differences between rendered images are always

possible, and even likely. However, gross differences are likely bugs—so please

file them.
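One way to perform this cross-check, sketched under the assumption that the rest of your attribute array matches your normal configuration, is to pin the pixel format to the float software renderer by ID:

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLRenderers.h>

/* Force the Apple float software renderer for debugging comparisons. */
CGLPixelFormatAttribute swAttribs[] = {
    kCGLPFARendererID, (CGLPixelFormatAttribute)kCGLRendererGenericFloatID,
    kCGLPFADoubleBuffer,
    (CGLPixelFormatAttribute)0
};
```

Swapping this array in for your usual one lets you re-render the same scene without touching any other application logic.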

There are two other renderer IDs that correspond to software renderers. The

software renderer preceding the current float renderer is referenced using

kCGLRendererAppleSWID. This older renderer is neither as fast nor as full

featured as the new software renderer but could prove useful as another

check when debugging or comparing results. Aside from this scenario, this

renderer should not be used unless you wish to hamstring your application.



kCGLRendererGenericID corresponds to the original software renderer written for

OS X. If you are experiencing difficulty with the new software renderer and

your application doesn’t use features beyond OpenGL 1.2, you may have better

luck with this original renderer. Although not as highly tuned, the old software

renderer is better tested by virtue of its age alone. This older software renderer

can also be used as yet another data point in fortifying a bug submission if you

suspect a problem with one of the hardware renderers.




Arguably, this renderer ID should not be published. It serves as a placeholder (and a questionable one at that) for graphics driver writers for OS X.

kCGLRendererATIRage128ID    = 0x00021000,
kCGLRendererATIRadeonID     = 0x00021200,
kCGLRendererATIRageProID    = 0x00021400,
kCGLRendererATIRadeon8500ID = 0x00021600,
kCGLRendererATIRadeon9700ID = 0x00021800,
kCGLRendererGeForce2MXID    = 0x00022000,
kCGLRendererGeForce3ID      = 0x00022200,
kCGLRendererGeForceFXID     = 0x00022400,
kCGLRendererVTBladeXP2ID    = 0x00023000,
kCGLRendererIntel900ID      = 0x00024000,
kCGLRendererMesa3DFXID      = 0x00040000,
If you wish to restrict pixel format matching to a device-specific hardware renderer, you may use the list above to do so. When you ask for a specific renderer

ID of this sort, your software will run only on the requested hardware. On other

devices, your pixel format selection will fail.

Most graphics application developers are familiar with the ATI, NVIDIA, and

Intel graphics hardware described in the renderer ID strings above. Less familiar is the kCGLRendererVTBladeXP2ID ID, which corresponds to the VillageTronic hardware renderer.

kCGLRendererMesa3DFXID is outdated and will eventually be removed from

the list of renderer IDs.

Context Management

The CGL type CGLContextObj is the fundamental data type for an OpenGL

context on the Mac. CGL contexts are created as follows:

CGLError CGLCreateContext(CGLPixelFormatObj pixelFormat,
                          CGLContextObj sharedContext,
                          CGLContextObj *ctx);

Contexts may be duplicated with a call to

CGLError CGLCopyContext(CGLContextObj src, CGLContextObj dst,
                        unsigned long stateMask);


The stateMask parameter should be set using a bitwise OR of the enum values

used with the OpenGL call glPushAttrib. It provides a handy mechanism

to filter which state elements you wish to copy from the source context to the

destination context.



Specifying GL_ALL_ATTRIB_BITS for your state mask will yield as close as possible to a duplicate of your source context. The faithful reproduction of the

copy is limited only by the scope of state encapsulated by the glPushAttrib/

glPopAttrib state management API within OpenGL. Various OpenGL state

elements, such as feedback or selection settings, cannot be pushed and popped.

The OpenGL specification has a detailed description of this state management

API if you need to further scrutinize the details of your context copy.
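For instance, a minimal sketch of a full-state copy (the helper name here is ours, not part of CGL):

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

/* Copy everything glPushAttrib can capture from src to dst. */
static CGLError copyAllContextState(CGLContextObj src, CGLContextObj dst)
{
    return CGLCopyContext(src, dst, GL_ALL_ATTRIB_BITS);
}
```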

To free a context and set the current context to NULL, call

CGLError CGLDestroyContext(CGLContextObj ctx);

Setting or getting the current context in CGL is a simple matter of calling

CGLError CGLSetCurrentContext(CGLContextObj ctx);


CGLContextObj CGLGetCurrentContext(void);

You may find CGLGetCurrentContext() to be the most useful entry point in

the API. It’s very handy for debugging CGL, AGL, and NSOpenGLView-based

OpenGL applications. You can simply insert this call in your code anywhere

downstream of context initialization and use the result for the full gamut of

reasons you use contexts. It’s quite handy for debugging configuration-related problems.


Context Parameters and Enables

Like any logical object, a CGLContextObj has an associated set of parameters

that are scoped to the context itself. Some parameters are read/write; others are

read-only and simply allow the application to inspect the running configuration

of the context. Here’s a code fragment from the CGLTypes.h file on a Tiger

system that lists the valid context parameter values:


/*
** Parameter names for CGLSetParameter and CGLGetParameter.
*/
typedef enum _CGLContextParameter {
    kCGLCPSwapRectangle          = 200,
        /* 4 params.  Set or get the swap rectangle {x, y, w, h} */
    kCGLCPSwapInterval           = 222,
        /* 1 param.   0: Don't sync, n: Sync every n retrace */
    kCGLCPDispatchTableSize      = 224,
        /* 1 param.   Get the dispatch table size */
    kCGLCPClientStorage          = 226,
        /* 1 param.   Context specific generic storage */
    kCGLCPSurfaceTexture         = 228,
        /* 3 params.  SID, target, internal_format */
    kCGLCPSurfaceOrder           = 235,
        /* 1 param.   1: Above window, -1: Below Window */
    kCGLCPSurfaceOpacity         = 236,
        /* 1 param.   1: surface is opaque (default), 0: non-opaque */
    kCGLCPSurfaceBackingSize     = 304,
        /* 2 params.  Width/height of surface backing size */
    kCGLCPSurfaceSurfaceVolatile = 306,
        /* 1 param.   Surface volatile state */
    kCGLCPReclaimResources       = 308,
        /* 0 params. */
    kCGLCPCurrentRendererID      = 309,
        /* 1 param.   Retrieves the current renderer ID */
    kCGLCPGPUVertexProcessing    = 310,
        /* 1 param.   Currently processing vertices with GPU (get) */
    kCGLCPGPUFragmentProcessing  = 311
        /* 1 param.   Currently processing fragments with GPU (get) */
} CGLContextParameter;

Context parameters are set with

CGLError CGLSetParameter(CGLContextObj ctx,
                         CGLContextParameter parameterName,
                         const long *params);

and retrieved by

CGLError CGLGetParameter(CGLContextObj ctx,
                         CGLContextParameter parameterName,
                         long *params);

Notice that each of the valid parameter values is prefaced by the string

“kCGL” followed by “CP”. “CP” stands for context parameter, but this note

will help you distinguish this CGL constant from others. Each value passed to

CGLSetParameter is either a parameter with a value specific to the parameter

or a Boolean enabled parameter that is controlled by calls to

CGLError CGLEnable(CGLContextObj ctx, CGLContextEnable enableName);

CGLError CGLDisable(CGLContextObj ctx, CGLContextEnable enableName);

A list of CGL context enables, also from CGLTypes.h, follows:


/*
** Enable names for CGLEnable, CGLDisable, and CGLIsEnabled.
*/
typedef enum _CGLContextEnable {
    kCGLCESwapRectangle           = 201,
        /* Enable or disable the swap rectangle */
    kCGLCESwapLimit               = 203,
        /* Enable or disable the swap async limit */
    kCGLCERasterization           = 221,
        /* Enable or disable all rasterization */
    kCGLCEStateValidation         = 301,
        /* Validate state for multi-screen functionality */
    kCGLCESurfaceBackingSize      = 305,
        /* Enable or disable surface backing size override */
    kCGLCEDisplayListOptimization = 307
        /* Ability to turn off display list optimizer */
} CGLContextEnable;
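Putting parameters and enables together, here is a hedged sketch that queries the renderer ID of a context; the helper function and its sentinel value are our own invention, not part of CGL.

```c
#include <OpenGL/OpenGL.h>

/* Returns the current renderer ID, or -1 on failure. */
static long currentRendererID(CGLContextObj ctx)
{
    long rendererID = 0;
    if (CGLGetParameter(ctx, kCGLCPCurrentRendererID, &rendererID) != kCGLNoError)
        return -1;
    return rendererID;
}
```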



Read/Write Parameters


If your application occupies much more screen space (whether the full screen

or windowed) than you are typically drawing to, a kCGLCPSwapRectangle

may be specified as an optimization hint for OpenGL. When a swap rectangle

is defined, the Mac OpenGL implementation may be able to optimize your application by swapping only the back-buffer region defined

by this swap rectangle. As with any hint, this behavior is not guaranteed. Furthermore, the region outside the swap rectangle may be swapped (or flushed)

by the Quartz windowing system itself. This is often the case in a compositing

situation where windows overlap.
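A sketch of setting and enabling a swap rectangle follows; the helper name is hypothetical.

```c
#include <OpenGL/OpenGL.h>

static CGLError useSwapRectangle(CGLContextObj ctx,
                                 long x, long y, long w, long h)
{
    const long rect[4] = { x, y, w, h };
    CGLError err = CGLSetParameter(ctx, kCGLCPSwapRectangle, rect);
    if (err == kCGLNoError)
        err = CGLEnable(ctx, kCGLCESwapRectangle); /* hint takes effect only when enabled */
    return err;
}
```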


The swap interval parameter allows applications to control the frequency at

which the buffers are swapped in a double-buffered application. The swap interval allows your application to tie buffer swaps to the retrace rate of the display.

This behavior is often desirable for real-time applications that wish to guarantee

a specific frame rate rather than running “as fast as they can.” This mechanism

allows synchronization of your application with an external device that generates interrupts at a fixed time interval.

If the swap interval setting is 0, swaps are executed as early as possible without

regard to the refresh rate of the monitor. For any swap interval setting n that is

greater than 0, buffer swaps will occur every nth refresh of the display. A setting of 1 will, therefore, synchronize your buffer swaps with the vertical retrace

of the display. This is often referred to as vertical blank synchronized or “VBL synced” rendering.
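For example, a sketch that synchronizes buffer swaps with the retrace (the helper name is ours):

```c
#include <OpenGL/OpenGL.h>

static CGLError syncSwapsToRetrace(CGLContextObj ctx)
{
    const long interval = 1; /* swap at most once per vertical retrace */
    return CGLSetParameter(ctx, kCGLCPSwapInterval, &interval);
}
```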



The client storage parameter allows for associating a single 32-bit value with

a context. Essentially this parameter allows applications to piggy-back an arbitrary pointer or other 4-byte value onto the context so as to allow logical grouping of data with a context.


Surface texturing in OS X allows texturing from a drawable object that is

associated with a context. Thus, surface texturing is yet another mechanism to

render to a texture on OS X. Given the availability of pbuffer texturing and now

framebuffer objects, the now twice superseded surface texturing approach is the

oldest, least flexible, and least well-maintained render-to-texture method on the

Mac. If we haven’t dissuaded you yet, read on . . .

Surface texturing is typically done using AGL or GLUT, as both of these APIs

provide a direct interface to set up surface texturing—aglSurfaceTexture

