| In lugnet.cad.dev.mac, Travis Cobbs wrote:
> Your expectation is certainly justified. I don't know what the driver
> architecture is on the Mac, but I suspect that the video hardware manufacturers
> are responsible for the OpenGL implementation for their own hardware, just like
> on the PC. If this is the case, then it is ATI that's responsible for doing the
> right thing with the data (or not doing the right thing).
>
> Just out of curiosity, how are you drawing your geometry? On ATI hardware on
> the PC, I think vertex arrays are faster than OpenGL 1.0 glVertex calls, even
> when encapsulated inside a display list. This doesn't seem to be the case on
> nVidia hardware on the PC. (It might also not be the case with recent ATI
> drivers.)
Travis,
Having done some experiments and research over the last couple of days, it seems
that to get very fast drawing in OpenGL, vertex arrays are generally necessary.
OpenGL has specific commands, such as glBufferDataARB, that explicitly transfer
the data into the graphics card's RAM (only available on newer cards), and
likewise MacOS X has commands that give the graphics card DMA access to the
Mac's RAM (again, only on newer cards).
Based on some crude and unsophisticated tests, there does appear to be hardware
acceleration with respect to drawing, in that the cards seem to be able to
decode and render OpenGL commands (for example, a command like gluCylinder draws
noticeably faster than creating the same object from a series of vertex normals
and quads or a quad strip). The OpenGL display lists themselves seem to be
stored on the client side (i.e. in the computer's RAM), so each time a list is
called, the commands it contains are passed from the client to the graphics
card; reducing the size of the list therefore has tangible benefits.
This means two things: I need to learn how to make vertex arrays, and I need to
reduce the amount of data transferred when drawing.
Don,
You mentioned that MacOS X is effectively triple buffered when using double
buffering in OpenGL. This is true, but I'm not sure how MacOS X achieves it. I
believe it takes a bitmap copy of the display buffer, so there should be minimal
impact on drawing time. One great thing about this, though, is that when you
move the model window or bring it to the foreground, MacOS X uses its buffer and
updates the screen instantly.
Andrew...
| In lugnet.cad.dev.mac, Andrew Allan wrote:
> Having done some experiments and research over the last couple of days, it seems
> that to get very fast drawing in OpenGL generally, vertex arrays are necessary.
> OpenGL has specific commands such as glBufferDataARB that will explicitly
> transfer the info into the graphic card's RAM (only availible on newer cards)
> and likewise MacOS X also has commands that give the graphic card DMA access to
> the Mac's RAM (again only for newer cards).
Just as a note, I found that Vertex Buffer Objects (VBOs) would crash my ATI
card in Windows when used inside a display list. As far as I can tell, this
isn't supposed to happen. Regular vertex arrays work fine inside a display
list. General consensus on the OpenGL.org discussion forum was that even if it
didn't crash it wouldn't speed things up, since a display list is supposed to
automatically figure out how to draw the geometry as fast as possible.
Also, it really sounds like something is causing your app to run in software
mode, not hardware. I don't know how OpenGL windows are created in Mac
OS X, but you might want to make sure you aren't requesting any features that
force software rendering. I know on Windows it's relatively easy to request
something not supported by the video hardware, and this causes it to create a
software-rendered OpenGL window, instead of a hardware-rendered one. (I realize
this is a long shot, because Mac OS X is a whole lot easier to program than
Windows, but I thought I'd at least mention it.)
--Travis
| In lugnet.cad.dev.mac, Travis Cobbs wrote:
> In lugnet.cad.dev.mac, Andrew Allan wrote:
> > Having done some experiments and research over the last couple of days, it seems
> > that to get very fast drawing in OpenGL generally, vertex arrays are necessary.
> > OpenGL has specific commands such as glBufferDataARB that will explicitly
> > transfer the info into the graphic card's RAM (only availible on newer cards)
> > and likewise MacOS X also has commands that give the graphic card DMA access to
> > the Mac's RAM (again only for newer cards).
>
> Just as a note, I found that Vertex Buffer Objects (VBOs) would crash my ATI
> card in Windows when used inside a display list. As far as I can tell, this
> isn't supposed to happen. Regular vertex arrays work fine inside a display
> list. General consensus on the OpenGL.org discussion forum was that even if it
> didn't crash it wouldn't speed things up, since a display list is supposed to
> automatically figure out how to draw the geometry as fast as possible.
>
> Also, it really sounds like something is causing your app to be running in
> software mode, not hardware. I don't know how OpenGL windows are created in Mac
> OS X, but you might want to make sure you aren't requesting any features that
> force software rendering. I know on Windows it's relatively easy to request
> something not supported by the video hardware, and this causes it to create a
> software-rendered OpenGL window, instead of a hardware-rendered one. (I realize
> this is a long shot, because Mac OS X is a whole lot easier to program than
> Windows, but I thought I'd at least mention it.)
Yeah, watch out for that software rendering on the Mac. It sneaks up on
you. I know it can switch you over while your program is running, but I
don't know what causes it to happen other than resizing a window. You
might want to query the driver as you build up your display lists to see
if it switches you over to software rendering at some point.
Don
In lugnet.cad.dev.mac, Don Heyse wrote:
> Yeah, watch out for that software rendering on the Mac. It sneeks up on
> you. I know it can switch you over while your program is running, but I
> don't know what causes it to happen other than resizing a window. You
> might want to query it about the driver as you build up your display lists
> to see if it switches you over to software rendering at some point.
Actually, the same thing can happen in Windows if you resize a window too big
(say 1600x1200) on an older video card. However, I'm not sure you can tell that
it has switched to software mode on Windows, because as far as I know it is
still being rendered by the video card's OpenGL driver. I can't see how it
could possibly switch drivers mid-stream because different drivers support
different extensions. That would mean that an extension that you had been told
was present could disappear.
--Travis
| Travis Cobbs wrote:
> In lugnet.cad.dev.mac, Don Heyse wrote:
>
> > Yeah, watch out for that software rendering on the Mac. It sneeks up on
> > you. I know it can switch you over while your program is running, but I
> > don't know what causes it to happen other than resizing a window. You
> > might want to query it about the driver as you build up your display lists
> > to see if it switches you over to software rendering at some point.
>
>
> Actually, the same thing can happen in Windows if you resize a window too big
> (say 1600x1200) on an older video card. However, I'm not sure you can tell that
> it has switched to software mode on Windows, because as far as I know it is
> still being rendered by the video card's OpenGL driver. I can't see how it
> could possibly switch drivers mid-stream because different drivers support
> different extensions. That would mean that an extension that you had been told
> was present could disappear.
>
> --Travis
I think this is what Don is referring to.
Using Don's ldgliteargs, I set the window to 800x537 on my aging 8 MB VRAM
machine so that ldglite would launch using the ATi renderer:
GL_VENDOR ='ATI Technologies Inc.'
GL_RENDERER ='ATi Rage 128 Pro OpenGL Engine'
GL_RGBA_BITS: (8, 8, 8, 8)
GL_DEPTH_BITS = 24
GL_STENCIL_BITS = 8
But if I make the window any bigger, the Apple software renderer kicks in:
GL_VENDOR ='Apple'
GL_RENDERER ='Generic'
GL_RGBA_BITS: (8, 8, 8, 8)
GL_DEPTH_BITS = 32
GL_STENCIL_BITS = 8
If I go back to the original size, the ATi driver takes over again.
One of the odd observations is that the screen redraw takes the same
amount of time regardless of the window size (6 s for my caboose,
http://users.rcn.com/cjmasi/caboose_m.dat), but rotating the model is
_much_ smoother when the ATi driver is active. Maybe it is just a window
size thing, but I don't think so. Making the window only a few pixels
bigger is enough to get the drivers to switch. You know what the
nonprogrammer* is thinking... in ldglite (and maybe MBC too), redrawing the
model is not hardware accelerated on my Mac (maybe yours too) even when the
hardware renderer is active.
The GL output quoted above was taken from my console window.
With MBC, the size of the window does not affect the smoothness of the
rotation nearly as much, but I have no way of knowing where the rendering
is occurring.
Chris
*That is me if it wasn't clear enough.