Subject: Re: review: Radeon 7000 for BrickDraw3D, low-end Mac
Newsgroups: lugnet.cad.dev.mac
Date: Thu, 11 Apr 2002 15:39:55 GMT
In lugnet.off-topic.geek, Andrew Allan writes:
> At present, I am writing my own OpenGL renderer for LdGLite (Mac). In order
> to speed things up I am defining the parts within OpenGL and then calling
> them by their definitions. Theoretically this places the entire model within
> the graphics card's memory.

> I started the process in main memory by fully parsing the files into their
> primitives, including final vertex locations, vertex normals and color as
> well (very memory inefficient - very small files require large amounts of
> memory). However, on simple models I could achieve redraw rates of 60 fps on
> my G4/400 - ATI Rage128 (PCI) graphics system.

> By then loading the entire model into OpenGL and calling it by definition,
> the same system achieves 62 fps. My conclusion, therefore, is that in highly
> parsed models the system bus on the G4 is not interfering with the card too
> much.

Cool!  If you can clearly show where you make the changes, I might be
able to port this back into the other versions.  I've never been all
that happy with the rendering speed myself.  You can see in the code
where I tinkered a bit with display lists, but on my junky old hardware
with software OpenGL it didn't make much of a difference.  Also,
display lists seem to be on the way out in the discussions about
OpenGL 2, so I put it on the back burner.

> However, the system is designed to work as follows: each individual brick in
> the model will be saved to the graphics card by name (including color and
> face normals). Then, when displaying the model, the system will send the
> card only information about each brick's orientation and spatial position. I
> am hoping that this will improve the rendering speeds over those in Don's
> original code used in LdGLite, as to date this has been my disappointment in
> my Mac implementation of his program.

You'll have to watch out for the optional lines with this approach.  To
calculate when to draw them you need to convert the points to screen
coords to do the test.  Look at render_five() in stub.c to see what I
mean.

> In conclusion, an approach like this should work wonders on any graphics
> card, as it keeps all the rendering and model information on the graphics
> card, thus minimising traffic to the host computer. I will keep you posted
> on progress.

I'm looking forward to it.

Don


