Subject: Re: New LDraw based editor... GLIDE.
Newsgroups: lugnet.cad
Date: Wed, 19 May 2004 21:19:37 GMT

In lugnet.cad, Kyle McDonald wrote:
Dan wrote:

Hi
  I agree it's a shame about the incomplete BFC situation. Currently I use
two-sided lighting in GLIDE; you don't notice the performance hit because most of
the processing effort is done by the CPU at the moment. That's what slows it
down rather than the GFX card. My Nvidia TI4200 recently cooked itself and I'm
back on an old TNT2. There is only a small difference in performance between the
two in GLIDE.


That's because most of the focus in graphics HW the past few years has
been in texture and shading performance, and *not* in the raw polygon drawing
that LCAD makes so much use of.

This is only true to a point.  As long as you have a reasonably modern video
card, its on-board geometry performance is going to far exceed the geometry
performance you could get out of the fastest CPU.  I don't know how GLIDE does
its drawing, but I do know that a TI 4200 should be quite a bit faster than a
TNT2 simply because the TNT2 doesn't have on-board T&L (transform and lighting)
support.
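
To make the BFC trade-off Dan mentions concrete, the two states look roughly
like this in fixed-function OpenGL. This is only an illustrative sketch (the
helper function and its flag are made up, not GLIDE's or LDView's actual code):
with reliable winding information the card can throw away back faces and light
one side; without it you draw everything and pay for two-sided lighting.

#include <GL/gl.h>

/* Hypothetical helper; 'bfcCertified' is an assumed flag for illustration. */
void setupFaceHandling(int bfcCertified)
{
    if (bfcCertified) {
        /* Winding is known, so back faces can be discarded by the card. */
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE);
    } else {
        /* Winding is unreliable: draw both sides and light both of them. */
        glDisable(GL_CULL_FACE);
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
    }
}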

I measured LDView on a GeForce 3 at about 15 million vertices per second.  It
should do quite a bit more on a TI 4200.  I'm not sure what a 3.4GHz CPU is
capable of, but I'd be very surprised if it matched the TI 4200.  When I first
got my GeForce 3, it was an upgrade from a TNT2.  LDView performance on a 1GHz
PIII went up by a factor of 8-10.
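
For what it's worth, a vertices-per-second figure like that can be measured
with something as simple as the sketch below. drawModel() is just a placeholder
for whatever renders one frame and returns the number of vertices it submitted;
it is not a real LDView function.

#include <GL/gl.h>
#include <chrono>
#include <cstdio>

long drawModel();  /* assumed: renders one frame, returns vertices submitted */

void benchmark(int frames)
{
    auto start = std::chrono::steady_clock::now();
    long vertices = 0;
    for (int i = 0; i < frames; i++) {
        vertices += drawModel();
        glFinish();  /* wait until the card has actually finished the frame */
    }
    double secs = std::chrono::duration<double>(
                      std::chrono::steady_clock::now() - start).count();
    std::printf("%.1f million vertices/sec\n", vertices / secs / 1e6);
}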


We mainly have games to thank for that. For the most part (especially in
fast-moving games) it's much easier to add details to a texture on a 'large'
flat polygon than it is to refine the surface into separate polygons with
separate textures. It's easier on development, and it's easier to get better
performance too.

In our world, though, with our bricks having many sides and generally solid
colors, we don't have many places to use textures. Obviously they'd be useful
on printed parts, and maybe standardizing the definition of such things will be
done soon by the LDraw language standards committee.

Till the hardware polygon speed improves, the easiest thing to do is to try to
figure out which polygons in a model (not necessarily a brick) will never, ever,
be seen and cull them as early as possible.

Take a look at the specs of modern graphics cards, and you will see that they
have done a very good job of keeping the geometry performance up with
improvements in texturing and fill-rate.  IIRC, a GeForce 3 was rated at around
25-30 million vertices per second.  By comparison, the GeForce FX 5950 Ultra is
rated at 356 million vertices per second.  And even though both those numbers
are marketing numbers, it seems likely they were derived in a similar manner,
and have gone up by a factor of 10 in two generations.  (GeForce FX is basically
GeForce 5.)
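
Back on Kyle's point about culling: one cheap, early cull is rejecting
back-facing polygons on the CPU before they are ever sent to the card. It is
view-dependent, so it is not quite the "never, ever seen" case (faces buried
inside joined bricks need real geometry analysis), but it shows the idea. A
generic sketch, not how GLIDE or any particular editor does it:

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3  cross(Vec3 a, Vec3 b) { Vec3 r = { a.y * b.z - a.z * b.y,
                                                a.z * b.x - a.x * b.z,
                                                a.x * b.y - a.y * b.x }; return r; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* True if triangle (v0, v1, v2), wound counter-clockwise, faces away from eye. */
bool isBackFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye)
{
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0));
    return dot(normal, sub(eye, v0)) <= 0.0f;
}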

Subject: Re: New LDraw based editor... GLIDE.
Newsgroups: lugnet.cad
Date: Thu, 20 May 2004 15:00:14 GMT

because most of the processing effort is done by the CPU at the moment.

Possibly not the best way I could have worded it. GLIDE in its current form is
inefficient. There is some redundant sorting and a lot of list iteration that
the CPU does before it calls on OGL. This is the main speed-limiting factor.
With GLIDE as it is, a faster CPU and memory will make a bigger difference to
performance than a better GFX card.
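
One standard way to cut that kind of per-frame work would be to do the
iteration once and record the resulting GL calls in a display list, then just
replay the list each frame. A sketch only; emitGeometry() is a placeholder and
this is not necessarily how GLIDE will end up structured:

#include <GL/gl.h>

void emitGeometry();  /* assumed: walks the model, issuing glBegin/glVertex calls */

GLuint buildModelList()
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);  /* record the calls instead of drawing now   */
    emitGeometry();               /* the expensive iteration happens once here */
    glEndList();
    return list;
}

void drawFrame(GLuint list)
{
    glCallList(list);             /* cheap replay; no per-frame list iteration */
}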

I am aiming to create a feature-rich program with a very friendly, usable
interface. I can refactor / optimise after I have the features in there. A lot
of this is a learning process for me, as this is my first OpenGL program. As
such, I am creating solutions to problems that are not always optimal. In other
cases I have been lazy for the sake of development speed. Once a piece of code
works correctly, making it go faster drops to the bottom of my todo pile. For
example, I have some bubble sorts scattered around where insertion sorts would
be a far better solution. In my opinion there is much more fun code to be
written than optimised sorts, like the (fun but pointless) explode model code
(CTRL + E).
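
For reference, the insertion sort in question is only a few lines, and on the
small, mostly-sorted lists an editor tends to shuffle it does far less work
than a bubble sort. A generic sketch, not GLIDE's actual data types:

#include <cstddef>
#include <vector>

void insertionSort(std::vector<float> &v)
{
    for (std::size_t i = 1; i < v.size(); i++) {
        float key = v[i];
        std::size_t j = i;
        while (j > 0 && v[j - 1] > key) {
            v[j] = v[j - 1];  /* shift larger elements one slot to the right */
            j--;
        }
        v[j] = key;           /* drop the element into its final position */
    }
}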
