Subject: Re: The use of type 5 lines for smoothing of objects.
Newsgroups: lugnet.cad.dev
Date: Mon, 20 May 2002 22:48:01 GMT
> Well, I added an initial go at Martijn's suggested algorithm to LDView (with
> some modifications here and there), and the results look extremely
> promising. Here is a sample:
>
> http://home.san.rr.com/tcobbs/LDView/FigRed.png
Looks good!!! Can you tell me which modifications you made?
> Note that this was rendered with Primitive Substitution disabled, so all
> smoothing is done purely based on the presence of Type 5 lines. I perform
> my smoothing at the Part level, so smoothing occurs between the multiple
> primitives in each part (note the smooth elongated sphere at the top of the
> arm).
>
> There are some glitches; at least one is visible near the top of the helmet,
> and some more around the dimple in the side of the helmet. I'm not entirely
> sure yet whether these are due to a problem with my implementation of the
> algorithm or improperly done type 5 lines in the DAT file.
I was able to reproduce the exact glitches you saw. They are due to round-off
errors in the coordinates. Your map sees a difference between, e.g., point
(1.9999, 0.0000, 0.0000) and point (2.0000, 0.0000, 0.0000), while these are
the same point in the model. You have to find a way to make the map search
less precise. I did it with a data type called TVertex that has this
constructor:
TVertex::TVertex(const GLfloat* vertexvalue)
{
    const GLfloat Precision = 100.f;
    // Round to the nearest int so coordinates that differ only by
    // round-off (1.9999 vs. 2.0000) quantize to the same key.
    VertexValue[0] = (int)(vertexvalue[0] * Precision + (vertexvalue[0] < 0 ? -0.5f : 0.5f));
    VertexValue[1] = (int)(vertexvalue[1] * Precision + (vertexvalue[1] < 0 ? -0.5f : 0.5f));
    VertexValue[2] = (int)(vertexvalue[2] * Precision + (vertexvalue[2] < 0 ? -0.5f : 0.5f));
}
This object contains the member "int VertexValue[3]". This way you can tweak
the precision of the map search with the Precision parameter. The map search
is now performed with ints to find an exact match. I think it is faster to
compare ints than floats. With Precision set to 10000.f you get the same
precision in ints as the precision used in the .dat files (four digits after
the decimal point). I used the following comparison operator to order the map:
friend bool operator<(const TVertex& lvertex, const TVertex& rvertex)
{
    // Compare component by component; fall through only on equality.
    if (lvertex.VertexValue[0] != rvertex.VertexValue[0]) {
        return lvertex.VertexValue[0] < rvertex.VertexValue[0];
    }
    if (lvertex.VertexValue[1] != rvertex.VertexValue[1]) {
        return lvertex.VertexValue[1] < rvertex.VertexValue[1];
    }
    return lvertex.VertexValue[2] < rvertex.VertexValue[2]; // false when equal
}
> My current implementation is slow. I'll have to look into speeding it up; I
> have pretty good ideas about where to start. It's fine for reasonably small
> parts, but it gets REALLY slow for large parts; baseplates take forever to
> process. On the plus side, the slowness is only at load time; the
> interactive performance hasn't been significantly impacted. (I'm not sure
> what effect the extra surface normals have on rendering performance, but I
> can't imagine it being too significant.)
My implementation is not very slow. That is because I load each .dat file
only once and keep track of their pointers with a DatName2Pointer map. That
way I can link all data and save memory. I also do smoothing per .dat file.
You can test the speed by copying a lot of .dat files into the testmodels
folder; they are all loaded when the program starts.
I see that you do normal averaging per part. That is a good idea indeed,
because type 5 lines around the edges of primitives do not have to represent
a normal correctly (I read the messages about that). I also discovered that
my algorithm doesn't calculate the right normals around the borders of
surfaces in some cases. For a cylinder it works fine, but with curvature in
two directions it is a bit off around borders: the algorithm also needs the
normal on the other side of the border, or else the averaged normal points
slightly inwards. So normal averaging is needed beyond .dat scope and has to
be scaled up to part scope. Perhaps it is efficient to first average per
.dat file, then detect borders, and later average along the detected borders
at part scope. I am going to try some things and let you know.