Subject: Re: 8-bit floating-point number representations?
Newsgroups: lugnet.off-topic.geek
Date: Thu, 4 Jan 2001 14:51:19 GMT
Viewed: 142 times
Todd Lehman wrote:
>
> In lugnet.off-topic.geek, Frank Filz writes:
> > After thinking about this, I realized most compact binary floating point
> > forms don't allow denormalized numbers. The reason is that if one
> > assures that all numbers are normalized, the bit to the left of the
> > decimal point is always a 1, and therefore need not be stored, thus
> > increasing the precision by 1 bit (which will be extremely significant
> > for an 8 bit float).
> >
> > [...snip code...]
>
> Wow, Frank, I didn't know you geeked like that! All right!
I've been a pretty serious geek, especially with low-level stuff like
this. My first real job involved developing Fortran-style formatted
I/O for the Apple II, which meant digging into the internal guts
of Applesoft BASIC so that we could make use of Applesoft's internal
routines to evaluate expressions and the like.
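The hidden-bit trick from the quoted text can be sketched as a decoder for a hypothetical 8-bit format with 1 sign bit, a 4-bit biased exponent, and a 3-bit stored mantissa (the layout and the bias of 7 are assumptions for illustration, not the snipped code):

```python
def decode_minifloat(b):
    """Decode an assumed 8-bit float: 1 sign, 4 exponent (bias 7), 3 mantissa bits.

    Because every nonzero value is kept normalized, the leading 1 bit
    is never stored -- it is restored here, buying one extra bit of precision.
    """
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0x0F
    frac = b & 0x07
    if exp == 0:
        # No denormals in this scheme: an all-zero exponent simply means zero.
        return sign * 0.0
    # Implicit leading 1 plus the 3 stored fraction bits.
    return sign * (1.0 + frac / 8.0) * 2.0 ** (exp - 7)
```

For example, `decode_minifloat(0x38)` yields 1.0 and `decode_minifloat(0x3C)` yields 1.5; the smallest normal value in this assumed layout is 2**-6.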
> > I've probably made some mistakes, but the above gives an idea of what
> > needs to be done at least.
>
> It's past my bedtime, so I'll have to think more about it another time, but
> you've definitely given me hope that it could be done pretty efficiently.
> As a bonus in this case, the numbers are always non-negative, so that would
> mean one more bit for the mantissa. I'm starting to wonder, though, whether
> 8-bit numbers would actually be faster in the long run than 64-bit double-
> precision values -- because once the vector is in memory as a memory-mapped
> file, the speed is no longer dependent on the size; with enough RAM, the
> entire file would stick around in the filesystem cache, or it could even be
> locked and held as a dedicated in-memory resource.
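Todd's point about reclaiming the sign bit can be illustrated with an unsigned layout (4 exponent bits, 4 mantissa bits, bias 7 -- all assumed values). Pairing it with a precomputed 256-entry table also speaks to the speed question: converting a stored byte back to a double becomes a single array index.

```python
def decode_unsigned(b):
    """Assumed unsigned 8-bit float: 4-bit exponent (bias 7), 4-bit mantissa.

    With no negative values to represent, the sign bit is reassigned to
    the mantissa, halving the relative spacing between adjacent values.
    """
    exp = b >> 4
    frac = b & 0x0F
    if exp == 0:
        return 0.0  # still no denormals: zero exponent means zero
    return (1.0 + frac / 16.0) * 2.0 ** (exp - 7)

# One-time table: afterwards, converting any stored byte is just LUT[b].
LUT = [decode_unsigned(b) for b in range(256)]
```

With the table built once up front, the per-value conversion cost is a single indexed load, so the 8-bit representation only pays its decoding price at startup.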
The biggest speed gain may come from avoiding floating point values
altogether. I haven't grokked around in the CPU internals since the
100 MHz Pentium days, but floating point carries at least one definite
performance hit: the additional FPU context that may need to be saved
on task switches (though the Pentium supports a scheme that delays
saving the FPU context until a different task actually needs the FPU),
and there is probably still a per-operation penalty for FPU instructions.
I tend to be concerned about memory accesses. While they are much faster
than disk accesses, they can still slow things down unless the data has
significant locality, which tends to keep more of it in the CPU cache.
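The locality concern can be demonstrated with a rough sketch: summing the same array in sequential versus shuffled order does identical arithmetic, but the shuffled walk defeats the cache and prefetcher. (In CPython the interpreter overhead blurs the effect, so treat the timings as illustrative only.)

```python
import array
import random
import time

N = 1 << 20
# A flat array of doubles, standing in for the memory-mapped vector.
data = array.array('d', (float(i & 0xFF) for i in range(N)))

def walk(indices):
    """Sum data[] in the given visiting order, returning (sum, seconds)."""
    start = time.perf_counter()
    total = 0.0
    for i in indices:
        total += data[i]
    return total, time.perf_counter() - start

seq = list(range(N))
rnd = list(range(N))
random.shuffle(rnd)

sum_seq, t_seq = walk(seq)
sum_rnd, t_rnd = walk(rnd)
# Same arithmetic either way; only the access pattern differs.
```

In a compiled language the gap between `t_seq` and `t_rnd` would be far more dramatic, since nothing but the memory system separates the two runs.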
What are the range and characteristics of the numbers you are dealing
with?
--
Frank Filz
-----------------------------
Work: mailto:ffilz@us.ibm.com (business only please)
Home: mailto:ffilz@mindspring.com