[CLN-list] More bugs.

Richard B. Kreckel kreckel at thep.physik.uni-mainz.de
Tue Dec 14 00:26:16 CET 2004


On Monday, 13 December 2004, Isidro Cachadiña Gutiérrez wrote:
> On Friday, 10 December 2004 at 22:48, Richard B. Kreckel wrote:
> > Hi again,
> >
> > On Fri, 10 Dec 2004, Isidro Cachadiña Gutiérrez wrote:
> > > #include <iostream>
> > > #include <cln/lfloat.h>
> > > #include <cln/lfloat_io.h>
> > >
> > >
> > > using namespace cln;
> > > int main(int argc, char **argv)
> > > {
> > >
> > >   default_float_format=float_format(100);
> > >   cl_LF a,b,c;
> > >   b="1e-60";
> > >   a="1.0";
> > >   c=a+b;
> > >   std::cerr << "c=" << c << std::endl;
> > >   c=c-a;
> > >   std::cerr << "c=" << c << std::endl;
> > > }
> > >
> > > And the outputs are 1.0L and 0.0L.  What happens with the
> > > default_float_format?  If I write a="1.00000000000000000000000 ..."
> > > (100 zeros here) I obtain the same result, so b is lost somewhere.
> > >
> > > Moreover, when b is below about 1e-20 it is lost.  Maybe an error in
> > > the conversion?
> > >
> > From the documentation:
> >
> >     4.1.3 Constructing floating-point numbers
> >     -----------------------------------------
> >
> >     `cl_F' objects with low precision are most easily constructed from C
> >     `float' and `double'. See *Note Conversions::.
> >
> >   ........
>
> Hello Richard and Bruno:
>
> I'm sorry to tell you that you didn't understand what I wanted to say: it
> is related to the use of the global variable default_float_format, not to
> the floating-point representation.  But I will explain it.
>
> Let's begin.
>
> First, according to your documentation the function
>
> float_format_t float_format (uintL n)
>
> "Returns the smallest float format which guarantees at least n _decimal_
> digits in the mantissa (after the decimal point)" (note the _underlined_
> word decimal).
>
> That is, if q is the number of binary digits of the mantissa, we can
> establish the relation
>
>     2^-q = 10^-n
>
> so that   q = n log 10 / log 2
>
> and with n = 100 this gives q = 332 bits, approximately.  Then, following
> the documentation, the line
>
>    default_float_format=float_format(100);
>
> should guarantee at least 332 bits in the mantissa, approximately, right?

Basically yes, that's right.  On a machine with 32-bit words
(intDsize==32) that would be rounded up to some 352 bits in the mantissa
(332 bits occupy 10.4 words; rounding up to 11 words of 32 bits gives 352)
because CLN doesn't bother to use fractions of words.

> When I define
>
>    b="1e-60";
>    a="1.0";
>
> Can I expect that these numbers will be represented with 332 mantissa
> bits, or not?

No.  Use cl_float(x,y) for that.  And remember that cl_float returns a
cl_F, not a cl_LF.
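
For instance, here is a minimal sketch of your test program rewritten that
way (the "_100" precision suffix follows the read syntax documented further
down; treat the exact spelling as my assumption):

    #include <iostream>
    #include <cln/float.h>
    #include <cln/float_io.h>

    using namespace cln;

    int main()
    {
        // Request at least 100 decimal digits explicitly in each conversion.
        float_format_t prec = float_format(100);
        cl_F a = cl_float(1, prec);   // exact integer -> ~332-bit float
        cl_F b = "1e-60_100";         // string with explicit precision suffix
        cl_F c = a + b;
        std::cout << "c   = " << c << std::endl;
        std::cout << "c-a = " << c - a << std::endl;  // 1e-60 now survives
    }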

> If not, there is a bug, because the global variable has no effect.  Or is
> it a feature?  I'll discuss it later...

Feature.  goto later;

> If yes, there is a bug, because in the result b is truncated at about
> 1e-20, which is approximately 60 mantissa bits, and that is quite close to
> the number of mantissa bits of the double type.

Just one hopefully clarifying remark: This is because you specifically
asked CLN to construct a cl_LF, which is at least as large as double.
You end up with two machine words, which is 64 bits, and that is quite
close to 20 decimal digits.
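
If you want to check what precision you actually got, the manual lists
float_digits() among the functions on floats; a quick sketch (assuming
that name):

    #include <iostream>
    #include <cln/lfloat.h>
    #include <cln/float.h>

    using namespace cln;

    int main()
    {
        cl_LF b = "1e-60";  // no precision suffix: format inferred from string
        // Print the number of mantissa bits actually allocated.
        std::cout << float_digits(b) << " bits" << std::endl;
    }

On a 32-bit machine that prints something near 64, not 332.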

> > `cl_F' objects with low precision are most easily constructed from C
> > `float' and `double'. See *Note Conversions::.
>
> I think (because I have not read all the source code) that there is a
> problem in the decimal-to-binary conversion: it doesn't read the
> default_float_format variable before converting the string to a number,
> and so the variable has no effect.

later:
Nobody expects you to read all the sources.   But the documentation
clearly states (4.11.1 Conversion to floating-point numbers):

    `float_format_t default_float_format'
         Global variable: the default float format used when converting
         rational numbers to floats.

    To convert a real number to a float, each of the types `cl_R', `cl_F',
    `cl_I', `cl_RA', `int', `unsigned int', `float', `double' defines the
    following operations:

    `cl_F cl_float (const TYPE&x, float_format_t f)'
         Returns `x' as a float of format `f'.

    `cl_F cl_float (const TYPE&x, const cl_F& y)'
         Returns `x' in the float format of `y'.

    `cl_F cl_float (const TYPE&x)'
         Returns `x' as a float of format `default_float_format' if it is
         an exact number, or `x' itself if it is already a float.

I think that you are doing things with floating point numbers that are
documented to work with rationals only.
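
For example, a sketch that stays within the documented behaviour: convert
an exact rational, and the global default is honoured.

    #include <iostream>
    #include <cln/rational.h>
    #include <cln/float.h>
    #include <cln/float_io.h>

    using namespace cln;

    int main()
    {
        default_float_format = float_format(100);
        cl_RA tenth = "1/10";        // exact rational, no double involved
        cl_F x = cl_float(tenth);    // converted at default_float_format
        std::cout << x << std::endl; // 0.1 correct to the requested ~100 digits
    }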

> >     To construct a `cl_F' with high precision, you can use the conversion
> >     from `const char *', but you have to specify the desired precision
> >     within the string. (See *Note Internal and printed representation::.)
> >     Example:
> >             cl_F e =
> >     "0.271828182845904523536028747135266249775724709369996e+1_40";
> >     will set `e' to the given value, with a precision of 40 decimal digits.
> >
>
> Then, if you don't consider the above behaviour a bug, I have a wish.  The
> "programmatic way" to define a number with the desired precision is good
> when you know how many decimal digits you need, but, for example, I have a
> case in which I don't know how many digits I will need and have to proceed
> by trial and error.  In that case, default_float_format should behave in
> this way:
>
>    a = 5.0;
>
> then 5.0 has to be converted to a 330-bit float (if I choose 100 decimal
> digits), and in all algebraic operations like
>
>   b=a*2.72/c*7.28.
>
> all numbers have to be converted to the default_float_format that I have
> specified.

No.  In a large project this would not be a good idea because any
deterioration of precision due to lower precision in some of the input
variables would go undetected.
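
(When floats of different precision do meet, the result is rounded to the
less precise format, if I remember the manual correctly -- a sketch,
assuming float_digits() as above:

    #include <iostream>
    #include <cln/float.h>

    using namespace cln;

    int main()
    {
        cl_F wide   = cl_float(1, float_format(100));
        cl_F narrow = "1.5";   // short string, double-like precision
        // The sum silently carries only the narrow precision.
        std::cout << float_digits(wide + narrow) << std::endl;
    }

so one stray low-precision input degrades everything downstream.)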

I do think it was a brilliant idea to require people to write down the
complete cl_float(double,float_format_t) spell.

I also think that something in your approach is broken if you wish to
control the impact of double-precision floating-point variables on
100-decimal-digit floating-point variables.

> >     The programmatic way to construct a `cl_F' with high precision is
> >     through the `cl_float' conversion function, see *Note Conversion to
> >     floating-point numbers::. For example, to compute `e' to 40 decimal
> >     places, first construct 1.0 to 40 decimal places and then apply the
> >     exponential function:
> >             float_format_t precision = float_format(40);
> >             cl_F e = exp(cl_float(1,precision));
> >
> The programmatic way that you have proposed is not good for long projects,
> because you have to define a precision variable and then apply it to all
> the conversions, and things like b=a*2.72/c*7.28 become the ugly
>
>   b = a*cl_float(2.72,precision)/c*cl_float(7.28,precision);

Unless that statement is inside a loop anyway (in which case the constant
factor ought to be hoisted out), I would rather write such things as
    b = cl_float( 2.72*7.28, precision ) * a / c;
for efficiency reasons.  Or did you mean
    b = cl_float( 2.72/7.28, precision ) * a / c;
by chance?
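
And if such a statement does sit inside a loop, hoisting keeps the
conversion out of the hot path; a sketch with made-up data:

    #include <cstddef>
    #include <iostream>
    #include <vector>
    #include <cln/float.h>
    #include <cln/float_io.h>

    using namespace cln;

    int main()
    {
        float_format_t precision = float_format(100);
        std::vector<cl_F> a(3, cl_float(1, precision));
        cl_F c = cl_float(3, precision);

        // One cl_float call before the loop, not one per iteration.
        const cl_F k = cl_float(2.72*7.28, precision);
        for (std::size_t i = 0; i < a.size(); ++i)
            std::cout << k * a[i] / c << std::endl;
    }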

> But... let us read my first e-mail...
>
> >#include <iostream>
> >#include <cln/real.h>
> >#include <cln/real_io.h>
>
> >using namespace cln;
>
> >int main(int argc, char **argv)
> >{
> >  default_float_format=float_format(100);
> >  cl_R a,b;
> >  b=cl_float(1e-1,default_float_format);
> >  a=cl_float(1,default_float_format);
> >  a=a+b;
> >  std::cerr << "a=" << a << std::endl;
> >}
>
> >The output is:
>
> >a=1.1000000000000000055511151231257827021181583404541015625L0
> >                                        ^ Why are these noisy digits here?
> >They are so close...
>
>
> Oops.  It doesn't work.

*What* does not work?

>                         A bug?  The difference is that I used
> default_float_format instead of precision

You are not implying that it makes a difference if you use your own
variable of type float_format_t instead of the global
default_float_format, are you?  If you think you have found a bug, please
send a test program as small as possible, tell us what it does for you,
and what you expected it to do.  Otherwise, we won't be able to help you.

>                                         and the conversion of 1e-1 was done
> with approximately double precision, because the noisy digits 55511... start
> at about the 19th decimal digit, not at the desired 100th.

It seems that there is still no Spanish translation of
<http://www.chiark.greenend.org.uk/~sgtatham/bugs.html>.  Oh well!
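
The point, once more: the C literal 1e-1 is rounded to a 53-bit double
before CLN ever sees it, so cl_float() can only pad that already-rounded
value; the noise is inherited from double.  A sketch of the contrast,
using an exact rational instead:

    #include <iostream>
    #include <cln/rational.h>
    #include <cln/float.h>
    #include <cln/float_io.h>

    using namespace cln;

    int main()
    {
        default_float_format = float_format(100);
        cl_F noisy = cl_float(1e-1, default_float_format);          // via double
        cl_F clean = cl_float(cl_RA("1/10"), default_float_format); // exact input
        std::cout << "noisy = " << noisy << std::endl;
        std::cout << "clean = " << clean << std::endl;
    }

noisy shows the 55511... digits; clean does not.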

> > It appears that you are still totally confused about the relation between
> > decimal and binary representation.  Please do have a close look at the
> > paper by David Goldberg that Bruno Haible recommended in an earlier
> > thread!
>
> If, with all the things that I wrote above, you still think that I am
> confused about the relation between decimal and binary representation, then
> I have a big problem, because I think my first two mails were quite clear
> about the problem.

Maybe you aren't confused.  But your bug reports sure are confusing.  :-)

[...]
> Let's see if this time we understand the problem.

It seems not.

See you
  -richy.
-- 
Richard B. Kreckel
<http://www.ginac.de/~kreckel/>



