From bagnara at cs.unipr.it Wed Jan 9 17:21:05 2002
From: bagnara at cs.unipr.it (Roberto Bagnara)
Date: Wed, 09 Jan 2002 17:21:05 +0100
Subject: The specification of sqrfree()
Message-ID: <3C3C6DF1.B117F8EF@cs.unipr.it>
Hi there,
we are developing a GiNaC-based recurrence relations solver.
During this work we have found that GiNaC's documentation
is not very precise about what a "square-free factorization" is.
Below you find what we believe is a sensible definition
(which also seems to be compatible with the current implementation).
Please, check if that is also consistent with the specification
of GiNaC (we would like to avoid relying on non-features that
may disappear on a subsequent release).
A polynomial p(X) in Q[X] is said to be square-free
if, whenever two polynomials q(X) and r(X) in Q[X]
are such that p(X) = q(X)^2*r(X), q(X) is constant.
The sqrfree function computes polynomials p1(X), ..., pk(X)
and positive integers e1, ..., ek such that
p(X) = p1(X)^e1 * ... * pk(X)^ek
and p1(X) * ... * pk(X) is square-free.
If you agree with the above definition we are willing to provide
Doxygen and TeXinfo patches against the current sources.
All the best,
the PURRS team
--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagnara at cs.unipr.it
From kreckel at thep.physik.uni-mainz.de Wed Jan 9 18:21:33 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Wed, 9 Jan 2002 18:21:33 +0100 (CET)
Subject: The specification of sqrfree()
In-Reply-To: <3C3C6DF1.B117F8EF@cs.unipr.it>
Message-ID:
Hi,
On Wed, 9 Jan 2002, Roberto Bagnara wrote:
[...]
> During this work we have found that GiNaC's documentation
> is not very precise about what a "square-free factorization" is.
Yeah, the definition given there is not very strict... :-)
> Below you find what we believe is a sensible definition
> (which also seems to be compatible with the current implementation).
> Please, check if that is also consistent with the specification
> of GiNaC (we would like to avoid relying on non-features that
> may disappear on a subsequent release).
>
> A polynomial p(X) in Q[X] is said square-free
> if, whenever two polynomials q(X) and r(X) in Q[X]
> are such that p(X) = q(X)^2*r(X), q(X) is constant.
I had to read this three times. Do we agree to read `X' as an n-tuple of
symbols? Then I thought this definition did not account for the
square-free factorization of p(a,b,c,d) = a*c - b*c - a*d + b*d into
(a-b)*(c-d), which is now handled -- this being the change that went into
version 1.0.1. But now methinks your definition does indeed cover this.
Isn't there a canonical definition for the multivariate case in the
literature?
And at least over Z[X] and Q[X], you can rely on this extended behaviour.
Maple and Mathematica do the same and I need it for my work.
If you think it over again with the above case in mind and find that it's
okay, a patch for the documentation would be welcome.
Regards
-richy.
--
Richard B. Kreckel
From stefanw at fis.unipr.it Thu Jan 10 14:04:21 2002
From: stefanw at fis.unipr.it (Stefan Weinzierl)
Date: Thu, 10 Jan 2002 14:04:21 +0100 (CET)
Subject: News from the gTybalt corner
Message-ID:
Dear all,
I wrote a library based on GiNaC which can handle the expansion
of transcendental functions in a small parameter, like for
example
3F2(-2eps,-2eps;1-eps,1-2eps,1-2eps;x) = 1 + 4 Li2(x) eps^2
+ O(eps3)
This is a domain where commercial CAS usually have no clue at
all.
The library is available under GPL from
http://www.fis.unipr.it/~stefanw/nestedsums
I should also say that at the moment you need GiNaC version 0.8.3;
a migration to the current GiNaC version is planned for the future.
The rest of this mail is more for the developers of GiNaC:
First of all, my thanks to the developers of GiNaC; it
proved to be quite a solid piece of software.
For a part of my program I could run a benchmark against a
corresponding program written in FORM, and it turned out
that the GiNaC/C++ code was roughly a factor of two faster
(as long as all expressions fitted into the available RAM).
Since FORM is mainly known for its speed, this is quite an
achievement.
To the more technical points:
The algorithms for the expansion of transcendental functions
rely heavily on various algebras: e.g. you have two
elements a1 and a2 in an algebra A and you can multiply them
to get a third element:
a1 * a2 = a3
Although the algebras are all commutative, I implemented them
as non-commutative objects, then GiNaC automatically groups
elements of the same algebra together and I only have to supply
the actual multiplication routine in the method
simplify_ncmul
I think this is handled elegantly and efficiently in GiNaC.
However, the expand function does not handle cases where not all
types are the same.
I can even blame myself for making the suggestion that the class
add should throw an exception if it encounters an expression like
1 + A,
where A is non-commutative.
Right now one is supposed to write
ONE_in_A + A,
where ONE_in_A is the unit in the algebra of A.
With more than one algebra this becomes rather ugly and inefficient.
My suggestion would therefore be to think about a class
mixed_type_add in GiNaC, which should have a similar relation
to class add as class ncmul has to class mul:
If all terms in a sum are of the same type, they will end up in class
add, otherwise in this new class.
I think that can be implemented efficiently, such that users who do not
care about non-commutative objects will not suffer any severe penalty.
In addition, pedants can check their expressions at run-time and
start a panic attack if they encounter a class mixed_type_add.
In short, one would have a container, where you can put an "apple" and
a "potato" in.
I had a look at the GiNaC source code to see how one would do it, but since
it involves quite a bit of cross-linking, I would not directly volunteer
for it.
What do you think?
Best wishes,
Stefan
From kreckel at thep.physik.uni-mainz.de Fri Jan 11 16:30:47 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Fri, 11 Jan 2002 16:30:47 +0100 (CET)
Subject: News from the gTybalt corner
In-Reply-To:
Message-ID:
Hi,
On Thu, 10 Jan 2002, Stefan Weinzierl wrote:
[...]
Glad it turned out helpful!
[...]
> However, the expand function does not handle cases if not all
> types are the same.
> I can even blame myself for making the suggestion that the class
> add should throw an exception if it encounters an expression like
> 1 + A,
> where A is non-commutative.
> Right now one is supposed to write
> ONE_in_A + A,
> where ONE_in_A is the unit in the algebra of A.
> With more than one algebra this becomes rather ugly and inefficient.
>
> My suggestion would be therefore to think about a class
> mixed_type_add in GiNaC, which should have a similar relation
> with class add as class ncmul has with class mul:
> If all terms in a sum are of the same type, they will end up in class
> add, otherwise in this new class.
>
> I think, that can be implemented efficiently, such that users who do not
> care about non-commutative objects will not suffer any severe penalty.
> In addition, pedants can check their expressions at run-time and
> start a panic attack if they encounter a class mixed_type_add.
>
> In short, one would have a container, where you can put an "apple" and
> a "potato" in.
>
> I had a look at the GiNaC source code, how one would do it, but since
> it involves quite a bit of cross links, I would not directly volunteer
> for it.
> What do you think ?
Sounds ugly, doesn't it? Adding an SU(2) object to an SU(3) object does
not make sense mathematically. As you say yourself, you are adding
"apples" to "potatoes"...
Let's think: back to the example about SU(2) and SU(3), one should not add
\sigma_1 to \lambda_3. This is reflected in add::return_type() and
add::return_type_tinfo() which don't even bother traversing the sum! The
invariant that the sum makes sense is not even checked. (Thinking about
it, it probably should be checked #if defined(DO_GINAC_ASSERT).)
However, you can well add \sigma_1 to \lambda_3 when you first multiply
\sigma_1 with the one in SU(3) and the \lambda_3 with the one in SU(2).
A mul object is basically a tensorial product, here.
When you do all this rigorously, you could even sort out the elements
properly. It would be some combination of calls to collect() and
coeff(). Is that what you want?
Regards
-richy.
--
Richard B. Kreckel
From bagnara at cs.unipr.it Tue Jan 15 10:18:43 2002
From: bagnara at cs.unipr.it (Roberto Bagnara)
Date: Tue, 15 Jan 2002 10:18:43 +0100
Subject: End of a nightmare: patch for GiNaC's acinclude.m4
Message-ID: <3C43F3F3.4AE45FF3@cs.unipr.it>
A student of mine started having troubles with her Linux machine at
home soon after she started playing with GiNaC. The machine seemed to
become unstable after a few days of work and she reinstalled GNU/Linux
several times because of that. A few days ago, we discovered that
/dev/null was no longer a character device: it was an ordinary file
and that caused the system not to boot properly. We recreated the
/dev/null device and everything seemed to go well until this morning,
when the device disappeared again. BUT this time Tatiana provided
the relevant bit of information; here it is:
...
config.status: creating doc/reference/Makefile
config.status: creating config.h
**** The following problems have been detected by configure.
**** Please check the messages below before running "make".
**** (see the section 'Common Problems' in the INSTALL file)
** No suitable installed version of CLN could be found.
deleting cache /dev/null
[root at crystal GiNaC-1.0.3]#
Aaargh!!! It was GiNaC configuration erasing /dev/null!
Yeah, right, there is no need to run `configure' while
being root. However, I feel that in this case the price that
had to be paid was a bit too high. That is why I propose
that the following patch be applied to `acinclude.m4':
if `cache_file' is `/dev/null', do not delete it.
All the best,
Roberto
--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagnara at cs.unipr.it
diff -rcp GiNaC-1.0.3.orig/acinclude.m4 GiNaC-1.0.3/acinclude.m4
*** GiNaC-1.0.3.orig/acinclude.m4 Tue Nov 20 18:42:05 2001
--- GiNaC-1.0.3/acinclude.m4 Tue Jan 15 09:56:33 2002
*************** if test "x${ginac_error}" = "xyes"; then
*** 86,93 ****
if test "x${ginac_warning_txt}" != "x"; then
echo "${ginac_warning_txt}"
fi
! echo "deleting cache ${cache_file}"
! rm -f $cache_file
else
if test x$ginac_warning = xyes; then
echo "=== The following minor problems have been detected by configure."
--- 86,95 ----
if test "x${ginac_warning_txt}" != "x"; then
echo "${ginac_warning_txt}"
fi
! if test "x$cache_file" != "x/dev/null"; then
! echo "deleting cache ${cache_file}"
! rm -f $cache_file
! fi
else
if test x$ginac_warning = xyes; then
echo "=== The following minor problems have been detected by configure."
From stefanw at fis.unipr.it Tue Jan 15 11:23:46 2002
From: stefanw at fis.unipr.it (Stefan Weinzierl)
Date: Tue, 15 Jan 2002 11:23:46 +0100 (CET)
Subject: News from the gTybalt corner
In-Reply-To:
Message-ID:
On Fri, 11 Jan 2002, Richard B. Kreckel wrote:
> > My suggestion would be therefore to think about a class
> > mixed_type_add in GiNaC, which should have a similar relation
> > with class add as class ncmul has with class mul:
> > If all terms in a sum are of the same type, they will end up in class
> > add, otherwise in this new class.
> >
> > I think, that can be implemented efficiently, such that users who do not
> > care about non-commutative objects will not suffer any severe penalty.
> > In addition, pedants can check their expressions at run-time and
> > start a panic attack if they encounter a class mixed_type_add.
> >
> > In short, one would have a container, where you can put an "apple" and
> > a "potato" in.
> >
> Sounds ugly, doesn't it? Adding an SU(2) object to an SU(3) object does
> not make sense mathematically. As you are saying yourself, you are adding
> "apples" to "potatoes"...
>
> Let's think: back to the example about SU(2) and SU(3), one should not add
> \sigma_1 to \lambda_3. This is reflected in add::return_type() and
> add::return_type_tinfo() which don't even bother traversing the sum! The
> invariance that the sum makes sense is not even checked. (Thinking about
> it, it probably should be checked #if defined(DO_GINAC_ASSERT).)
>
> However, you can well add \sigma_1 to \lambda_3 when you first multiply
> \sigma_1 with the one in SU(3) and the \lambda_3 with the one in SU(2).
> A mul object is basically a tensorial product, here.
>
> When you do all this rigorously, you could even sort out the elements
> properly. It would be some combination of calls to collect() and
> coeff(). Is it that, what you want?
>
Hi Richy,
what I would like to do is to write
\sigma_1 + \lambda_3
when I mean
\sigma_1 * ONE_su3 + ONE_su2 * \lambda_3
to avoid a proliferation of unit elements of various algebras.
This would give a better readability of results and would be more
efficient.
Calls to coeff() and collect() have to traverse the whole expression
tree, and in routines which have to be efficient I would like to avoid
such "global" operations as much as possible.
A container for the addition of mixed type elements would just be the
missing piece in GiNaC.
Of course, one can have a philosophical discussion about whether it makes
sense to write \sigma_1+\lambda_3 down in the first place, but I'm
using the non-commutative feature in a slightly different context.
It is more about expressions of the form
1 + x + pow(x,2) + LOG(x)
where LOG(x) is an instance of a class with special simplification rules:
LOG(x)*LOG(y) = Li2(x*y) + other terms
This multiplication rule can be implemented very elegantly in GiNaC if I
declare the "LOG"-class non-commutative and put the multiplication in
simplify_ncmul.
But a lot of this elegance and efficiency is lost if I'm forced to write
1*ONE + x*ONE + pow(x,2)*ONE + LOG(x)
where ONE is the unit in the algebra of the "LOG"'s.
Best wishes,
Stefan
From kreckel at thep.physik.uni-mainz.de Tue Jan 15 12:55:03 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Tue, 15 Jan 2002 12:55:03 +0100 (CET)
Subject: End of a nightmare: patch for GiNaC's acinclude.m4
In-Reply-To: <3C43F3F3.4AE45FF3@cs.unipr.it>
Message-ID:
On Tue, 15 Jan 2002, Roberto Bagnara wrote:
> diff -rcp GiNaC-1.0.3.orig/acinclude.m4 GiNaC-1.0.3/acinclude.m4
> *** GiNaC-1.0.3.orig/acinclude.m4 Tue Nov 20 18:42:05 2001
> --- GiNaC-1.0.3/acinclude.m4 Tue Jan 15 09:56:33 2002
> *************** if test "x${ginac_error}" = "xyes"; then
> *** 86,93 ****
> if test "x${ginac_warning_txt}" != "x"; then
> echo "${ginac_warning_txt}"
> fi
> ! echo "deleting cache ${cache_file}"
> ! rm -f $cache_file
> else
> if test x$ginac_warning = xyes; then
> echo "=== The following minor problems have been detected by configure."
> --- 86,95 ----
> if test "x${ginac_warning_txt}" != "x"; then
> echo "${ginac_warning_txt}"
> fi
> ! if test "x$cache_file" != "x/dev/null"; then
> ! echo "deleting cache ${cache_file}"
> ! rm -f $cache_file
> ! fi
> else
> if test x$ginac_warning = xyes; then
> echo "=== The following minor problems have been detected by configure."
Fixed in CVS. Thanks a lot! Brown paper bag applied.
BTW: this particular error/warning accumulator was lifted from
another package. It is not an uncommon macro, so I suspect other packages
are broken in the same way. The bug was latent with older autoconf, but the
switch to autoconf 2.50 made it surface: `config.cache' is now disabled by
default (because it tended to confuse new users) and must be switched on
explicitly with `./configure -C'.
Ouch, this hurts
-richy.
--
Richard B. Kreckel
From markus.nullmeier at urz.uni-heidelberg.de Fri Jan 18 15:44:36 2002
From: markus.nullmeier at urz.uni-heidelberg.de (Markus Nullmeier)
Date: Fri, 18 Jan 2002 15:44:36 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
Message-ID: <200201181444.PAA66182@aixterm7.urz.uni-heidelberg.de>
Hello,
being unaware of GiNaC's implementation, I wrote my own
Bernoulli number function after reading Peter Stone's
straightforward description in his Maple Worksheet
http://minyos.its.rmit.edu.au/~e05254/MAPLE/ENGMATH/TAYLOR/BERNUM.MWS
Later I adapted my function to bernoulli() in numeric.cpp, as it
turned out to be about twice as fast as GiNaC 1.0.3 (gcc 2.95, -O2).
Thus maybe you'll find the enclosed patch useful in some
way or the other. Hopefully no hidden bugs remain.
Markus
--- GiNaC-1.0.3/ginac/numeric.cpp Wed Dec 19 12:11:46 2001
+++ GiNaC-1.0.3_new_bernoulli/ginac/numeric.cpp Fri Jan 18 14:44:16 2002
@@ -1512,7 +1512,7 @@
// But if somebody works with the n'th Bernoulli number she is likely to
// also need all previous Bernoulli numbers. So we need a complete remember
// table and above divide and conquer algorithm is not suited to build one
- // up. The code below is adapted from Pari's function bernvec().
+ // up.
//
// (There is an interesting relation with the tangent polynomials described
// in `Concrete Mathematics', which leads to a program twice as fast as our
@@ -1520,38 +1520,38 @@
// addition to the remember table. This doubles the memory footprint so
// we don't use it.)
+ unsigned n = nn.to_int();
+
// the special cases not covered by the algorithm below
- if (nn.is_equal(_num1))
- return _num_1_2;
- if (nn.is_odd())
- return _num0;
-
+ if (n & 1)
+ return (n == 1) ? _num_1_2 : _num0;
+ if (!n)
+ return _num1;
+
// store nonvanishing Bernoulli numbers here
static std::vector< cln::cl_RA > results;
- static int highest_result = 0;
- // algorithm not applicable to B(0), so just store it
- if (results.empty())
- results.push_back(cln::cl_RA(1));
-
- int n = nn.to_long();
- for (int i=highest_result; i<n/2; ++i) {
- cln::cl_RA B = 0;
- long n = 8;
- long m = 5;
- long d1 = i;
- long d2 = 2*i-1;
- for (int j=i; j>0; --j) {
- B = cln::cl_I(n*m) * (B+results[j]) / (d1*d2);
- n += 4;
- m += 2;
- d1 -= 1;
- d2 -= 2;
- }
- B = (1 - ((B+1)/(2*i+3))) / (cln::cl_I(1)<<(2*i+2));
- results.push_back(B);
- ++highest_result;
- }
- return results[n/2];
+ static unsigned next_r = 0;
+
+ // algorithm not applicable to B(2), so just store it
+ if (!next_r) {
+ results.push_back(cln::recip(cln::cl_RA(6)));
+ next_r = 4;
+ }
+ for (unsigned p = next_r; p <= n; p += 2) {
+ cln::cl_I c = 1;
+ cln::cl_RA b = cln::cl_RA(1-p)/2;
+ unsigned p3 = p+3;
+ unsigned p2 = p+2;
+ unsigned pm = p-2;
+ unsigned i, k;
+ for (i=2, k=0; i <= pm; i += 2, k++) {
+ c = cln::exquo(c * ((p3 - i)*(p2 - i)), (i - 1)*i);
+ b = b + c * results[k];
+ }
+ results.push_back(- b / (p + 1));
+ next_r += 2;
+ }
+ return results[n/2 - 1];
}
From kreckel at thep.physik.uni-mainz.de Fri Jan 18 18:05:24 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Fri, 18 Jan 2002 18:05:24 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201181444.PAA66182@aixterm7.urz.uni-heidelberg.de>
Message-ID:
On Fri, 18 Jan 2002, Markus Nullmeier wrote:
> being unaware of GiNaC's implementation, I wrote my own
> Bernoulli number function after reading Peter Stone's
> straightforward description in his Maple Worksheet
> http://minyos.its.rmit.edu.au/~e05254/MAPLE/ENGMATH/TAYLOR/BERNUM.MWS
>
> Later I adapted my function to bernoulli() in numeric.cpp, as it
> turned out to be about twice as fast as GiNaC 1.0.3 (gcc 2.95, -O2).
>
> Thus maybe you'll find the enclosed patch useful in some
> way or the other. Hopefully no hidden bugs remain.
[...]
Cool. Fast. It's included now.
Thanks!
-richy.
--
Richard B. Kreckel
From markus.nullmeier at urz.uni-heidelberg.de Fri Jan 18 19:34:12 2002
From: markus.nullmeier at urz.uni-heidelberg.de (Markus Nullmeier)
Date: Fri, 18 Jan 2002 19:34:12 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To:
from "Richard B. Kreckel" at "Jan 18, 2002 06:05:24 pm"
Message-ID: <200201181834.TAA73990@aixterm7.urz.uni-heidelberg.de>
Maybe bernoulli() should warn if some user tries to feed it
an even number greater than 23168, since that would give an
integer >= 2^29, wrongly converted to cln::cl_I in the line
with "exquo".
On the other hand, I don't know how much time and memory
would be wasted before that happens. Already bernoulli(5000)
takes 10 minutes and 10 megs on an Athlon 900.
Markus
From kreckel at thep.physik.uni-mainz.de Fri Jan 18 19:46:26 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Fri, 18 Jan 2002 19:46:26 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201181834.TAA73990@aixterm7.urz.uni-heidelberg.de>
Message-ID:
On Fri, 18 Jan 2002, Markus Nullmeier wrote:
> Maybe bernoulli() should warn if some user tries to feed it
> an even number greater than 23168, since that would give an
> integer >= 2^29, wrongly converted to cln::cl_I in the line
> with "exquo".
Hmm, I think a simple cast to long would get the job done, in the same
manner as the numeric ctors from builtin types work.
Regards
-richy.
--
Richard B. Kreckel
From markus.nullmeier at urz.uni-heidelberg.de Fri Jan 18 20:48:54 2002
From: markus.nullmeier at urz.uni-heidelberg.de (Markus Nullmeier)
Date: Fri, 18 Jan 2002 20:48:54 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To:
from "Richard B. Kreckel" at "Jan 18, 2002 07:48:16 pm"
Message-ID: <200201181948.UAA25824@aixterm7.urz.uni-heidelberg.de>
From: Richard B. Kreckel
> Hmm, I think a simple cast to long would get the job done, in the same
> manner as the numeric ctors from builtin types work.
Yes, a change to `long' would push the limit for even arguments to
65534. However I think the real problem lies in the "philosophical"
nature of my question. I guess nobody will want to calculate Bernoulli
numbers this big (the limit of the 1.0.3 code seems to be 8190).
Thus I think things could be left as they are, since the CLN manual
hints that conversions from `long' are less efficient. By altogether
abolishing the theoretical limit and letting CLN calculate (p3-i)*(p2-i),
we would slow the procedure down by a few percent without any real gain.
But I suppose this road should be taken if 23168 did become an issue, like
if (p < 23168) { normal_inner_loop; } else { slow_inner_loop_with_CLN; }
If you like this better I can make a patch.
Markus
From kreckel at thep.physik.uni-mainz.de Fri Jan 18 21:03:44 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Fri, 18 Jan 2002 21:03:44 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201181948.UAA25824@aixterm7.urz.uni-heidelberg.de>
Message-ID:
On Fri, 18 Jan 2002, Markus Nullmeier wrote:
> Yes, a change to `long' would push the limit for even arguments to
> 65534. However I think the real problem lies in the "philosophical"
> nature of my question. I guess nobody will want to calculate Bernoulli
> numbers this big (the limit of the 1.0.3 code seems to be 8190).
Sure, they are notoriously intractable. But was there really such a
limit in the old code? I was under the impression that I once had it
compute B_{30000} but I might be wrong...
> Thus I think things could be left as they are, since the CLN manual
> hints that conversions from `long' are less efficient.
They involve a function call and the constructed number isn't immediate
any more but heap-allocated instead. But I doubt you'll see the difference
in this case.
> By altogether
> abolishing the theoretical limit and letting CLN calculate (p3-i)*(p2-i),
> we would slow the procedure down by some per cent without any real gain.
> But I suppose this road should be taken if 23168 did become an issue, like
> if (p < 23168) { normal_inner_loop; } else { slow_inner_loop_with_CLN; }
> If you like this better I can make a patch.
I do not think the difference in times will be worth the effort, but I
haven't tried. Making it safe, such that somebody who wants to see it
break down would have to let it run for a week or so, is more important.
Regards
-richy.
--
Richard B. Kreckel
From markus.nullmeier at urz.uni-heidelberg.de Sat Jan 19 00:49:53 2002
From: markus.nullmeier at urz.uni-heidelberg.de (Markus Nullmeier)
Date: Sat, 19 Jan 2002 00:49:53 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To:
from "Richard B. Kreckel" at "Jan 18, 2002 09:03:44 pm"
Message-ID: <200201182349.AAA67130@aixterm7.urz.uni-heidelberg.de>
> Sure, they are notoriously intractable. But was there really such a
> limit in the old code? I was under the impression that I once had it
> compute B_{30000} but I might be wrong...
Oh, sorry about the confusion. I overlooked some points while reading
the 1.0.3 code. I should rather have said that the (long) value of (m*n)
overflows 2^32-1 for B_{32768}. I'm guessing that this would impact the
calculated values.
> I do not think the difference in times will be worth the effort, but I
> haven't tried. Making it safe such that somebody who wants to see it
> break down would have to let it run for a week or so is more important.
Make-safe patch to follow ... :)
Markus
From kreckel at thep.physik.uni-mainz.de Sat Jan 19 16:05:27 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Sat, 19 Jan 2002 16:05:27 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201182349.AAA67130@aixterm7.urz.uni-heidelberg.de>
Message-ID:
Hi,
On Sat, 19 Jan 2002, Markus Nullmeier wrote:
> > Sure, they are notoriously intractable. But was there really such a
> > limit in the old code? I was under the impression that I once had it
> > compute B_{30000} but I might be wrong...
>
> Oh, sorry about this confusion. I overlooked some points while reading
> the 1.0.3 code. I rather should have said that the (long) value of (m*n)
> overflows 2^32-1 for B_{32768}. I'm guessing that this should impact the
> calculated values.
Ah, okay.
> > I do not think the difference in times will be worth the effort, but I
> > haven't tried. Making it safe such that somebody who wants to see it
> > break down would have to let it run for a week or so is more important.
>
> Make-safe patch to follow ... :)
While you are at it, could you also make sure to call resize() on results
with an appropriate argument before entering the loop in which the
push_back() is being done? This is because push_back() is likely to
trigger a couple of reallocations in there, and reallocation is quite
expensive for a vector, since its elements are contiguous in memory.
(The old code should already have done that.)
Regards
-richy.
--
Richard B. Kreckel
From gregod at cs.rpi.edu Sat Jan 19 16:25:15 2002
From: gregod at cs.rpi.edu (Douglas Gregor)
Date: Sat, 19 Jan 2002 10:25:15 -0500
Subject: Alternative bernoulli(const numeric &)
In-Reply-To:
References:
Message-ID: <200201191524.g0JFOgu03784@mailout6-0.nyroc.rr.com>
On Saturday 19 January 2002 10:05 am, you wrote:
> While you are at it, could you also make sure to call resize() on results
Just to be picky, you want to call reserve() and not resize().
Doug
From kreckel at thep.physik.uni-mainz.de Sat Jan 19 16:31:22 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Sat, 19 Jan 2002 16:31:22 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201191524.g0JFOgu03784@mailout6-0.nyroc.rr.com>
Message-ID:
On Sat, 19 Jan 2002, Douglas Gregor wrote:
> On Saturday 19 January 2002 10:05 am, you wrote:
> > While you are at it, could you also make sure to call resize() on results
>
> Just to be picky, you want to call reserve() and not resize().
Err, sure. I guess I didn't have enough cereals this morning...
From bagnara at cs.unipr.it Sun Jan 20 10:17:36 2002
From: bagnara at cs.unipr.it (Roberto Bagnara)
Date: Sun, 20 Jan 2002 10:17:36 +0100
Subject: PATCH: remove_all() method added to the containers
Message-ID: <3C4A8B30.BE68AA03@cs.unipr.it>
Dear all,
of course there are several ways in the current version of GiNaC to erase
all the elements from a GiNaC container. However, as far as I can tell,
they are either obscure or inefficient or both (removing one element at
a time is inefficient, assigning from an empty container is obscure and
still measurably inefficient and so forth). Removing all the elements of
a container is a very common operation and forcing users to resort to
kludges is, I believe, not a very good idea. Moreover, not providing
the right method for a common operation reduces the implementor's latitude
for future choices (what if you would like to use a different representation
for containers? What if this new representation is such that the user's
kludges for erasing all the elements are seriously inefficient?)
You don't want your users to ask you whether _today_ it is best to clear
all the elements using technique A or technique B, right?
It is not by chance that all the STL containers have a clear() method.
Enough advocacy; please find below a patch adding a remove_all() method
to the containers. I feel that the name "remove_all" fits well with the others.
The patch allows `make check' to complete successfully and the documentation
looks good. I have been careful to follow your coding style. Please let me
know if I haven't been successful and I will submit a revised patch.
All the best
Roberto
--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagnara at cs.unipr.it
diff -rcp GiNaC-1.0.3.orig/doc/tutorial/ginac.texi GiNaC-1.0.3/doc/tutorial/ginac.texi
*** GiNaC-1.0.3.orig/doc/tutorial/ginac.texi Fri Dec 21 12:38:03 2001
--- GiNaC-1.0.3/doc/tutorial/ginac.texi Sat Jan 19 18:22:28 2002
*************** canonical form.
*** 1193,1198 ****
--- 1193,1199 ----
@cindex @code{prepend()}
@cindex @code{remove_first()}
@cindex @code{remove_last()}
+ @cindex @code{remove_all()}
The GiNaC class @code{lst} serves for holding a @dfn{list} of arbitrary
expressions. These are sometimes used to supply a variable number of
*************** and @code{prepend()} methods:
*** 1230,1242 ****
// ...
@end example
! Finally you can remove the first or last element of a list with
@code{remove_first()} and @code{remove_last()}:
@example
// ...
l.remove_first(); // l is now @{x, 2, y, x+y, 4*x@}
l.remove_last(); // l is now @{x, 2, y, x+y@}
@}
@end example
--- 1231,1251 ----
// ...
@end example
! You can remove the first or last element of a list with
@code{remove_first()} and @code{remove_last()}:
@example
// ...
l.remove_first(); // l is now @{x, 2, y, x+y, 4*x@}
l.remove_last(); // l is now @{x, 2, y, x+y@}
+ @end example
+
+ Finally, you can remove all the elements of a list with
+ @code{remove_all()}:
+
+ @example
+ // ...
+ l.remove_all(); // l is now empty
@}
@end example
diff -rcp GiNaC-1.0.3.orig/ginac/container.pl GiNaC-1.0.3/ginac/container.pl
*** GiNaC-1.0.3.orig/ginac/container.pl Wed Dec 19 11:54:04 2001
--- GiNaC-1.0.3/ginac/container.pl Sat Jan 19 18:19:14 2002
*************** protected:
*** 250,255 ****
--- 250,256 ----
public:
virtual ${CONTAINER} & append(const ex & b);
virtual ${CONTAINER} & remove_last(void);
+ virtual ${CONTAINER} & remove_all(void);
${PREPEND_INTERFACE}
${SORT_INTERFACE}
protected:
*************** ${CONTAINER} & ${CONTAINER}::remove_last
*** 547,552 ****
--- 548,560 ----
return *this;
}
+ ${CONTAINER} & ${CONTAINER}::remove_all(void)
+ {
+ ensure_if_modifiable();
+ seq.clear();
+ return *this;
+ }
+
${PREPEND_IMPLEMENTATION}
${SORT_IMPLEMENTATION}
From kreckel at thep.physik.uni-mainz.de Sun Jan 20 13:29:04 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Sun, 20 Jan 2002 13:29:04 +0100 (CET)
Subject: ABIisms (Was: PATCH: remove_all() method added to the containers)
In-Reply-To: <3C4A8B30.BE68AA03@cs.unipr.it>
Message-ID:
Hi,
On Sun, 20 Jan 2002, Roberto Bagnara wrote:
[...]
> public:
> virtual ${CONTAINER} & append(const ex & b);
> virtual ${CONTAINER} & remove_last(void);
> + virtual ${CONTAINER} & remove_all(void);
[...]
This changes lst's vtable layout and is therefore likely to break binary
compatibility. Whatever is decided on this, we should not put it in CVS
for this reason.
Until November we have not been nice to people w.r.t. compatibility. I
hope we manage to be a little bit more civilized from now on. The scheme
I am having in mind is currently this:
During 1.0.n, don't break binary compatibility, i.e. never set
BINARY_AGE to zero. (Of course, INTERFACE_AGE may be set to zero,
though.) Accumulate such patches as the above or Douglas' safe_bool
and throw them all into 1.1.0. Repeat the game for 1.1.n.
I know how to handle my libraries and others may know also, but let's face
it: this stuff is somewhat advanced and we don't want to run around and
explain LD_PRELOAD and friends to the people. It also helps package
maintenance on the distro side as a whole. Because then, a package libginac0
bringing with it the file $prefix/lib/libginac-1.0.so.0.2.1 can live on
its own and have other packages depend on it and later on a package
libginac1 can be added and just adds $prefix/lib/libginac-1.1.so.1.0.0 but
does not replace the old library. That one can be phased out when no
other packages depend on it downstream. Dazed and confused? Hmm, shared
library management tends to confuse people but the libtool scheme can
potentially solve all the problems, so let's enforce it a bit.
Now the real question is: How do we introduce branches in our CVS tree?
Regards
-richy.
--
Richard B. Kreckel
From markus.nullmeier at urz.uni-heidelberg.de Mon Jan 21 18:49:01 2002
From: markus.nullmeier at urz.uni-heidelberg.de (Markus Nullmeier)
Date: Mon, 21 Jan 2002 18:49:01 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To:
from "Richard B. Kreckel" at "Jan 19, 2002 04:05:27 pm"
Message-ID: <200201211749.SAA61208@aixterm7.urz.uni-heidelberg.de>
> > Make-safe patch to follow ... :)
Here it is, relative to the first patch and maybe a bit ugly.
But using "(p < 32768)" does gain a speedup of about 5% for
arguments smaller than approximately 1000. Cancelling 2
from both arguments of exquo changed the strange limiting
value of the previous patch.
Storing the binomial coefficients would certainly accelerate
the calculation, again at the expense of something like
doubled memory usage...
Regards,
Markus
--- GiNaC-1.0.3_new_bernoulli/ginac/numeric.cpp Fri Jan 18 15:48:29 2002
+++ GiNaC-1.0.3_new_bernoulli_safe/ginac/numeric.cpp Mon Jan 21 18:02:03 2002
@@ -1534,24 +1534,35 @@
// algorithm not applicable to B(2), so just store it
if (!next_r) {
+ results.push_back(); // results[0] is not used
results.push_back(cln::recip(cln::cl_RA(6)));
next_r = 4;
}
+ if (n < next_r)
+ return results[n/2];
+
+ results.reserve(n/2 + 1);
for (unsigned p = next_r; p <= n; p += 2) {
- cln::cl_I c = 1;
+ cln::cl_I c = 1; // binomial coefficients
cln::cl_RA b = cln::cl_RA(1-p)/2;
unsigned p3 = p+3;
- unsigned p2 = p+2;
unsigned pm = p-2;
- unsigned i, k;
- for (i=2, k=0; i <= pm; i += 2, k++) {
- c = cln::exquo(c * ((p3 - i)*(p2 - i)), (i - 1)*i);
- b = b + c * results[k];
- }
+ unsigned i, k, p_2;
+ // test if intermediate unsigned int results < 2^29
+ if (p < 32768)
+ for (i=2, k=1, p_2=p/2; i <= pm; i += 2, k++, p_2--) {
+ c = cln::exquo(c * ((p3 - i) * p_2), (i - 1) * k);
+ b = b + c * results[k];
+ }
+ else
+ for (i=2, k=1, p_2=p/2; i <= pm; i += 2, k++, p_2--) {
+ c = cln::exquo((c * (p3 - i)) * p_2, cln::cl_I(i - 1) * k);
+ b = b + c * results[k];
+ }
results.push_back(- b / (p + 1));
- next_r += 2;
- }
- return results[n/2 - 1];
+ }
+ next_r = n + 2;
+ return results[n/2];
}
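For orientation, the quantity these patches compute can be sketched independently of CLN. The toy version below uses the classical recurrence sum_{j=0}^{m} binom(m+1,j)*B_j = 0 in double precision; it illustrates the mathematics only, not the cached rational-arithmetic scheme of the patch above:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Naive double-precision Bernoulli numbers via the classical recurrence
//   sum_{j=0}^{m} binom(m+1, j) * B_j = 0   (m >= 1),  with B_0 = 1.
// Solving for B_m gives the loop below; binom(m+1, j) is built up
// incrementally from binom(m+1, j+1) = binom(m+1, j)*(m+1-j)/(j+1).
double bernoulli(unsigned n)
{
    std::vector<double> B(n + 1, 0.0);
    B[0] = 1.0;
    for (unsigned m = 1; m <= n; ++m) {
        double sum = 0.0, binom = 1.0;   // binom == binom(m+1, 0)
        for (unsigned j = 0; j < m; ++j) {
            sum += binom * B[j];
            binom = binom * (m + 1 - j) / (j + 1);
        }
        B[m] = -sum / (m + 1);           // binom(m+1, m) == m + 1
    }
    return B[n];
}
```

The patches above gain their speed by caching previously computed values and keeping every intermediate product inside CLN's immediate-integer range; the underlying recurrence idea is the same.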
From kreckel at thep.physik.uni-mainz.de Tue Jan 22 13:25:48 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Tue, 22 Jan 2002 13:25:48 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201211749.SAA61208@aixterm7.urz.uni-heidelberg.de>
Message-ID:
On Mon, 21 Jan 2002, Markus Nullmeier wrote:
[...]
> But using "(p < 32768)" does gain a speedup of about 5% for
[...]
> + if (p < 32768)
[...]
I suspect the condition should really be `(p < (1UL<<cl_value_len/2))' in
order to be portable across platforms...
From markus.nullmeier at urz.uni-heidelberg.de Tue Jan 22 23:58:36 2002
From: markus.nullmeier at urz.uni-heidelberg.de (Markus Nullmeier)
Date: Tue, 22 Jan 2002 23:58:36 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To:
from "Richard B. Kreckel" at "Jan 22, 2002 01:25:48 pm"
Message-ID: <200201222258.XAA61448@aixterm7.urz.uni-heidelberg.de>
>
> I suspect the condition should really be `(p < (1UL<<cl_value_len/2))' in
> order to be portable across platforms...
>
Well, I just followed the documentation which made me believe that
int -> cl_I conversion should work at least up to values of 2^29-1.
So maybe cl_value_len should be documented, as it indeed makes one
feel somewhat more secure. Now reading the headers it looks like a
weird (hypothetical?) system with alignment = 8 and sizeof(long) =
"32 bits" will break the documented behaviour :-/ Anyhow, I'm just
too lazy to incorporate this version of p^2/2 < 2^(cl_value_len-1)
into my sources and waiting for the next release...
Cheers, Markus
From kreckel at thep.physik.uni-mainz.de Wed Jan 23 11:56:13 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Wed, 23 Jan 2002 11:56:13 +0100 (CET)
Subject: Alternative bernoulli(const numeric &)
In-Reply-To: <200201222258.XAA61448@aixterm7.urz.uni-heidelberg.de>
Message-ID:
On Tue, 22 Jan 2002, Markus Nullmeier wrote:
> > I suspect the condition should really be `(p < (1UL<<cl_value_len/2))' in
> > order to be portable across platforms...
> >
>
> Well, I just followed the documentation which made me believe that
> int -> cl_I conversion should work at least up to values of 2^29-1.
Hmm, from CLN's docu:
: Small integers (typically in the range `-2^29'...`2^29-1', for 32-bit
: machines) are especially efficient, because they consume no heap
: allocation. Otherwise the distinction between these immediate integers
: (called "fixnums") and heap allocated integers (called "bignums") is
: completely transparent.
> So maybe cl_value_len should be documented, as it indeed makes one
> feel somewhat more secure. Now reading the headers it looks like a
> weird (hypothetical?) system with alignment = 8 and sizeof(long) =
> "32 bits" will break the documented behaviour :-/
??? On any machine where an address is 64 Bit (Alpha, ia64,...) and
matching alignment we store a full 32-Bit word and tag it as not being an
address (as immediate). That gives the range `-2^31'...`2^31-1'.
Pointer size and alignment==8, but sizeof(long)==4, hmmm, dunno what would
have to be changed... What do you say is gonna break there? (BTW, the
`-2^29'...`2^29-1' range should even hold for m68k, where alignment==2.
There we just have one tag bit less.)
> Anyhow, I'm just
> too lazy to incorporate this version of p^2/2 < 2^(cl_value_len-1)
> into my sources and waiting for the next release...
I have already put it into CVS with the (p<(1UL<<cl_value_len/2)) condition.
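The arithmetic behind that bound can be sanity-checked with plain integers. Everything below is an assumption for illustration: a fixnum width of 30 bits reproduces the documented -2^29..2^29-1 range of a 32-bit machine (CLN's actual constant is cl_value_len), a cutoff of the shape p < 2^(len/2) then coincides with the patch's hard-coded 32768, and the worst intermediate product (roughly p^2/2) stays below 2^(len-1):

```cpp
#include <cassert>

// Assumed fixnum width for illustration; the real constant is CLN's
// cl_value_len. 30 matches the -2^29..2^29-1 range cited for 32-bit
// machines.
const unsigned long kValueLen = 30;

// Portable form of the cutoff: keep p below 2^(len/2).
bool stays_immediate(unsigned long p)
{
    return p < (1UL << (kValueLen / 2));
}

// Largest unsigned intermediate in the inner loop is about
// (p + 3) * (p / 2), i.e. roughly p^2 / 2.
unsigned long worst_product(unsigned long p)
{
    return (p + 3) * (p / 2);
}
```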
From kreckel at thep.physik.uni-mainz.de Thu Jan 24 23:54:48 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Thu, 24 Jan 2002 23:54:48 +0100 (CET)
Subject: 1.0.4 unleashed
Message-ID:
Like all other 1.0.n releases, this one claims binary compatibility with
1.0.0. Cebix has opened a branch in CVS now where ABI-breaking patches
may go in (but is too shy to make the announcement). Anyway, the news
for 1.0.4 are:
* Speedup in expand().
* Faster Bernoulli numbers (thanks to Markus Nullmeier).
* Some minor bugfixes and documentation updates.
.rpm's already on FTP, .deb's to follow soon thru all woody-mirrors.
Enjoy
-richy.
From bagnara at cs.unipr.it Fri Jan 25 14:40:13 2002
From: bagnara at cs.unipr.it (Roberto Bagnara)
Date: Fri, 25 Jan 2002 14:40:13 +0100
Subject: Parenthesization bug
Message-ID: <3C51603D.5AB282A2@cs.unipr.it>
Dear all,
there is a bug somewhere in GiNaC whereby some non-redundant parentheses
are not output. The problem is shown below by means of a simple `ginsh'
session, but it can be reproduced equally well with C++ code.
$ ginsh
ginsh - GiNaC Interactive Shell (GiNaC V1.0.4)
__, _______ Copyright (C) 1999-2002 Johannes Gutenberg University Mainz,
(__) * | Germany. This is free software with ABSOLUTELY NO WARRANTY.
._) i N a C | You are welcome to redistribute it under certain conditions.
<-------------' For details type `warranty;'.
Type ?? for a list of help topics.
> 2*I^(1/3);
2*I^(1/3)
> (2*I)^(1/3);
2*I^(1/3)
> 2*(I^(1/3));
2*I^(1/3)
>
If this is confirmed to be a bug, we would need to develop a fix or
a workaround quite urgently.
Thanks a lot
Roberto
--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagnara at cs.unipr.it
From kreckel at thep.physik.uni-mainz.de Fri Jan 25 14:52:36 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Fri, 25 Jan 2002 14:52:36 +0100 (CET)
Subject: Parenthesization bug
In-Reply-To: <3C51603D.5AB282A2@cs.unipr.it>
Message-ID:
Hi,
On Fri, 25 Jan 2002, Roberto Bagnara wrote:
> there is a bug somewhere in GiNaC whereby some non-redundant parentheses
> are not output. The problem is shown below by means of simple `ginsh'
> session, but it can be reproduced equally well with C++ code.
>
> $ ginsh
> ginsh - GiNaC Interactive Shell (GiNaC V1.0.4)
> __, _______ Copyright (C) 1999-2002 Johannes Gutenberg University Mainz,
> (__) * | Germany. This is free software with ABSOLUTELY NO WARRANTY.
> ._) i N a C | You are welcome to redistribute it under certain conditions.
> <-------------' For details type `warranty;'.
>
> Type ?? for a list of help topics.
> > 2*I^(1/3);
> 2*I^(1/3)
> > (2*I)^(1/3);
> 2*I^(1/3)
> > 2*(I^(1/3));
> 2*I^(1/3)
> >
>
> If this is confirmed to be a bug, we would need to develop a fix or
> a workaround quite urgently.
Indeed, it's a bug. Internally, these are different objects. It just
seems to be their output, i.e. power::print(). Look at this:
> (2*I)^(1/3);
2*I^(1/3)
> print(%);
power, hash=0xbffff638, flags=0x3, nops=2
2i (numeric), hash=0x80000840, flags=0xf
1/3 (numeric), hash=0x80000020, flags=0xf
Can you look into that method?
Regards
-richy.
--
Richard B. Kreckel
From bagnara at cs.unipr.it Fri Jan 25 18:28:27 2002
From: bagnara at cs.unipr.it (Roberto Bagnara)
Date: Fri, 25 Jan 2002 18:28:27 +0100
Subject: PATCH for "Parenthesization bug"
References:
Message-ID: <3C5195BB.99D831A7@cs.unipr.it>
"Richard B. Kreckel" wrote:
>
> Hi,
>
> On Fri, 25 Jan 2002, Roberto Bagnara wrote:
> > there is a bug somewhere in GiNaC whereby some non-redundant parentheses
> > are not output. The problem is shown below by means of simple `ginsh'
> > session, but it can be reproduced equally well with C++ code.
> >
> > $ ginsh
> > ginsh - GiNaC Interactive Shell (GiNaC V1.0.4)
> > __, _______ Copyright (C) 1999-2002 Johannes Gutenberg University Mainz,
> > (__) * | Germany. This is free software with ABSOLUTELY NO WARRANTY.
> > ._) i N a C | You are welcome to redistribute it under certain conditions.
> > <-------------' For details type `warranty;'.
> >
> > Type ?? for a list of help topics.
> > > 2*I^(1/3);
> > 2*I^(1/3)
> > > (2*I)^(1/3);
> > 2*I^(1/3)
> > > 2*(I^(1/3));
> > 2*I^(1/3)
> > >
> >
> > If this is confirmed to be a bug, we would need to develop a fix or
> > a workaround quite urgently.
>
> Indeed, it's a bug. Internally, these are different objects. It just
> seems to be their output, i.e. power::print(). Look at this:
> > (2*I)^(1/3);
> 2*I^(1/3)
> > print(%);
> power, hash=0xbffff638, flags=0x3, nops=2
> 2i (numeric), hash=0x80000840, flags=0xf
> 1/3 (numeric), hash=0x80000020, flags=0xf
>
> Can you look into that method?
I checked power::print(), but this looks fine.
The problem is, I believe, in numeric::print().
A patch that solves the problem is attached:
it passes `make check' and fixes also the related bug
exemplified by the following excerpt from a ginsh session (1.0.4):
> (-I)^e;
(I)^e
What is your advice? Given that we cannot put up with these bugs,
should we install the patched version on all our machines
or should we wait for 1.0.5? Translation: do you plan to release
1.0.5 RSN? ;-)
All the best
Roberto
P.S. Did my patch adding erase_all() to the containers find its
way to the CVS branch?
--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagnara at cs.unipr.it
-------------- next part --------------
diff -rcp GiNaC-1.0.4.orig/ginac/numeric.cpp GiNaC-1.0.4/ginac/numeric.cpp
*** GiNaC-1.0.4.orig/ginac/numeric.cpp Thu Jan 24 22:40:11 2002
--- GiNaC-1.0.4/ginac/numeric.cpp Fri Jan 25 17:52:29 2002
*************** void numeric::print(const print_context
*** 403,427 ****
} else {
if (cln::zerop(r)) {
// case 2, imaginary: y*I or -y*I
! if ((precedence() <= level) && (i < 0)) {
! if (i == -1) {
! c.s << par_open+imag_sym+par_close;
! } else {
c.s << par_open;
print_real_number(c, i);
! c.s << mul_sym+imag_sym+par_close;
! }
! } else {
! if (i == 1) {
! c.s << imag_sym;
! } else {
! if (i == -1) {
! c.s << "-" << imag_sym;
! } else {
! print_real_number(c, i);
! c.s << mul_sym+imag_sym;
! }
}
}
} else {
// case 3, complex: x+y*I or x-y*I or -x+y*I or -x-y*I
--- 403,421 ----
} else {
if (cln::zerop(r)) {
// case 2, imaginary: y*I or -y*I
! if (i == 1)
! c.s << imag_sym;
! else {
! if (precedence() <= level)
c.s << par_open;
+ if (i == -1)
+ c.s << "-" << imag_sym;
+ else {
print_real_number(c, i);
! c.s << mul_sym+imag_sym;
}
+ if (precedence() <= level)
+ c.s << par_close;
}
} else {
// case 3, complex: x+y*I or x-y*I or -x+y*I or -x-y*I
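The shape of the fix can be seen in miniature without GiNaC's classes. In the hypothetical sketch below (names and precedence values are made up), a base wraps itself in parentheses whenever its own precedence does not exceed the level imposed by the enclosing power; the patch above applies that test to positive imaginary numerics like 2*I as well, instead of only to negative ones:

```cpp
#include <cassert>
#include <string>

// Toy precedence scheme (made-up values, not GiNaC's): a product like
// "2*I" prints at a lower precedence than a symbolic atom like "I".
struct printable {
    std::string text;
    int precedence;
};

// Print base^(expo), parenthesizing the base whenever its precedence
// does not exceed the level demanded by the power operator.
std::string print_power(const printable &base, const std::string &expo)
{
    const int power_level = 60;  // assumed precedence of '^'
    std::string b = (base.precedence <= power_level)
                        ? "(" + base.text + ")"
                        : base.text;
    return b + "^(" + expo + ")";
}
```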
From kreckel at thep.physik.uni-mainz.de Sat Jan 26 13:23:35 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Sat, 26 Jan 2002 13:23:35 +0100 (CET)
Subject: PATCH for "Parenthesization bug"
In-Reply-To: <3C5195BB.99D831A7@cs.unipr.it>
Message-ID:
Hi,
On Fri, 25 Jan 2002, Roberto Bagnara wrote:
> A patch that solves the problem is attached:
> it passes `make check'
Ok, but there really isn't any output checking. Originally, there was.
But then we dropped it because the low degree of predictability in
canonical ordering made it a burden to maintain.
Now that we have an input parser, I fancy an idea: we can randomly create
all sorts of rational numbers with weird exponents, rational complex bases
etc. and build an ex from it. This then gets run through the anonymous
evaluator, and we can print it into an ostringstream. Then we can apply
.to_str() on it, have the expression parsed again, and compare it with
the original. If there are remaining bugs in either the
parser or (more likely) output, this should help shake them out. Having
this in the default regression tests would guard us against future
failures, which are not unlikely to happen, a point to be proven soon...
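The proposed round trip can be sketched generically, with plain fractions standing in for GiNaC expressions; nothing below is GiNaC API, and a fixed seed keeps the check reproducible from run to run:

```cpp
#include <cassert>
#include <cstdio>
#include <random>

// Pseudo-randomly generate fractions p/q, print each one, parse the
// string back, and compare with the original. A printer/parser mismatch
// makes the round trip fail.
bool roundtrip_ok(unsigned seed, int iterations)
{
    std::mt19937 gen(seed);  // fixed seed: deterministic test runs
    std::uniform_int_distribution<int> d(-1000, 1000);
    for (int i = 0; i < iterations; ++i) {
        int p = d(gen), q = d(gen);
        if (q == 0)
            continue;        // skip invalid denominators
        char buf[32];
        std::snprintf(buf, sizeof buf, "%d/%d", p, q);
        int p2, q2;
        if (std::sscanf(buf, "%d/%d", &p2, &q2) != 2 || p2 != p || q2 != q)
            return false;
    }
    return true;
}
```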
> and fixes also the related bug
> exemplified by the following excerpt from a ginsh session (1.0.4):
>
> > (-I)^e;
> (I)^e
Applied, thanks. This one seems to have crept in in version 0.8.1.
Point proven. :-)
> What is your advice? Given that we cannot put up with these bugs,
> should we install the patched version on all our machines
> or should we wait for 1.0.5? Translation: do you plan to release
> 1.0.5 RSN? ;-)
Maybe next week.
Regards
-richy.
--
Richard B. Kreckel
From cbauer at student.physik.uni-mainz.de Sun Jan 27 21:07:03 2002
From: cbauer at student.physik.uni-mainz.de (Christian Bauer)
Date: Sun, 27 Jan 2002 21:07:03 +0100
Subject: GiNaC 1.0.5 released
Message-ID: <20020127210703.B27874@student.physik.uni-mainz.de>
Hi!
You probably won't see it on national TV but it's new, improved, and
finger-licking good. :-) GiNaC 1.0.5 is binary compatible with 1.0.x and
has
- a slightly more versatile degree()/coeff()/collect() facility
- a bugfix for the output of imaginary numbers
As usual, you can get it from
ftp://ftpthep.physik.uni-mainz.de/pub/GiNaC
Bye,
Christian
--
/ Coding on PowerPC and proud of it
\/ http://www.uni-mainz.de/~bauec002/
From wurmli at freesurf.ch Mon Jan 28 21:04:37 2002
From: wurmli at freesurf.ch (Hans Peter Würmli)
Date: Mon, 28 Jan 2002 21:04:37 +0100
Subject: Questions re GiNaC design
In-Reply-To: <20020127210703.B27874@student.physik.uni-mainz.de>
References: <20020127210703.B27874@student.physik.uni-mainz.de>
Message-ID: <20020128210437.24e6a8fa.wurmli@freesurf.ch>
Sorry if I ask questions that are answered somewhere already. I browsed through most of the tutorial, the "Introduction to the GiNaC Framework ..." and GiNaC's wishlist. My own motivation is doing algorithmic algebra rather than physics, and being able to make use of your choice of C++.
The situation I stumble upon again and again in GiNaC is that "ex" expressions seem to make implicit assumptions about the algebraic structure I am working in. It is comparable to the implicit type conversions of C (which C++ unfortunately inherited). For example, if the coefficients of a polynomial are seen as integers, it is assumed that the splitting field is contained in C. But it could also be a polynomial over a finite field, or one might be interested in p-adic extensions. I checked how GAP handles it. There you would have to declare an indeterminate x as an indeterminate over some specific ring.
Now the question: why did you choose the expression ex to be unspecified? Specified expressions would probably have a definition like
template <class T> class ex { ... }
with a constructor
ex(const lst & s, ...)
that would require a list of symbols, being the indeterminates. ex would then be implemented for abstract structures T like groups, rings, fields, algebras and specialisations like integers, rationals, finite fields etc. The syntax and substitution rules of the expressions could then differ according to what is allowed and possible in T. The semantics of such expressions would also always be clear and would allow for specialised algorithms.
I ask this question because I am interested to know how you vision the future evolution of GiNaC. I cannot commit myself, but will be happy to contribute as much (or little) as I can.
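A minimal sketch of the templated design being proposed, with hypothetical names throughout (this is not GiNaC's interface): parameterizing a polynomial by its coefficient ring T lets the same Horner evaluation run over the integers or over a toy finite field, and makes accidental mixing of rings a compile-time error:

```cpp
#include <vector>

// Hypothetical ring-parameterized polynomial (illustration only).
template <class T>
struct poly {
    std::vector<T> c;                 // c[i] is the coefficient of x^i
    T operator()(const T &x) const {  // Horner evaluation inside T
        T r{};                        // additive identity of T
        for (auto it = c.rbegin(); it != c.rend(); ++it)
            r = r * x + *it;
        return r;
    }
};

// A toy ring parameter: the finite field Z/5Z.
struct z5 {
    int v;
    z5(int n = 0) : v(((n % 5) + 5) % 5) {}
    z5 operator*(z5 o) const { return z5(v * o.v); }
    z5 operator+(z5 o) const { return z5(v + o.v); }
};
```

The point of the design is visible in the types: poly<z5> and poly<int> cannot be mixed by accident, and each carries its own substitution and simplification rules.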
From frink at thep.physik.uni-mainz.de Tue Jan 29 00:08:39 2002
From: frink at thep.physik.uni-mainz.de (Alexander Frink)
Date: Tue, 29 Jan 2002 00:08:39 +0100 (CET)
Subject: PATCH for "Parenthesization bug"
In-Reply-To:
Message-ID:
On Sat, 26 Jan 2002, Richard B. Kreckel wrote:
> Now, that we have an input parser, I fancy an idea: We can randomly create
> all sorts of rational numbers with weird exponents, rational complex bases
> etc. and build an ex from it. This has then been run through the
If you replace "random" by "pseudo-random", i.e. deterministic,
then this sounds like a good idea.
However I dislike a "make check" which passes with a probability
of 95% and mysteriously fails in 5%.
Alex
--
Alexander Frink E-Mail: Alexander.Frink at Uni-Mainz.DE
Institut fuer Physik Phone: +49-6131-3923391
Johannes-Gutenberg-Universitaet
D-55099 Mainz, Germany
From kreckel at thep.physik.uni-mainz.de Tue Jan 29 11:06:16 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Tue, 29 Jan 2002 11:06:16 +0100 (CET)
Subject: PATCH for "Parenthesization bug"
In-Reply-To:
Message-ID:
Hi,
On Tue, 29 Jan 2002, Alexander Frink wrote:
> On Sat, 26 Jan 2002, Richard B. Kreckel wrote:
> > Now, that we have an input parser, I fancy an idea: We can randomly create
> > all sorts of rational numbers with weird exponents, rational complex bases
> > etc. and build an ex from it. This has then been run through the
>
> If you replace "random" by "pseudo-random", i.e. deterministic,
> then this sounds like a good idea.
While nitpicking: if you give us some real random number source it
shouldn't make a difference. :-)
> However I dislike a "make check" which passes with a probability
> of 95% and mysteriously fails in 5%.
Hmmmmm??? Why should it fail at all? The whole point about the default
output and the input-parser (a.k.a. ginsh-input syntax) was to be mutually
compatible.
Saludos
-richy.
--
Richard B. Kreckel
From kreckel at thep.physik.uni-mainz.de Tue Jan 29 12:45:57 2002
From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel)
Date: Tue, 29 Jan 2002 12:45:57 +0100 (CET)
Subject: Questions re GiNaC design
In-Reply-To: <20020128210437.24e6a8fa.wurmli@freesurf.ch>
Message-ID:
Hi,
On Mon, 28 Jan 2002, Hans Peter Würmli wrote:
> Sorry, should I ask questions that are answered somewhere. I browsed
> through most of the tutorial, the "Introduction to the GiNaC Framework
> ..." and GiNaC's wishlist. My own motivation is rather doing algorithmic
> algebra than physics and being able to make use of your choice of C++.
>
> The situation I stumble again and again in GiNaC is that
> "ex" expressions seem to make implicit assumption about the algebraic
> structure I am in. It is comparable to the implicit type conversions of
> C (that C++ unfortunately inherited). For example, if the coefficients
> of a polynomial are seen as integers, it is assumed that the splitting
> field is contained in C. But it could also be a polynomial over a finite
> field, or one might be interested in p-adic extensions. I checked, how
> GAP handles it. There you would have to declare an indeterminate x as
> indeterminate over some specific ring.
>
> Now the question: why did you choose the expression ex to be unspecified?
> Specified expressions would probably have a definition like
>
> template <class T> class ex { ... }
>
> with a constructor
>
> ex(const lst & s, ...)
>
> that would require a list of symbols, being the indeterminates. ex
> would then be implemented for abstract structures T like groups, rings,
> fields, algebras and specialisations like integers, rationals, finite
> fields etc. The syntax and substitution rules of the expressions could
> then differ according to what is allowed and possible in T. The
> semantics of such expressions would also always be clear and would allow
> for specialised algorithms.
>
> I ask this question because I am interested to know how you vision the
> future evolution of GiNaC. I cannot commit myself, but will be happy to
> contribute as much (or little) as I can.
Good question. Such a hierarchic algebraic approach would make some
things clearer. Mathematicians frequently seem to favor it but they tend
to sweep the implementation problems under the rug. Why didn't we do it?
Because designing and implementing such a thing would have been much more
work and we needed to use the library and have it run fast and we simply
had not enough spare man-years lying around.
Richard Fateman once wrote an interesting comparison between the two
approaches: .
Regards
-richy.
--
Richard B. Kreckel
From wurmli at freesurf.ch Wed Jan 30 19:12:27 2002
From: wurmli at freesurf.ch (Hans Peter Würmli)
Date: Wed, 30 Jan 2002 19:12:27 +0100
Subject: Questions re GiNaC design
In-Reply-To:
References: <20020128210437.24e6a8fa.wurmli@freesurf.ch>
Message-ID: <20020130191227.5833b641.wurmli@freesurf.ch>
On Tue, 29 Jan 2002 12:45:57 +0100 (CET)
"Richard B. Kreckel" wrote:
> >
> > Now the question: why did you choose the expression ex to be unspecified?
>
> Good question. Such a hierarchic algebraic approach would make some
> things clearer. Mathematicians frequently seem to favor it but they tend
> to sweep the implementation problems under the rug. Why didn't we do it?
> Because designing and implementing such a thing would have been much more
> work and we needed to use the library and have it run fast and we simply
> had not enough spare man-years lying around.
>
> Richard Fateman once wrote an interesting comparison between the two
> approaches: .
>
Thanks for the reference. It was interesting reading, but I was not totally sure whether Richard Fateman also expects too much, e.g. when he writes "... Could we anticipate new results such as the invention of the Risch integration algorithm ..." (With a diagonal argument you could probably disprove such a conjecture, if it were made.) But also on the practical side, he does not seem to realise that many "facts" of mathematics cannot be constructed, either because no algorithm has been found or because no algorithm can be found. Even the simple test whether something is zero can only be answered in special situations. (You have to give an algorithm that finishes, not to speak of the complexity it might have.)
So far I have found for most of my little problems a solution, maybe not an elegant one, but one that seems to work.
Maybe one pattern matching situation could find a different implementation in a future release (I am aware of your statement that pattern matching is purely syntactic):
find(x^2, pow(x,wild())) will return {x^2}
find(x, pow(x,wild())) will return {}
whereas
find(x, pow(x,1)) will return {x}
I would prefer it if the latter two were equal, i.e. both == {x}.
Cheers, H.P.
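A toy model of why the three calls behave that way under purely syntactic matching (none of this is GiNaC internals): pow(x,1) is evaluated to x before matching ever starts, so its pattern is literally the symbol x, whereas pow(x,wild()) survives as a power node, and a bare symbol never matches a power node:

```cpp
#include <string>

// Minimal expression node: only the top-level kind matters here.
struct node {
    std::string kind;  // "symbol" or "power"
};

// Automatic evaluation: x^1 collapses to x before any matching is done.
node make_pow(const node &base, int expo)
{
    if (expo == 1)
        return base;
    return node{"power"};
}

// Purely syntactic matching: the top-level node kinds must agree.
bool matches(const node &subject, const node &pattern)
{
    return subject.kind == pattern.kind;
}
```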