From stefanw at fis.unipr.it Mon Jul 2 16:04:17 2001 From: stefanw at fis.unipr.it (Stefan Weinzierl) Date: Mon, 2 Jul 2001 16:04:17 +0200 (CEST) Subject: Wishlist In-Reply-To: Message-ID:

On Fri, 29 Jun 2001, Alexander Frink wrote: > I expect similar results for a GiNaC in double-precision mode. > > > Alternatively emit the file, automatically add the necessary boilerplate, > > compile it and link it back in using dlopen(3). On systems that support > > dlopen, such as Linux, with a little effort the whole procedure can be > > entirely automated, as far as I can see. > > I think it is worth writing a prototype for this and including > it in the distribution (or at least documentation) if it is generic > enough. cint can use a similar trick (#pragma compile).

Hi, thinking about it, I believe that an automated approach based on dlopen is indeed the better alternative.

Stefan

From kreckel at thep.physik.uni-mainz.de Mon Jul 2 20:15:22 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Mon, 2 Jul 2001 20:15:22 +0200 (CEST) Subject: Wishlist In-Reply-To: Message-ID:

On Mon, 2 Jul 2001, Stefan Weinzierl wrote: > On Fri, 29 Jun 2001, Alexander Frink wrote: > > > I expect similar results for a GiNaC in double-precision mode.

Well, it should be somewhat faster. Those for-loops in interpreted CAS are so incredibly sloooowww... Just assembling those matrices for the Lewis-Wester checks takes a while in Maple and MuPAD whereas it is a no-timer in GiNaC.

> > > Alternatively emit the file, automatically add the necessary boilerplate, > > > compile it and link it back in using dlopen(3). On systems that support > > > dlopen, such as Linux, with a little effort the whole procedure can be > > > entirely automated, as far as I can see. > > > > I think it is worth writing a prototype for this and including > > it in the distribution (or at least documentation) if it is generic > > enough. cint can use a similar trick (#pragma compile). > > > > > Hi, > > thinking about it, I believe that an automated approach based on dlopen is > indeed the better alternative.

Maybe someone should research this libltdl-thing which comes with the libtool distribution. From the README:

: This is GNU libltdl, a system independent dlopen wrapper for GNU libtool.
:
: It supports the following dlopen interfaces:
: * dlopen (Solaris, Linux and various BSD flavors)
: * shl_load (HP-UX)
: * LoadLibrary (Win16 and Win32)
: * load_add_on (BeOS)
: * GNU DLD (emulates dynamic linking for static libraries)
: * libtool's dlpreopen

Regards -richy. -- Richard Kreckel
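(For reference, a minimal sketch of the compile-and-dlopen round trip being discussed. It is only an illustration: the generated function, the temporary file names and the compiler invocation are assumptions of mine, not GiNaC's actual code-output facility. On Linux the program itself has to be linked with -ldl.)

    #include <cstdio>
    #include <cstdlib>
    #include <dlfcn.h>

    int main()
    {
        // 1) Emit the generated code (here just a hard-coded toy function).
        std::FILE *src = std::fopen("/tmp/generated.c", "w");
        if (src == 0)
            return 1;
        std::fprintf(src, "double f(double x) { return x*x*x + 2.0*x; }\n");
        std::fclose(src);

        // 2) Compile it into a shared object (any needed boilerplate would go in as well).
        if (std::system("cc -shared -fPIC -o /tmp/generated.so /tmp/generated.c") != 0)
            return 1;

        // 3) Link it back in at run time and call it.
        void *handle = dlopen("/tmp/generated.so", RTLD_NOW);
        if (handle == 0)
            return 1;
        typedef double (*fcn_t)(double);
        fcn_t f = (fcn_t)dlsym(handle, "f");
        if (f == 0)
            return 1;
        std::printf("f(3) = %g\n", f(3.0));
        dlclose(handle);
        return 0;
    }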
From abele at kingmemo.de Thu Jul 5 18:41:56 2001 From: abele at kingmemo.de (Wolfgang Abele) Date: Thu, 5 Jul 2001 12:41:56 -0400 Subject: Factorization Message-ID: <01070512415600.01185@localhost.localdomain>

Hi everyone,

can you tell me what's the current situation regarding polynomial factorization in GiNaC? According to your wishlist, it's still missing, but I've just read some mails in this list saying that some people are working on the task. I'm a student of mathematics, and my diploma thesis is about polynomial factorization. I'm interested in both factorization in Z[x] and algebraic extensions, and I wonder if I could implement some algorithms or contribute to any ongoing projects.

Wolfgang

By the way, I've found out how to declare a polynomial ring over modular integers in CLN, so I don't need an example any more.

From kreckel at thep.physik.uni-mainz.de Fri Jul 6 19:36:00 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Fri, 6 Jul 2001 19:36:00 +0200 (CEST) Subject: Factorization In-Reply-To: <01070512415600.01185@localhost.localdomain> Message-ID:

Hi there,

On Thu, 5 Jul 2001, Wolfgang Abele wrote: > can you tell me what's the current situation regarding polynomial > factorization in GiNaC? According to your wishlist, it's still missing, but > I've just read some mails in this list saying that some people are working on > the task.

Not that I know of...

> I'm a student of mathematics, and my diploma thesis is about polynomial > factorization. I'm interested in both factorization in Z[x] and algebraic > extensions, and I wonder if I could implement some algorithms or contribute > to any ongoing projects.

Factorization in Z[x] is something that would be somewhat outside the normal GiNaC usage. As you have already found out there is no support for univariate polynomials over, say, Z. The reason for this is threefold: 1) for our physics applications here this is uninteresting, 2) we needed the more general expressions anyways and 3) implementation-wise this is the more trivial case anyway and already cleanly implemented in other libraries, CLN for instance. However, factorization in GiNaC would become really interesting as soon as one considers all the lifting up to multivariate polynomials over Z (and maybe algebraic extensions but I think these are not the main difficulty). If this is in any way tractable, I would suggest not investing too much time into factorization over Zp[x] by maybe implementing this in CLN -- which by the way would be a good place for factorization. Instead, maybe what should be done is use Victor Shoup's GPL'ed library NTL which seems to be the workhorse in this field. But then again, I am not an expert in this field and would be glad to be convinced otherwise. Suggestions?

> By the way, I've found out how to declare a polynomial ring over modular > integers in CLN, so I don't need an example any more.

Cool!

Regards -richy. -- Richard Kreckel

From abele at kingmemo.de Sun Jul 8 05:03:13 2001 From: abele at kingmemo.de (Wolfgang Abele) Date: Sat, 7 Jul 2001 23:03:13 -0400 Subject: Factorization In-Reply-To: References: Message-ID: <01070713044200.01341@localhost.localdomain>

On Friday, 6 July 2001 13:36, you wrote: > time into factorization over Zp[x] by maybe implementing this in CLN -- > which by the way would be a good place for factorization. Instead, maybe > what should be done is use Victor Shoup's GPL'ed library NTL which seems > to be the workhorse in this field.

I've played around with NTL a bit, and once you've got the hang of using those numerous conversions, I find it quite easy to work with. When it comes to factoring polynomials over Z[x] or Zp[x], NTL is the best tool you can get. So you could do a lot worse than integrate NTL in GiNaC. I don't know, though, how this integration should be done technically since NTL uses its own number classes that may conflict with CLN's. Also, NTL doesn't support multivariate polynomials, calculations in Q, and algebraic extensions of Q.

As for the ginsh installation, yes, you were right. The rpms worked fine. Don't know why I didn't use them in the first place.

Wolfgang

From kreckel at thep.physik.uni-mainz.de Sun Jul 8 17:04:22 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Sun, 8 Jul 2001 17:04:22 +0200 (CEST) Subject: Factorization In-Reply-To: <01070713044200.01341@localhost.localdomain> Message-ID:

Hi there,

On Sat, 7 Jul 2001, Wolfgang Abele wrote: > I've played around with NTL a bit, and once you've got the hang of using > those numerous conversions, I find it quite easy to work with. When it comes > to factoring polynomials over Z[x] or Zp[x], NTL is the best tool you can > get. So you could do a lot worse than integrate NTL in GiNaC. I don't know, > though, how this integration should be done technically since NTL uses its > own number classes that may conflict with CLN's.

That should not be a real problem. The newest version of NTL is fully powered by GMP, so the underlying representation is the same. Also, Victor is wise enough to target ANSI C++ and he even has wrapped all his bases into a namespace. Care has to be taken for a couple of exceptions like CLN's immediate data types and so on but if serious interest arises I could provide reasonable adaptor stuff.

> Also, NTL doesn't support multivariate polynomials, calculations in Q, and > algebraic extensions of Q.

Q is a no-brainer once the lifting is in place. It is the latter which we know nothing about over here. Are algebraic extensions really difficult? I remember Bernard Parisse once claimed they are not. Bernard?

Regards -richy. -- Richard Kreckel

From Bernard.Parisse at wanadoo.fr Sun Jul 8 19:56:34 2001 From: Bernard.Parisse at wanadoo.fr (Bernard Parisse) Date: Sun, 08 Jul 2001 18:56:34 +0100 Subject: Factorization References: Message-ID: <3B489ED2.E55E2802@wanadoo.fr>

"Richard B. Kreckel" wrote: > > > Also, NTL doesn't support multivariate polynomials, calculations in Q, and > > algebraic extensions of Q. > > Q is a no-brainer once the lifting is in place. It is the latter which we > know nothing about over here. Are algebraic extensions really difficult? > I remember Bernard Parisse once claimed they are not. Bernard?

Factoring over an algebraic extension is indeed not difficult. The algorithm can be found for example in Henri Cohen's excellent book on number theory (see Algorithm 3.6.4 there). In short, if P(X,Y) is a polynomial in X where Y denotes the algebraic extension, and Q(Y) the minimal polynomial of Y, you compute the resultant N(X) with respect to Y of P(X+kY,Y) and Q(Y), where k is an integer such that the resultant N is square-free. Then the factors of P(X,Y) are the gcd of the factors of N(X+kY) and Q(Y). The implementation in giac is in the gausspol.cc file (functions ext_factor and algfactor).

BTW, I have not checked NTL recently, but last year, it did not implement the best reconstruction algorithm which is AFAIK the knapsack algorithm (based on LLL). Did that change?
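(A tiny worked example of this recipe -- my own illustration, not from the thread: take P(X) = X^2 - 2, to be factored over the extension Q(Y) with minimal polynomial Q(Y) = Y^2 - 2.)

    With k = 1 the resultant Res_Y(P(X+Y), Q(Y)) = X^2*(X^2 - 8) is not square-free,
    so take k = 2:

        N(X) = Res_Y(P(X+2Y), Q(Y)) = (X^2 + 6)^2 - 32*X^2 = (X^2 - 2)*(X^2 - 18)

    Factoring N over Z and taking gcds with P modulo Q(Y) (in the variant I know,
    the factors of N are evaluated at X-kY for this step):

        gcd(P(X), (X - 2Y)^2 - 2)  = X - Y
        gcd(P(X), (X - 2Y)^2 - 18) = X + Y

    so one recovers the factorization X^2 - 2 = (X - Y)*(X + Y) over Q(Y).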
From abele at kingmemo.de Mon Jul 9 09:05:00 2001 From: abele at kingmemo.de (Wolfgang Abele) Date: Mon, 9 Jul 2001 03:05:00 -0400 Subject: Factorization In-Reply-To: <3B489ED2.E55E2802@wanadoo.fr> References: <3B489ED2.E55E2802@wanadoo.fr> Message-ID: <01070903050000.02430@localhost.localdomain>

On Sunday, 8 July 2001 13:56, you wrote: > Factoring over an algebraic extension is indeed not difficult. > In short, if P(X,Y) is a polynomial in X where Y denotes > the algebraic extension, and Q(Y) the minimal polynomial of Y, > you compute the resultant N(X) with respect to Y of P(X+kY,Y) and Q(Y), > where k is an integer such that the resultant N is square-free. Then > the factors of P(X,Y) are the gcd of the factors of N(X+kY) and Q(Y).

This is the standard Trager algorithm. Like you're saying, you need to have a gcd over extensions and resultant as subroutines. If Richy provides that adaptor stuff, I'll try and implement Trager. If anybody's interested in the more difficult multivariate factorization I could provide him or her with a step-by-step guide to start with.

Incidentally, the problem with Trager is that it soon becomes inefficient with high degree polynomials. This is because the polynomial's coefficients and degree become much bigger in the norm polynomial. Also, the norm tends to have many modular factors. People are currently looking into whether Trager can be improved by the new knapsack algorithm, either through faster factorization of the norm or a direct application to the algebraic extensions.

> BTW, I have not checked NTL recently, but last year, it did not > implement the best reconstruction algorithm which is AFAIK the knapsack > algorithm (based on LLL). Did that change?

There is an implementation for NTL written by Paul Zimmermann. It's linked on the NTL web site.

Regards, Wolfgang

From stefanw at fis.unipr.it Mon Jul 9 10:09:37 2001 From: stefanw at fis.unipr.it (Stefan Weinzierl) Date: Mon, 9 Jul 2001 10:09:37 +0200 (CEST) Subject: Factorization In-Reply-To: <01070903050000.02430@localhost.localdomain> Message-ID:

Gentlemen,

we wrote some small routines which integrate the factorization of univariate polynomials into the GiNaC framework. The hard work is done by the NTL library; we just wrote the necessary conversion routines for (possibly very long) integers and a function

    ex polyfactor( const ex &PolyIn, const symbol &x )

which factorizes PolyIn if it is a univariate polynomial in x. The source code is in the latest version of gTybalt (0.0.6) and is quite independent of gTybalt. Feel free to use this code; we don't have any plans to move on to multivariate polynomials.

Stefan

From kreckel at thep.physik.uni-mainz.de Mon Jul 9 11:15:19 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Mon, 9 Jul 2001 11:15:19 +0200 (CEST) Subject: Factorization In-Reply-To: <01070903050000.02430@localhost.localdomain> Message-ID:

On Mon, 9 Jul 2001, Wolfgang Abele wrote: > This is the standard Trager algorithm. Like you're saying, you need to have a > gcd over extensions and resultant as subroutines. If Richy provides that > adaptor stuff, I'll try and implement Trager. If anybody's interested in the > more difficult multivariate factorization I could provide him or her with a > step-by-step guide to start with.

Err,... the problem is that we never bothered with univariate polynomials in GiNaC. You can of course declare them using the sparse general representation provided by classes `add' and `mul'. Then you need a conversion routine from NTL's data type to this one and vice-versa. Is that what you are looking for? (It seems like Stefan has written this already; I'll look into it this weekend.) However, nothing prevents people from poking multivariate polynomials into such a factorizer. Hence, the general multivariate stuff would be what is really suited for GiNaC. But if you think that doing univariate first is the right thing to do in order to get started with factorization, then please go ahead!

Two remarks: there is currently no class that represents algebraic extensions. Representation is another no-brainer, as far as I can see, since it should just hold one expression which represents the zero. Also, our GCD routines are not prepared for extensions. Is that needed? Is it difficult???

Regards -richy. -- Richard Kreckel
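(A sketch of how such a routine might be called from a GiNaC program. Only the signature is taken from Stefan's mail; the forward declaration, the example polynomial and the expected output are my own assumptions, and linking of course requires the actual implementation from gTybalt 0.0.6.)

    #include <iostream>
    #include <ginac/ginac.h>
    using namespace GiNaC;

    // Declared as given in Stefan's mail; the implementation ships with
    // gTybalt 0.0.6 and delegates the factorization over Z to NTL.
    ex polyfactor(const ex &PolyIn, const symbol &x);

    int main()
    {
        symbol x("x");
        ex p = pow(x, 4) - 1;
        // Hypothetically prints something like (x-1)*(x+1)*(1+x^2).
        std::cout << polyfactor(p, x) << std::endl;
        return 0;
    }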
From abele at kingmemo.de Tue Jul 10 20:27:59 2001 From: abele at kingmemo.de (Wolfgang Abele) Date: Tue, 10 Jul 2001 14:27:59 -0400 Subject: Factorization In-Reply-To: References: Message-ID: <01071012524000.02972@localhost.localdomain>

Can anybody tell me how to include Stefan's conversion routines from gtybalt in GiNaC? Should I write a symbolic function or add a class or copy the routines into normal.cpp and normal.h and 'make' GiNaC again? (The latter won't work for me, though, because I get those ginsh error messages no matter if I use readline 4.1 or 4.2 (the bug is supposedly fixed in 4.2))

>>Hence, the general multivariate stuff would be what is really suited for >>GiNaC. But if you think that doing univariate first is the right thing to >>do in order to get started with factorization

Multivariate factorization is always reduced to univariate factorization. So you've got to have it in GiNaC, even if it's just a subroutine.

>>there is currently no class that represents algebraic >>extensions. Representation is another no-brainer, as far as I can see, >>since it should just hold one expression which represents the zero.

Yes, that's right. Computing the polynomial remainder of a division is already implemented, isn't it? Later we also may need a routine to compute the primitive element for a tower of extensions.

>>our GCD routines are not prepared for extensions. Is that needed?

We would need one as a subroutine for Trager. Trager, by the way, can handle multivariate polynomials as well.

>> Is it difficult???

The algorithm isn't, which doesn't imply that it's also easy to implement, at least not as far as I'm concerned.

Servus, Wolfgang

From mriedel at neuearbeit.de Wed Jul 11 23:02:40 2001 From: mriedel at neuearbeit.de (Marko Riedel) Date: Wed, 11 Jul 2001 23:02:40 +0200 (CEST) Subject: Indices and differentiation. CLN. Message-ID: <15180.43556.534471.830673@linuxsexi.neuearbeit.de>

Greetings. Two questions:

1. Could you make precompiled CLN library binaries for Linux (RPM format) available somewhere on the web?

2. What is wrong here? I downloaded and installed GiNaC-0.9.1-1.i386.rpm from ftp://ftpthep.physik.uni-mainz.de/pub/GiNaC/rpm/ and I downloaded and compiled cln-1.1.1.tar.gz.

riedel at linuxfast:qselav > cat test.cpp
#include <iostream>
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

static symbol g("g");
static symbol z("z");

int main(int argc, char **argv){
    ex T;

    T=indexed(g, idx(3, 1))*pow(z, 3);
    cout << T.diff(z) << endl;
}
riedel at linuxfast:qselav > c++ test.cpp -o testdiff -lcln -lginac
/usr/local/lib/libginac.so: undefined reference to `__dynamic_cast_2'
collect2: ld returned 1 exit status
riedel at linuxfast:qselav > ls -l /usr/local/lib/libginac.so
lrwxrwxrwx 1 root root 26 Jul 11 20:24 /usr/local/lib/libginac.so -> /usr/lib/libginac-0.9.so.1
riedel at linuxfast:qselav >

Best regards, Marko Riedel

From kreckel at thep.physik.uni-mainz.de Wed Jul 11 23:30:06 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Wed, 11 Jul 2001 23:30:06 +0200 (CEST) Subject: Indices and differentiation. CLN. In-Reply-To: <15180.43556.534471.830673@linuxsexi.neuearbeit.de> Message-ID:

Hi Marko,

On Wed, 11 Jul 2001, Marko Riedel wrote: > 1. Could you make precompiled CLN library binaries for Linux (RPM > format) available somewhere on the web?

They are already packaged at:

> 2. What is wrong here? I downloaded and installed > GiNaC-0.9.1-1.i386.rpm from > ftp://ftpthep.physik.uni-mainz.de/pub/GiNaC/rpm/ and I downloaded and > compiled cln-1.1.1.tar.gz.
>
> riedel at linuxfast:qselav > cat test.cpp
> #include <iostream>
> #include <ginac/ginac.h>
> using namespace std;
> using namespace GiNaC;
>
> static symbol g("g");
> static symbol z("z");
>
> int main(int argc, char **argv){
>     ex T;
>
>     T=indexed(g, idx(3, 1))*pow(z, 3);
>     cout << T.diff(z) << endl;
> }
> riedel at linuxfast:qselav > c++ test.cpp -o testdiff -lcln -lginac
> /usr/local/lib/libginac.so: undefined reference to `__dynamic_cast_2'
> collect2: ld returned 1 exit status
> riedel at linuxfast:qselav > ls -l /usr/local/lib/libginac.so
> lrwxrwxrwx 1 root root 26 Jul 11 20:24 /usr/local/lib/libginac.so -> /usr/lib/libginac-0.9.so.1
> riedel at linuxfast:qselav >

Probably a compiler error. Which is your version of GCC? EGCS is having some problems with symbols that it cannot find for some obscure reason. We recommend using GCC version 2.95.x or 3.0.

Regards -richy. -- Richard Kreckel

From kreckel at thep.physik.uni-mainz.de Thu Jul 12 18:13:08 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Thu, 12 Jul 2001 18:13:08 +0200 (CEST) Subject: 3/5 vs. \frac{3}{5} Message-ID:

Do Hoang Son just raised the issue of why in print_latex context rationals are not typeset as \frac{3}{5} but instead as plain 3/5. Before implementing this I would like to ask if anybody has any objections.

Regards -richy. -- Richard Kreckel

From stefanw at fis.unipr.it Fri Jul 13 08:50:59 2001 From: stefanw at fis.unipr.it (Stefan Weinzierl) Date: Fri, 13 Jul 2001 08:50:59 +0200 (CEST) Subject: 3/5 vs. \frac{3}{5} In-Reply-To: Message-ID:

Hi,

I think \frac{3}{5} would indeed be better than 3/5. Originally I didn't bother because rationals are printed using CLN's printing routine. Therefore no objections from my side; TeXmacs should be able to handle \frac{3}{5} and it looks even nicer.

Stefan

On Thu, 12 Jul 2001, Richard B. Kreckel wrote: > Do Hoang Son just raised the issue of why in print_latex context rationals > are not typeset as \frac{3}{5} but instead as plain 3/5. Before > implementing this I would like to ask if anybody has any objections. > > Regards > -richy. > -- > Richard Kreckel

From cbauer at student.physik.uni-mainz.de Fri Jul 13 13:50:05 2001 From: cbauer at student.physik.uni-mainz.de (Christian Bauer) Date: Fri, 13 Jul 2001 13:50:05 +0200 Subject: 3/5 vs. \frac{3}{5} In-Reply-To: References: Message-ID: <20010713135005.C6641@iphcip1.physik.uni-mainz.de>

Hi!

On Fri, Jul 13, 2001 at 08:50:59AM +0200, Stefan Weinzierl wrote: > I think \frac{3}{5} would indeed be better than 3/5.

And maybe we should do this for polynomial fractions as well (i.e. collect all positive and negative powers in a mul and output a \frac unless they are all positive or all negative)?

Bye, Christian -- / Coding on PowerPC and proud of it \/ http://www.uni-mainz.de/~bauec002/
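(For reference, a minimal program exhibiting the behaviour under discussion. The printing call is a sketch; the exact interface of the GiNaC version of that time may differ.)

    #include <iostream>
    #include <ginac/ginac.h>
    using namespace GiNaC;

    int main()
    {
        ex q = numeric(3, 5);              // the rational number 3/5
        q.print(print_latex(std::cout));   // currently prints "3/5"
        std::cout << std::endl;
        // The proposal: emit "\frac{3}{5}" here instead, and (as Christian
        // suggests) possibly a \frac{...}{...} for polynomial quotients too.
        return 0;
    }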
From duraid at fl.net.au Mon Jul 23 07:17:16 2001 From: duraid at fl.net.au (Duraid Madina) Date: Mon, 23 Jul 2001 15:17:16 +1000 Subject: A move away from CLN? Message-ID: <5.1.0.14.2.20010723150929.02af0db0@pop.syd.fl.net.au>

Hi all,

Please forgive my ignorance, but is there any chance GiNaC might move away from CLN and in its place use a "softer" library such as NTL? (http://www.shoup.net/ntl/) At least then we might have a chance of moving away from (or at least not kneeling before) Mark Mitchell and his GCC henchmen ;-) What sort of obstacles would such a 'port' face? Just how many innocent undergrad students would need to be tortured for such a feat to occur?

Disappointed with GCC, (albeit deliriously happy with GiNaC)

Duraid

From kreckel at thep.physik.uni-mainz.de Mon Jul 23 16:06:31 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Mon, 23 Jul 2001 16:06:31 +0200 (CEST) Subject: A move away from CLN? In-Reply-To: <5.1.0.14.2.20010723150929.02af0db0@pop.syd.fl.net.au> Message-ID:

On Mon, 23 Jul 2001, Duraid Madina wrote: > Hi all,

Hi Duraid, nice meeting you again!

> Please forgive my ignorance, but is there any chance GiNaC might move away > from CLN and in its place use a "softer" library such as NTL? > (http://www.shoup.net/ntl/) At least then we might have a chance of moving > away from (or at least not kneeling before) Mark Mitchell and his GCC > henchmen ;-)

What's so wrong about Mark and his henchmen? In my impression they are producing an excellent compiler. Both GiNaC and CLN should compile fine with GCC-3.0. (CLN-1.1.1 has compilation problems on non-x86 platforms with GCC-3.0, but I am working on these and plan to release 1.1.2 this week.) What is the exact problem? What alternative compiler are you having in mind?

Note that CLN is ideally suited as a basis for computer algebra systems, mainly for three reasons:
1) Immediate types. An integer with absolute value smaller than 2^29 is immediate, not heap-allocated. Saves one indirection.
2) Honors the injection of integers into rationals automatically, many aspects are more algebraic than in any other comparable library.
3) Reference-counted memory management. This works seamlessly with GiNaC's memory management. Compare this with MuPAD, where the memory management occasionally clashes with that from the underlying PARI library or with Magma, which occasionally blows up out of the blue sky because of interferences with some Kant remnants.

> What sort of obstacles would such a 'port' face? Just how many innocent > undergrad students would need to be tortured for such a feat to occur?

On the source-front only files numeric.h and numeric.cpp would have to be touched. But quite a number of functions would need to be implemented in GiNaC as opposed to delegating them to CLN. Hmm, I haven't gone through them and compared them with NTL's capabilities. Maybe one half or three undergrad students? Do you have some spare undergrads? ;-)

There is of course also the packaging-front: NTL has no suitable packaging, in my opinion it badly needs to be libtoolized!

> Disappointed with GCC, (albeit deliriously happy with GiNaC)

Oh, you shameless flatterer! Seriously, if making things work on another compiler is all you are concerned with, I can assure you that CLN is not so non-portable. Here is a demo that it can be done with moderate effort: . Be warned though: On non-GCC you must probably compile CLN with "-DNO_ASM -DNO_PROVIDE_REQUIRE", and build a static library only. By the way, all this nuisance is in order to prevent this: from happening. Even then you can expect a fair number of compiler-sillinesses (no operator-> on iterators and bullshit of this sort) to happen. If you then still find CLN sillinesses I'd definitely like to hear about them and see if they can be fixed.

Cheers -richy. -- Richard B. Kreckel

From duraid at fl.net.au Tue Jul 24 06:23:49 2001 From: duraid at fl.net.au (Duraid Madina) Date: Tue, 24 Jul 2001 14:23:49 +1000 (EST) Subject: A move away from CLN? In-Reply-To: from "Richard B. Kreckel" at "Jul 23, 2001 4: 6:31 pm" Message-ID: <200107240423.OAA01241@jander.fl.net.au>

> What's so wrong about Mark and his henchmen? In my impression they are > producing an excellent compiler.

You must be living in x86 land, then. We poor souls stuck with alpha and IA64 are not so lucky. (If you want to know what I mean, try searching for some combination of {stupid, insane, broken, disgusting, alpha, ia64} in the GCC mailing list archive ;-)

> Both GiNaC and CLN should compile fine with GCC-3.0. (CLN-1.1.1 has > compilation problems on non-x86 platforms with GCC-3.0, but I am working > on these and plan to release 1.1.2 this week.)

Thanks for the heads-up :-) I'll see how I go with 1.1.2!

> What is the exact problem? What alternative compiler are you having in > mind?

The Intel C++ compiler, which rocks hard. It's turned into a hybrid of GCC and the KAI compiler, and it's just plain excellent, front-end and back!

> Note that CLN is ideally suited as a basis for computer algebra systems, > mainly for three reasons: > 1) Immediate types. An integer with absolute value smaller than 2^29 > is immediate, not heap-allocated. Saves one indirection.

Does NTL do this? I haven't looked. NTL feels a lot, lot faster than CLN, though this is just an awesomely subjective remark based on my experience with Not That Many programs I've written.

> 2) Honors the injection of integers into rationals automatically, many > aspects are more algebraic than in any other comparable library.

This has to be pretty easy to wrap around. I'm talking 0.4 undergrads here. ;-)

> 3) Reference-counted memory management. This works seamlessly with > GiNaC's memory management. Compare this with MuPAD, where the memory > management occasionally clashes with that from the underlying PARI > library or with Magma, which occasionally blows up out of the blue > sky because of interferences with some Kant remnants.

Hey, if memory management was ever a problem for me, I just threw the Boehm garbage collector at it, and hey presto, no more problem. ;-) I know, I know, I'm pretty damn lazy sometimes, but really, is anyone going to be using CLN in a life support machine? Hmmm.

> > What sort of obstacles would such a 'port' face? Just how many innocent > > undergrad students would need to be tortured for such a feat to occur? > > On the source-front only files numeric.h and numeric.cpp would have to be > touched. But quite a number of functions would need to be implemented in > GiNaC as opposed to delegating them to CLN. Hmm, I haven't gone through > them and compared them with NTL's capabilities. Maybe one half or three > undergrad students? Do you have some spare undergrads? ;-)

I wish I did, the only one I can think of is my Significant Other and she's so busy with her thesis that I can therefore only donate 0.05 undergrads to GiNaC right now. There's a subject at our university 'Symbolic Computation', every student has to do some sort of 'project' to pass it. I might try talking to the lecturer...

> There is of course also the packaging-front: NTL has no suitable > packaging, in my opinion it badly needs to be libtoolized!

Hey! HEY! Are we _completely_ forgetting about the Win32 community? ;-) (Visual C++ 7 seems to be coming along half- (third-?) decently with regards to Compliance; it's funny how many MSVC 5/6 programs now break in 7 though.)

> Oh, you shameless flatterer! Seriously, if making things work on another > compiler is all you are concerned with, I can assure you that CLN is not > so non-portable. Here is a demo that it can be done with moderate effort: > .

Thank you! I will always maintain that CLN is a bit perverted, but I only praise NTL because it Builds Anywhere.

> Be warned though: On non-GCC you must probably compile CLN with > "-DNO_ASM -DNO_PROVIDE_REQUIRE", and build a static library only.

Thank you, thank you, I will be quiet now and see where I get with that!

> If you then still find CLN sillinesses I'd > definitely like to hear about them and see if they can be fixed.

I can't remember the last time I ever built something without warnings disabled. (Please don't laugh/cry) So I can't promise too much, but if I have real problems, I may well come back jumping up and down!^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^Hsend you an email.

Duraid

From kreckel at thep.physik.uni-mainz.de Tue Jul 24 09:52:33 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Tue, 24 Jul 2001 09:52:33 +0200 (CEST) Subject: A move away from CLN? In-Reply-To: <200107240423.OAA01241@jander.fl.net.au> Message-ID:

On Tue, 24 Jul 2001, Duraid Madina wrote:

[...]

> > What is the exact problem? What alternative compiler are you having in > > mind? > > The Intel C++ compiler, which rocks hard. It's turned into a hybrid of GCC > and the KAI compiler, and it's just plain excellent, front-end and back!

The KAI compiler (now Intel) is definitely worth trying, as is SGI's compiler. Both are trying very hard to be standard conforming. Tell us if the static way works for you.

> > Note that CLN is ideally suited as a basis for computer algebra systems, > > mainly for three reasons: > > 1) Immediate types. An integer with absolute value smaller than 2^29 > > is immediate, not heap-allocated. Saves one indirection. > > Does NTL do this? I haven't looked. NTL feels a lot, lot faster than CLN, > though this is just an awesomely subjective remark based on my experience > with Not That Many programs I've written.

I really do challenge this statement. Dan Bernstein has made an effort and benchmarked some systems: . This measures multiplication only, though, and it is quite old. For CLN-1.1 I have adjusted the break-even points for different algorithms anew and it should be about twice as fast now. NTL may also have gained in the meanwhile, of course.

[...]

> Hey! HEY! Are we _completely_ forgetting about the Win32 community? ;-) > (Visual C++ 7 seems to be coming along half- (third-?) decently with regards > to Compliance; it's funny how many MSVC 5/6 programs now break in 7 though.)

What is Win32? What is MSVC???

Regards -richy. -- Richard Kreckel

From kreckel at thep.physik.uni-mainz.de Tue Jul 24 17:53:41 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Tue, 24 Jul 2001 17:53:41 +0200 (CEST) Subject: A move away from CLN? In-Reply-To: <200107240423.OAA01241@jander.fl.net.au> Message-ID:

On Tue, 24 Jul 2001, Duraid Madina wrote: > The Intel C++ compiler, which rocks hard. It's turned into a hybrid of GCC > and the KAI compiler, and it's just plain excellent, front-end and back!

Err, I just gave the German distributor a call and this is what they claimed: KAI is a frontend-only compiler; it uses GCC or another C-compiler as backend. (?)

Regards -richy. -- Richard B. Kreckel

From duraid at fl.net.au Tue Jul 24 23:20:43 2001 From: duraid at fl.net.au (Duraid Madina) Date: Wed, 25 Jul 2001 07:20:43 +1000 (EST) Subject: A move away from CLN? In-Reply-To: from "Richard B. Kreckel" at "Jul 24, 2001 5:53:41 pm" Message-ID: <200107242120.HAA03406@jander.fl.net.au>

> Err, I just gave the German distributor a call and this is what they > claimed: KAI is a frontend-only compiler; it uses GCC or another > C-compiler as backend. (?)

Are you sure they're not talking about another KAI product, "Visual KAP"? This is a preprocessor which automatically parallelizes source code (and also performs some source-level optimizations even for serial codes) before passing it along to another compiler. It's not really a 'compiler' in and of itself, though. You'd require another compiler as a backend. (That product is about to be discontinued, by the way.)

What seems to be happening (and is good news) is that the Intel compiler is slowly transforming from a "rebadged" EPC compiler (with an Intel-tweaked back-end) to a KAI/EPC hybrid with what appears to be a new back-end for IA64 (that, or they did a LOT of work on an existing one). The reason I spoke about a "GCC front-end" is that the Intel compiler is doing a pretty decent job as a chameleon - on Windows, it acts just like Microsoft's Visual C++ and on Linux, it makes a pretty decent stab at acting like GCC. It's not enough to let you build a kernel, but it's pretty good!

Duraid

From dhson at thep.physik.uni-mainz.de Sun Jul 29 21:19:02 2001 From: dhson at thep.physik.uni-mainz.de (Do Hoang Son) Date: Sun, 29 Jul 2001 21:19:02 +0200 (CEST) Subject: GiNaC/ginac clifford.cpp In-Reply-To: <200107271449.f6REng405566@doraemon.physik.uni-mainz.de> Message-ID:

Hi,

there is still a bug somewhere in simplify_indexed(). Look at this case:

    ex dt = dirac_slash(p1, D) + m1* dirac_ONE();
    ex db = dirac_slash(p2, D) + m2* dirac_ONE();
    cout << (dt*dirac_gamma5()*db*dirac_gamma5()).simplify_indexed()<< endl;

It returns:
    -m2*p1\-p1\*p2\-m2*ONE*m1-p2\*m1

The correct one should be:
    m2*p1\-p1\*p2\+m2*ONE*m1+p2\*m1

It seems that GiNaC does: dirac_gamma5()*dirac_ONE() = -dirac_ONE()*dirac_gamma5()

Cheers, Son

From dhson at thep.physik.uni-mainz.de Mon Jul 30 10:37:45 2001 From: dhson at thep.physik.uni-mainz.de (Do Hoang Son) Date: Mon, 30 Jul 2001 10:37:45 +0200 (CEST) Subject: GiNaC/ginac clifford.cpp In-Reply-To: Message-ID:

On Sun, 29 Jul 2001, Do Hoang Son wrote:
........
> It returns:
> -m2*p1\-p1\*p2\-m2*ONE*m1-p2\*m1
>
> The correct one should be:
> m2*p1\-p1\*p2\+m2*ONE*m1+p2\*m1
                          ^^^^^
Sorry, it should be

m2*p1\-p1\*p2\+m2*ONE*m1 - p2\*m1
                         ^^^^^
So*n.

From cbauer at student.physik.uni-mainz.de Mon Jul 30 15:33:53 2001 From: cbauer at student.physik.uni-mainz.de (Christian Bauer) Date: Mon, 30 Jul 2001 13:33:53 +0000 Subject: GiNaC/ginac clifford.cpp In-Reply-To: References: Message-ID: <20010730133353.A2003@iphcip1.physik.uni-mainz.de>

Hi!

On Sun, Jul 29, 2001 at 09:19:02PM +0200, Do Hoang Son wrote: > dirac_gamma5()*dirac_ONE() = -dirac_ONE()*dirac_gamma5()

No, but it rewrote (a+b)*gamma5 as -gamma5*(a+b). Anyway, this is fixed now.

Bye, Christian -- / Coding on PowerPC and proud of it \/ http://www.uni-mainz.de/~bauec002/
From dhson at thep.physik.uni-mainz.de Mon Jul 30 17:27:24 2001 From: dhson at thep.physik.uni-mainz.de (Do Hoang Son) Date: Mon, 30 Jul 2001 17:27:24 +0200 (CEST) Subject: GiNaC/ginac clifford.cpp In-Reply-To: <20010730133353.A2003@iphcip1.physik.uni-mainz.de> Message-ID:

On Mon, 30 Jul 2001, Christian Bauer wrote: > Hi! > > On Sun, Jul 29, 2001 at 09:19:02PM +0200, Do Hoang Son wrote: > > dirac_gamma5()*dirac_ONE() = -dirac_ONE()*dirac_gamma5() > > No, but it rewrote (a+b)*gamma5 as -gamma5*(a+b). Anyway, this is fixed now.

It seems OK now, at least the t -> bH decay can now be completely written in C++ with GiNaC and Xloops-GiNaC libs. Thanks, Christian, and cheer up!

Son

From kreckel at thep.physik.uni-mainz.de Mon Jul 30 19:15:30 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Mon, 30 Jul 2001 19:15:30 +0200 (CEST) Subject: A move away from CLN? In-Reply-To: <200107242120.HAA03406@jander.fl.net.au> Message-ID:

On Wed, 25 Jul 2001, Duraid Madina wrote: > > Err, I just gave the German distributor a call and this is what they > > claimed: KAI is a frontend-only compiler; it uses GCC or another > > C-compiler as backend. (?) > > Are you sure they're not talking about another KAI product, "Visual KAP"? > This is a preprocessor which automatically parallelizes source code (and also > performs some source-level optimizations even for serial codes) before > passing it along to another compiler. It's not really a 'compiler' in and of > itself, though. You'd require another compiler as a backend. (That product > is about to be discontinued, by the way.)

No, I wasn't talking about Visual KAP. This is from the KAI C++ Compiler (KCC) Version 4.0f release notes, which by the way list (the triple-cursed) gcc-2.96-81 as a requirement:

: The top level KAI C++ driver, KCC, is intended to be
: used as a compiler. Under the top level driver there
: are three distinct compilation phases. First, the front-
: end parses the source file, performs high-level optimiza-
: tions, and generates an intermediate file in standard C.
: Next, a C compiler reads in the intermediate C file and
: generates an object file. Last of all, a link process
: combines the object modules and libraries, and takes care
: of template instantiation and static object initialization.

Err, am I supposed to be impressed by this?

regards -richy. -- Richard Kreckel

From kreckel at thep.physik.uni-mainz.de Tue Jul 31 15:20:44 2001 From: kreckel at thep.physik.uni-mainz.de (Richard B. Kreckel) Date: Tue, 31 Jul 2001 15:20:44 +0200 (CEST) Subject: A move away from CLN? In-Reply-To: Message-ID:

Talking about KAI's C++ compiler, I just tested it. It definitely is a frontend to the native compiler. Where else do all the notes in the local text sections `00000000 t gcc2_compiled.' come from? Other than that they did a pretty good job in standard conformance. There are two little issues in CLN, namely in the lines

    cln/src/float/misc/cl_F_leastneg.cc:44
    cln/src/float/misc/cl_F_leastpos.cc:44

where you must manually change `&TheLfloat(erg)->data[0]' to `&erg->data[0]'. Dunno what's going wrong -- TheLfloat() is definitely defined at this point of translation. Then, static CLN indeed passed all the tests. This doesn't work for Debian, only for RedHat, because of this gcc-2.96 insanity.

For GiNaC, I was less lucky. I don't understand enough about this compiler's template-resolution. I keep getting errors of this form:

    archive.o(.text+0xbe9): undefined reference to `void std::vector::insert_aux(T2::pointer, const T1 &) [with T1=GiNaC::archive_node::property_info, T2=std::allocator]'

The occurrence of `::insert_aux' hints at some problem in their library, since this is not declared in GiNaC or the standard. Duraid, does this ring a bell to you? How do you link your stuff with KAI C++?

The very idea of having a C++ compiler produce C code for the system's native compiler appears deeply anachronistic to me. There are some things that you'll *never* get correct with this approach and they will make life difficult. Consider inline functions which are just expanded at the appropriate place prior to feeding them to the native compiler. If something goes wrong at this point the native compiler will later be faced with unresolved mangled names and you'll see error messages of unbelievable verbosity. Like this one:

    /data/scratch/KCC/installation/KCC_BASE/include/vector:149: `__T146099832' undeclared (first use in this function)

How do you debug this? How do you find the place where the error was triggered? I had to do binary searches over preprocessed source files. Ouuww, sucks...

Bottom line: Real C++-compilers like g++ have definite advantages over `transpilers' like KCC. At least on x86 I see no performance gain from KAI's compiler. This is the third commercial C++ `Wunderwaffe' that flunked over here, but your mileage may vary.

Bottom line of bottom line: Our efforts are better spent by helping to shake out the bugs of GCC-3.

Regards -richy. -- Richard Kreckel

From frink at thep.physik.uni-mainz.de Tue Jul 31 21:57:21 2001 From: frink at thep.physik.uni-mainz.de (Alexander Frink) Date: Tue, 31 Jul 2001 21:57:21 +0200 (CEST) Subject: GiNaC 0.9.2 around Message-ID:

Hi all,

I apologize for the laziness of my co-workers who produced no more than 3 entries in the news section for GiNaC 0.9.2. But ok, it is summer time and very hot in Germany, so we hope to be more productive again in the future...

Forced to write something before they let me go to the next beer-garden,
Alex

* Epsilon tensor is more functional.
* simplify_indexed() is better at detecting expressions that vanish for symmetry reasons.
* Several little bugfixes and consistency enhancements.

-- Alexander Frink E-Mail: Alexander.Frink at Uni-Mainz.DE Institut fuer Physik Phone: +49-6131-3923391 Johannes-Gutenberg-Universitaet D-55099 Mainz, Germany