MATHEMATICAL

These mathematical procedures provide a wide variety of functions that are
useful for engineering applications. I began developing this library after
my first experience with SLIM (name changed to protect the guilty) back in
1976. So I guess I have them to thank. If you've ever used SLIM, perhaps
you've had a similar experience...

Let's say you want to solve what one would think to be a simple problem of
simultaneous linear equations, and due to some unfortunate turn of events,
you are given SLIM with which to do it. You are confronted with no fewer
than 60 subroutines from which to choose. As it turns out, none will do
what you want. Anyway, you narrow it down to the Smith-Jones-Simpson-
Wilkerson-Feinstein-Gidley method for partially symmetric matrices with
strange sparseness stored in convoluted-compacted form (not to be confused
with space-saving form) and the Kleidowitz-Yonkers-Lakeville-Masseykins
modification of the Orwell-Gauss-Newton-Schmeltz algorithm for
ill-conditioned Hilbert matrices stored in space-saving form (not to be
confused with convoluted-compacted form). An obvious choice indeed (any
fool with 6 Ph.D.s in math could see that the latter is the preferred
method). After numerous attempts to link your program with all 347 of the
necessary SLIM libraries, you are finally ready to run it. Of course, it
bombs out because you were reading Tuesday afternoon's manual and it's now
Wednesday morning. Tough luck...

Actually, SLIM is so bad that even I can't achieve hyperbole. Language
fails me. The above could easily be a random selection from one of the six
volumes that are supposed to be a "user's guide."

Anyway, I have a completely different philosophy of programming, and I hope
it works well for you: if there is a clearly preferred method of doing
anything, use it and discard the others.

An excellent example of this is Gauss quadrature. There are only two
methods of numerical integration that will (at least in theory) converge
given infinite order: the trapezoidal rule and Gauss quadrature. Gauss
quadrature is so far superior in accuracy to any other method (e.g.,
Newton-Cotes) that it's practically incredible. Therefore, why have 50
other methods from which to choose? An interesting note: contrary to some
rumors, 10 subdivisions of 20-point Gauss quadrature (a 200-point
evaluation) is not as accurate as 96-point Gauss quadrature, which uses
fewer than half as many function evaluations.
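
By way of illustration, here is a bare-bones 4-point Gauss-Legendre rule.
This is only a sketch, not one of the library's routines; the nodes and
weights are the standard tabulated values for the interval (-1,1).

C     Integrate F from A to B by 4-point Gauss-Legendre quadrature.
C     The nodes come in symmetric pairs +/-X(I) with weights W(I).
      REAL*8 FUNCTION GAUSS4(F,A,B)
      IMPLICIT INTEGER*2 (I-N),REAL*8 (A-H,O-Z)
      EXTERNAL F
      DIMENSION X(2),W(2)
      DATA X/0.3399810435848563D0,0.8611363115940526D0/
      DATA W/0.6521451548625461D0,0.3478548451374538D0/
C     Map (-1,1) onto (A,B): T = C + H*XI, DT = H*DXI
      C=0.5D0*(A+B)
      H=0.5D0*(B-A)
      S=0.D0
      DO 10 I=1,2
         S=S+W(I)*(F(C+H*X(I))+F(C-H*X(I)))
   10 CONTINUE
      GAUSS4=H*S
      RETURN
      END

Going to a higher order rule is just a matter of extending the node and
weight tables; nothing else changes.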

Another example is Gauss-Jordan elimination. SLIM has subroutines that
provide a "high accuracy solution" (if Andy Rooney understood this, he'd
probably ask, "Who would want a low accuracy solution anyway?"). And of
course, there are the iterative-improvement type solutions. What they
don't tell you is that the residual must be computed in greater precision
than that used to reduce the matrix (say, reduce in single and accumulate
the residual in double), which is OK; but why not just use double precision
all the way through? If you are already using double precision and that's
not good enough for you... well, you're just out of luck. If your matrix
is ill-conditioned (as is typically the case with Vandermondes - that's
what you get when you're curve-fitting), about the best thing you can use
is Gauss-Jordan elimination with full column pivoting. It is fast enough,
and I have used vector emulation throughout. I have tried all sorts of
things like QR decompositions, Givens rotations, and a whole bunch of
others; but it all boils down to the precision of the math coprocessor. I
only use one algorithm for solving full matrices. I have tested it against
SLIM's "high accuracy solution" on a series of standard Hilbert matrices
and found mine to be both faster and more accurate.
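
For the curious, a stripped-down Gauss-Jordan solver looks something like
the following. This is just a sketch: it searches each column for its
largest pivot and swaps rows, and it makes no attempt to reproduce the
vector emulation mentioned above.

C     Solve the N by N system A*X=B in place by Gauss-Jordan
C     elimination with column pivoting.  On exit A is destroyed and
C     B holds the solution.  NDIM is the declared row dimension of A.
      SUBROUTINE GJSOLV(A,B,N,NDIM)
      IMPLICIT INTEGER*2 (I-N),REAL*8 (A-H,O-Z)
      DIMENSION A(NDIM,NDIM),B(NDIM)
      DO 60 K=1,N
C        Find the largest magnitude pivot in column K, rows K..N
         L=K
         DO 10 I=K+1,N
            IF (DABS(A(I,K)).GT.DABS(A(L,K))) L=I
   10    CONTINUE
C        Swap rows K and L of A and B
         IF (L.NE.K) THEN
            DO 20 J=1,N
               T=A(K,J)
               A(K,J)=A(L,J)
               A(L,J)=T
   20       CONTINUE
            T=B(K)
            B(K)=B(L)
            B(L)=T
         END IF
C        Scale the pivot row so that A(K,K)=1
         P=A(K,K)
         DO 30 J=1,N
            A(K,J)=A(K,J)/P
   30    CONTINUE
         B(K)=B(K)/P
C        Eliminate column K from every other row
         DO 50 I=1,N
            IF (I.NE.K) THEN
               T=A(I,K)
               DO 40 J=1,N
                  A(I,J)=A(I,J)-T*A(K,J)
   40          CONTINUE
               B(I)=B(I)-T*B(K)
            END IF
   50    CONTINUE
   60 CONTINUE
      RETURN
      END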

One last example is the solution of ordinary differential equations. If
you read numerical mathematics texts, they always give examples of the
error in such-and-such a method as compared to the exact solution. If you
look closely at the microscopic print in the margin of the legend, under
the fly wing that got stuck on the copier, you will probably find the pithy
little statement "exact solution determined using 4th-order Runge-Kutta
with 0.0000000001 step size." Runge-Kutta is self-starting (not being
self-starting is a bigger pain in the neck than you might think), convenient
to use, and works very well. So why not use it, I ask? Of course there
are certain (unknown before the fact) cases where other methods are much
faster; but I would rather spend less time experimenting with zillions of
algorithms and let the machine sweat a little more. I've been through it
once, and once is enough. Runge-Kutta is the one for me. A word about
automatic stepsize control: it has been my experience that it takes so long
at each step that you are better off using a smaller step from the start.
I had two such subroutines that I spent a lot of time on, but I purged them
one day in a fit of frustration.
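
For reference, the classical 4th-order Runge-Kutta step is short enough to
show in full. This is the generic textbook formula, not the library's
subroutine: one step of size H for a single equation Y'=F(X,Y).

C     Advance Y by one Runge-Kutta step of size H for Y'=F(X,Y).
      REAL*8 FUNCTION RK4(F,X,Y,H)
      IMPLICIT INTEGER*2 (I-N),REAL*8 (A-H,O-Z)
      EXTERNAL F
C     Four slope evaluations per step
      Q1=F(X,Y)
      Q2=F(X+0.5D0*H,Y+0.5D0*H*Q1)
      Q3=F(X+0.5D0*H,Y+0.5D0*H*Q2)
      Q4=F(X+H,Y+H*Q3)
C     The weighted average gives a local error of order H**5
      RK4=Y+H*(Q1+2.D0*Q2+2.D0*Q3+Q4)/6.D0
      RETURN
      END

A system of equations is handled the same way, with Y and the four slopes
carried through as arrays.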

One final note: WATCH YOUR VARIABLE TYPES! ESPECIALLY DOUBLE PRECISION!
I've seen many a programmer bewildered by a program that crashes because
they passed "0." instead of "0.D0" to a subroutine expecting a double
precision value or forgot to put the "REAL*8" out in front of a function:

      REAL*8 FUNCTION DOFX(D)
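
For instance (SUB here is an imaginary subroutine, shown only to make the
pitfall concrete; its argument is REAL*8):

      PROGRAM DEMO
      IMPLICIT INTEGER*2 (I-N),REAL*8 (A-H,O-Z)
C     Wrong: "0." is a single precision constant, so SUB reads garbage
C     in the remaining bytes of its REAL*8 argument (or crashes).
      CALL SUB(0.)
C     Right: "0.D0" is a genuine double precision zero.
      CALL SUB(0.D0)
      END
      SUBROUTINE SUB(D)
      IMPLICIT INTEGER*2 (I-N),REAL*8 (A-H,O-Z)
      WRITE(*,*) D
      RETURN
      END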

It is really better to use IMPLICIT statements like

      IMPLICIT INTEGER*2 (I-N),REAL*8 (A-H,O-Z)

than to type each variable individually. You're bound to miss one.

Another biggy is "O" instead of "0" or vice versa.
.ad LIBRY4A.DOC
.ad LIBRY4B.DOC
.ad LIBRY4C.DOC