- Newsgroups: comp.lang.c
- Path: sparky!uunet!spillman!tye
- From: tye@spillman.uucp (E. Tye McQueen)
- Subject: Re: 9th order polynomial
- Message-ID: <1992Dec29.103955.4864@spillman.uucp>
- Date: Tue, 29 Dec 1992 10:39:55 GMT
- References: <1992Dec10.154522.15893@lambda.msfc.nasa.gov> <1992Dec14.214016.149@ondec.lonestar.org>
- Organization: Spillman Data Systems
- Lines: 136
-
- d_north@ondec.lonestar.org (David G. North, CCP) writes:
- )In article <1992Dec10.154522.15893@lambda.msfc.nasa.gov>,
- ) edmondso@lambda.msfc.nasa.gov (Rich Edmondson) writes:
- )> I need to use a 9th order polynomial to calibrate values on a system I
- )> am designing. Does anyone know of an algorithm which is more
- )> efficient than recursive multiplication, given a list of polynomial
- )> coefficients?
- )Define recursive multiplication.
- )
- )How about an O(log2(n)) algorithm--->
- )
- )double
- )di_exponentiate(d_src,i_pow)
- ) double d_src;
- ) int i_pow;
- ){
- ) double temp,accum;
- ) int i;
- )
- ) for (temp=d_src, accum=1.0, i=i_pow; i ; i>>=1, temp*=temp)
- ) if (i&1) accum *= temp;
- ) return(accum);
- )}
- [...]
-
- Note that the order in which you perform the operations to
- compute a polynomial value can greatly affect the accuracy
- of your calculation (unless you are doing some non-floating-
- point arithmetic like integer arithmetic -- but then it can
- still affect whether or not you get overflow).
-
- It's been too long so I can't recall whether computing it as
-
- ((((a*x+b)*x+c)*x+d)*x+e)*x+f
-
- (Horner's rule) was a particularly good or particularly bad
- method.  It does
- have the advantage of reducing the number of multiplications
- required without requiring any extra space to hold intermediate
- values.
-
- Note that if a*x is almost equal to -b then this method
- probably yields very poor results (subtracting nearly equal
- numbers is one of the best ways to *greatly* reduce the
- accuracy of a numeric computation and the rest of the above
- computation is likely to cause such inaccuracies to dominate
- the result -- gee, maybe it hasn't been as long as it
- initially felt).
-
- For example:
- 3.14159287 - 3.14159286 = 0.00000001
- goes from about 9 (or 8) digits of accuracy right down
- to only about one (or zero) digit of accuracy in one
- simple step. Unless this result gets added in to a
- much larger result (adding it into a result even 10^5
- times bigger than it only gets you back to around 4 or
- 5 digits of accuracy), your results may not even remotely
- resemble the correct value. So multiplying the result
- by a possibly large x makes things worse.
-
- Perhaps:
-
-   int i;
-   double coef[]= { a, b, c, d, e, f };  /* coef[0] multiplies the highest power */
-   double tot= 0, pow= 1;
-   for( i= sizeof(coef)/sizeof(coef[0]) - 1; 0 <= i; i-- ) {
-       tot += coef[i]*pow;               /* low-order terms accumulate first */
-       pow *= x;
-   }
-
- (You can take ", f" out of the second line and change "0" to "f",
- and "1" to "x" in the third if you want to skip a step.)
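-
- Wrapped up as a compilable function (the name, and the assumption
- that coef[] runs high-order first, are mine):
-
```c
/* Low-order-first accumulation: walk coef[] from the constant
   term (the last element) upward, tracking x^k in "pow".  One
   extra multiply per term compared with the nested scheme. */
double poly_by_powers(const double coef[], int n, double x)
{
    double tot = 0.0, pow = 1.0;
    int i;
    for (i = n - 1; 0 <= i; i--) {
        tot += coef[i] * pow;
        pow *= x;
    }
    return tot;
}
```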
-
- Assuming x is not close to 1 and the coefficients are relatively
- close to 1, the result should be dominated by the high- or low-
- order terms, reducing the odds of nearly "cancelling" values
- during the addition, possibly increasing your accuracy. Even
- if x *is* close to 1, you don't end up multiplying the (inaccurate)
- result of a "cancelling" addition so the inaccuracies are less
- likely to dominate the final result.
-
- And, of course, results close to 0 are more likely to be
- inaccurate, but there is probably no way around that without
- avoiding the computation itself.  So, if you are looking for
- roots, you may be surprised how big your tolerance has to be and
- how little accuracy you get.  But then you really should
- differentiate the polynomial and use a modified Newton's method
- in that case (i.e. avoid the computation itself).
-
- Gee, I think I understand this better now than I did when I took
- those danged courses. Thinking something through for yourself
- can be so enlightening.
-
- I'd suggest you:
-
- Find a good library routine that computes polynomials accurately
- Find documentation for a good suite of numeric routines and read
- how their polynomial computations are made
- Find a good text on numeric(al) methods and read the chapter
- on polynomials (sorry, I don't have one handy)
- Ask your local college's numeric(al) methods expert (the Math
- department should know where she is)
- Take a nice course in numeric(al) methods (of course they'll
- just laugh at polynomials and boldly assume that everything
- is "almost" linear -- gawd I hate that)
- Go check in news.answers for a FAQ that mentions polynomials
- (it is fun and interesting reading anyway)
- Go ask this question in sci.math.___ (sorry, I don't have a full
- list of newsgroups handy and I don't recall what subgroups
- lie under sci.math)
- Relax and not worry about whether your results are even close
- to the proper values (unfortunately, this is probably the
- typical response)
-
- In any case, I think I've given you two answers to your original
- question (on efficiency). Sorry about giving you a harder question
- on accuracy. But check into it, you'll become a rare asset (or
- at least rare).
-
- If you have sparse coefficients and want to cut down on (floating-
- point) multiplies at all costs, you can use power-of-two tricks
- as the previous poster illustrated (though doing these tricks
- over for each power costs more than either of the above methods).
- But for order 9 that is probably not nearly worth it.
-
- Sorry, I can't redirect follow-ups to a more appropriate group
- as our newsfeed is greatly restricted (still working on that
- dial-up PPP line to the local university to replace the long-
- distance call to uunet) and our newsreader won't let me follow-
- up to newsgroups that we don't get.
-
- Please direct your follow-ups to an appropriate group.
-
- tye@spillman.com Tye McQueen, E.
- ----------------------------------------------------------
- Nothing is obvious unless you are overlooking something.
- ----------------------------------------------------------
-