EXP(x) approximation in old 1980's computer ROM

The presentation:

Old 1980s ROMs (Apple IIe, Commodore 64, ...) use a Taylor-series-like polynomial to evaluate the exponential function EXP(x):



EXP(x) = 1 + x + 1/2! x^2 + ... + 1/7! x^7


x is in [-1, 1], thanks to a preliminary range reduction.
Another preliminary step is to multiply x by a correction factor 1/ln(2) (presumably better for the algorithm).
So the series must take this factor into account, which modifies the coefficients:



EXP(x) = 1 + ln(2) x + ln(2)^2 1/2! x^2 + ... + ln(2)^7 1/7! x^7
EXP(x) = a0 + a1 x + a2 x^2 + ... a7 x^7

a0 = 1.000000000
a1 = 0.693147181   (= ln(2))
a2 = 0.240226507
a3 = 0.055504109
a4 = 0.009618129
a5 = 0.001333356
a6 = 0.000154035
a7 = 0.000015253
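
These values are easy to check: a_k = ln(2)^k / k!. In Python (just a verification, not ROM code):

    import math

    # Taylor coefficients of the scaled series: a_k = ln(2)^k / k!
    for k in range(8):
        a_k = math.log(2) ** k / math.factorial(k)
        print(f"a{k} = {a_k:.9f}")

which reproduces the table above to the displayed precision.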


To reduce the Taylor error, especially near -1 and 1, the old 1980s algorithm modifies the coefficients using a Chebyshev polynomial approximation.
As the comment in the source says:



“TCHEBYSHEV MODIFIED TAYLOR SERIES COEFFICIENTS FOR E**X”

b0 = 1.000000000
b1 = 0.693147186
b2 = 0.240226385
b3 = 0.055505127
b4 = 0.009614017
b5 = 0.001342263
b6 = 0.000143523
b7 = 0.000021499

(see https://github.com/wsoltys/multicomp/blob/master/ROMS/6809/basic.asm line 3852)
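
To fix ideas, here is roughly how such a coefficient table gets used, in a Python sketch of my own (assuming the usual 2^y range reduction, with y = x/ln 2 split into an integer part n and a fractional part f; the actual 6809 routine may reduce the argument differently):

    import math

    # ROM coefficient table ("Tchebyshev modified Taylor series coefficients")
    B = [1.000000000, 0.693147186, 0.240226385, 0.055505127,
         0.009614017, 0.001342263, 0.000143523, 0.000021499]

    def rom_style_exp(x):
        # exp(x) = 2^(x/ln 2); the polynomial in B approximates 2^f for the
        # fractional part f, while the integer part n goes into the exponent.
        y = x / math.log(2)            # the 1/ln(2) "correction factor"
        n = math.floor(y)
        f = y - n                      # assumed reduced to [0, 1)
        p = 0.0
        for b in reversed(B):          # Horner: B[0] + B[1]*f + ... + B[7]*f^7
            p = p * f + b
        return math.ldexp(p, n)        # p * 2^n

    print(rom_style_exp(1.0), math.exp(1.0))   # both ≈ 2.71828...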


Now, the problem:

Where do these Chebyshev values come from?



I did a lot of calculations to retrieve these values.
I used the standard method: compute the Chebyshev coefficients for EXP(x) up to 8 terms, then replace each Ti(x) by its polynomial in x.
I also tried the economization method: compute the Taylor series of EXP(x) up to x^9, then remove a9*T9(x)/2^8 and a8*T8(x)/2^7 (normally equivalent to the first method).
No luck!
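
Here is a Python/NumPy sketch of the "normal method" (modern tooling, of course; it only illustrates the computation): compute the Chebyshev series of exp on [-1, 1] by Gauss–Chebyshev quadrature, truncate after T7, expand the Ti(x) into powers of x, then rescale the argument by ln 2 (the x^k coefficient picks up a factor ln(2)^k):

    import numpy as np

    N = 64                                   # quadrature nodes, far more than needed
    theta = np.pi * (np.arange(N) + 0.5) / N
    xs = np.cos(theta)                       # Chebyshev nodes on [-1, 1]
    fs = np.exp(xs)

    # Chebyshev-series coefficients of exp on [-1, 1]:
    #   d_k ≈ (2/N) * sum_j f(x_j) * cos(k*theta_j), with d_0 halved
    d = np.array([(2.0 / N) * np.sum(fs * np.cos(k * theta)) for k in range(8)])
    d[0] /= 2.0

    # Truncate after T7, expand the T_k back into powers of x, then rescale
    # the argument by ln 2.
    p = np.polynomial.chebyshev.cheb2poly(d)   # power basis, lowest degree first
    c = p * np.log(2.0) ** np.arange(8)

    for k, ck in enumerate(c):
        print(f"c{k} = {ck:.9f}")

Its output should match the c table below to the displayed precision.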



The Chebyshev coefficients I computed, in accordance with the literature, are (with the deltas against the ROM values):



c0 = 0.999999801    b0 - c0 =  0.000000199
c1 = 0.693147113    b1 - c1 =  0.000000073
c2 = 0.240229556    b2 - c2 = -0.000003171
c3 = 0.055504542    b3 - c3 =  0.000000585
c4 = 0.009610825    b4 - c4 =  0.000003192
c5 = 0.001332605    b5 - c5 =  0.000009658
c6 = 0.000159622    b6 - c6 = -0.000016099
c7 = 0.000015734    b7 - c7 =  0.000005765
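
To make the deltas concrete, here is a quick numerical check (a sketch, nothing more) of the maximum error of both coefficient sets against 2^y, on [-1, 1] and on [0, 1]:

    import numpy as np

    B = np.array([1.000000000, 0.693147186, 0.240226385, 0.055505127,
                  0.009614017, 0.001342263, 0.000143523, 0.000021499])
    C = np.array([0.999999801, 0.693147113, 0.240229556, 0.055504542,
                  0.009610825, 0.001332605, 0.000159622, 0.000015734])

    def max_err(coeffs, lo, hi):
        # max |poly(y) - 2^y| on [lo, hi]; coeffs are lowest degree first
        y = np.linspace(lo, hi, 10001)
        p = np.polyval(coeffs[::-1], y)      # np.polyval wants highest degree first
        return np.max(np.abs(p - 2.0 ** y))

    for lo, hi in [(-1.0, 1.0), (0.0, 1.0)]:
        print(f"[{lo:g}, {hi:g}]  ROM b: {max_err(B, lo, hi):.2e}   "
              f"Chebyshev c: {max_err(C, lo, hi):.2e}")

This makes it easy to see on which interval each set of coefficients actually behaves best.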


Do you have any idea?
Are the ROM coefficients really Chebyshev coefficients?
Is this another approximation method?
Am I wrong?



Thank you for your help.



BR,



See also https://retrocomputing.stackexchange.com/questions/7065/1980s-rom-which-expn-algorithm-used on the Retrocomputing Stack Exchange.







  • I consider it impossible to reverse engineer how coefficients were computed. Forcing leading coefficient to 1 for computing $\exp(x)$ is common, compare my work using minimax approximation here. One candidate is Chebyshev economization, also referred to as telescoping. Use of the Remez algorithm is another likely candidate.
    – njuffa
    Jul 21 at 18:51

  • Those coefficients $b_n$ seem to be intended to approximate $2^x$ on the interval $[0, 1]$ (not on $[-1,1]$) and give the correct value at both ends. It doesn’t seem to minimize the maximum error. Just good enough to get $9$ significant digits.
    – WimC
    Jul 21 at 18:52

  • A quick application of the Remez algorithm to $2^x-1$ on $[0,1]$, minimizing absolute error, yields this set of coefficients (quite close to what you see in the code): $0.693147200, 0.240226190, 0.055506134, 0.009611454, 0.001345703, 0.000141190, 0.000022130$
    – njuffa
    Jul 21 at 19:09

  • $2^x-1$ on $[0,1]$, minimizing relative error: $0.693147180, 0.240226488, 0.055504414, 0.009616270, 0.001338690, 0.000146290, 0.000020667$
    – njuffa
    Jul 21 at 19:14

  • Cross-posted: mathoverflow.net/questions/306550/…
    – user43208
    Jul 21 at 21:02