EXP(x) approximation in old 1980's computer ROM
The presentation:
Old 1980s ROMs (Apple IIe, Commodore 64, ...) evaluate the exponential function EXP(x) with a Taylor-series-like polynomial:
EXP(x) = 1 + x + 1/2! x^2 + ... + 1/7! x^7
x is in [-1, 1], thanks to a preliminary range reduction.
A further preliminary step multiplies x by a correction factor 1/ln(2) (better for the algorithm, I suppose).
So the series must absorb this factor into its coefficients:
EXP(x) = 1 + ln(2) x + 1/2! ln(2)^2 x^2 + ... + 1/7! ln(2)^7 x^7
EXP(x) = a0 + a1 x + a2 x^2 + ... + a7 x^7
a0 = 1.000000000
a1 = 0.693147181 (= ln 2)
a2 = 0.240226507
a3 = 0.055504109
a4 = 0.009618129
a5 = 0.001333356
a6 = 0.000154035
a7 = 0.000015253
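As a sanity check (not part of the original thread), the a_k above are just ln(2)^k / k!, which a few lines of Python confirm:

```python
import math

# Taylor coefficients of 2^x = exp(x * ln 2): a_k = ln(2)^k / k!
# Printed to 9 decimals, they match the values listed above.
a = [math.log(2) ** k / math.factorial(k) for k in range(8)]
for k, ak in enumerate(a):
    print(f"a{k} = {ak:.9f}")
```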
To reduce the Taylor error, especially near -1 and 1, the old 1980s algorithm modifies the coefficients using a Chebyshev-polynomial approximation.
As the source comment says:
"TCHEBYSHEV MODIFIED TAYLOR SERIES COEFFICIENTS FOR E**X"
b0 = 1.000000000
b1 = 0.693147186
b2 = 0.240226385
b3 = 0.055505127
b4 = 0.009614017
b5 = 0.001342263
b6 = 0.000143523
b7 = 0.000021499
(see https://github.com/wsoltys/multicomp/blob/master/ROMS/6809/basic.asm line 3852)
Now, the problem:
Where do these Chebyshev values come from?
I did a lot of calculations trying to retrieve them.
I used the standard method: compute the Chebyshev coefficients for EXP(x) up to 8 terms, then replace each Ti(x) by its polynomial in x.
I also tried the economization method: compute the Taylor series of EXP(x) up to x^9, then remove a9*T9(x)/2^8 and a8*T8(x)/2^7 (normally equivalent to the first method).
No luck!
The Chebyshev coefficients I obtain, in accordance with the literature, are:
c0 = 0.999999801    b0 - c0 =  0.000000199
c1 = 0.693147113    b1 - c1 =  0.000000073
c2 = 0.240229556    b2 - c2 = -0.000003171
c3 = 0.055504542    b3 - c3 =  0.000000585
c4 = 0.009610825    b4 - c4 =  0.000003192
c5 = 0.001332605    b5 - c5 =  0.000009658
c6 = 0.000159622    b6 - c6 = -0.000016099
c7 = 0.000015734    b7 - c7 =  0.000005765
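The economization variant described above can be sketched as follows (using the standard power-basis expansions of T8 and T9). It produces coefficients close to, though not exactly matching, the c_k listed above, since the result depends on where and how the Taylor series is truncated:

```python
import math

# Taylor coefficients of 2^x up to x^9: a_k = ln(2)^k / k!
a = [math.log(2) ** k / math.factorial(k) for k in range(10)]

# Power-basis coefficients (constant term first) of the Chebyshev polynomials:
# T8(x) = 128x^8 - 256x^6 + 160x^4 - 32x^2 + 1
# T9(x) = 256x^9 - 576x^7 + 432x^5 - 120x^3 + 9x
T8 = [1, 0, -32, 0, 160, 0, -256, 0, 128]
T9 = [0, 9, 0, -120, 0, 432, 0, -576, 0, 256]

# Economization: subtract a9/2^8 * T9 (leading coefficient of T9 is 2^8)
# to kill the x^9 term, then the scaled T8 to kill the x^8 term; the
# removed terms spread a small correction over the lower coefficients.
c = [a[k] - a[9] / 2**8 * T9[k] for k in range(10)]
c = [c[k] - c[8] / 2**7 * T8[k] for k in range(9)]
c = c[:8]
for k, ck in enumerate(c):
    print(f"c{k} = {ck:.9f}")
```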
Do you have any idea?
Are the ROM coefficients really Chebyshev coefficients?
Is this another approximation method?
Am I wrong?
Thank you for your help.
BR,
(see https://retrocomputing.stackexchange.com/questions/7065/1980s-rom-which-expn-algorithm-used on the Retrocomputing forum)
approximation
I consider it impossible to reverse engineer how the coefficients were computed. Forcing the leading coefficient to 1 when computing $\exp(x)$ is common; compare my work using minimax approximation here. One candidate is Chebyshev economization, also referred to as telescoping. Use of the Remez algorithm is another likely candidate.
– njuffa, Jul 21 at 18:51
Those coefficients $b_n$ seem to be intended to approximate $2^x$ on the interval $[0, 1]$ (not on $[-1,1]$) and give the correct value at both ends. It doesn’t seem to minimize the maximum error; just good enough to get $9$ significant digits.
– WimC, Jul 21 at 18:52
A quick application of the Remez algorithm to $2^x-1$ on $[0,1]$, minimizing absolute error, yields this set of coefficients (quite close to what you see in the code): $0.693147200, 0.240226190, 0.055506134, 0.009611454, 0.001345703, 0.000141190, 0.000022130$
– njuffa, Jul 21 at 19:09
$2^x-1$ on $[0,1]$, minimizing relative error: $0.693147180, 0.240226488, 0.055504414, 0.009616270, 0.001338690, 0.000146290, 0.000020667$
– njuffa, Jul 21 at 19:14
Cross-posted: mathoverflow.net/questions/306550/…
– user43208, Jul 21 at 21:02
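WimC's observation is easy to verify numerically (a quick check, not part of the original thread): with the ROM's b_k, the degree-7 polynomial hits 2^0 = 1 and 2^1 = 2 exactly at the endpoints of [0, 1], and tracks 2^x closely in between:

```python
# ROM coefficients b0..b7 from the BASIC source listing
b = [1.000000000, 0.693147186, 0.240226385, 0.055505127,
     0.009614017, 0.001342263, 0.000143523, 0.000021499]

def p(x):
    # Horner evaluation of b0 + b1*x + ... + b7*x^7
    r = 0.0
    for coef in reversed(b):
        r = r * x + coef
    return r

# Maximum deviation from 2^x sampled over [0, 1]
max_err = max(abs(p(i / 1000) - 2 ** (i / 1000)) for i in range(1001))
print(p(0.0), p(1.0), max_err)
```

Notably, the decimal digits of the b_k sum to 2.000000000 exactly, which supports the idea that the fit was constrained (or at least rounded) to be exact at both ends of [0, 1].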
edited Jul 22 at 16:54
asked Jul 21 at 16:49
jpcohet