I keep getting e = 1.0000000000 when I try to approximate e with the following C code in Dev-C++:

#include <stdio.h>
#include <math.h>

int main()
{
    int x;
    double y, e;

    printf("How much precision?\n");
    scanf("%d", &x);
    getchar();

    y = 1 + (1 / x);
    e = pow(y, x);

    printf("e = %f\n", e);
    getchar();
}

I am using the limit definition of e, which is (1 + 1/n)^n as n goes to infinity, not the series definition, which is the sum of 1/n!. What is wrong with this code? It makes perfect sense to me.
Try changing:

    y = 1 + (1 / x);

to:

    y = 1.0 + (1.0 / x);

and declaring x as a double-precision variable, then see if that makes a difference. You might have an implicit conversion problem. The following code worked for me:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x, y, e;

    printf("How much precision?\n");
    scanf("%lf", &x);
    getchar();

    y = 1 + (1 / x);
    e = pow(y, x);

    printf("e = %f\n", e);
    getchar();
}
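For example, a sample run (the input value here is just an illustration):

How much precision?
1000000
e = 2.718280

With x declared as a double, both 1 / x and pow(y, x) are evaluated in floating point, so the result converges toward e as the input grows.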
The problem is integer division. If I declare

    int x;

and then do something like:

    y = 1 + 1/x;

the expression 1/x is evaluated in integer arithmetic, so for any x greater than 1 it truncates to zero. Adding one gives the integer 1, which is only converted to a double-precision value when it is assigned to y. The code then computes pow(1.0, x), which is 1.0 no matter how large x is.
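Here is a minimal sketch showing the difference (the value x = 100 is an arbitrary example):

#include <stdio.h>

int main(void)
{
    int x = 100;                     /* arbitrary example value */

    /* Integer division: 1/x truncates to 0, so y_int is exactly 1.0 */
    double y_int = 1 + 1 / x;

    /* Making one operand a double forces floating-point division */
    double y_dbl = 1 + 1.0 / x;

    printf("1 + 1/x   = %f\n", y_int);   /* prints 1.000000 */
    printf("1 + 1.0/x = %f\n", y_dbl);   /* prints 1.010000 */
    return 0;
}

This also points to an alternative fix: x can stay an int as long as at least one operand of the division is a double.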