Proving a general statement about sequences and continuous functions
It is fairly common in a first course on Analysis to prove that if $\{a_i\}$, $\{b_i\}$ are infinite sequences $\mathbb{N}\to\mathbb{R}$ that converge to the points $a, b \in \mathbb{R}$ respectively, then the sequence $\{a_i+b_i\}$ converges to the point $a+b \in \mathbb{R}$. The same idea works for subtraction, multiplication, division by non-zero elements, etc.
I believe that there might be a theorem that can work as a tool to prove all the previous statements in a general manner. If I'm correct, it would go something like this:
Let $(X, d)$ be a metric space,
let $f_1, \ldots, f_n$ be convergent sequences $\mathbb{N}\to X$ such that $f_i(j)\in Y \subseteq X$ for all $i \le n$ and all $j \ge N$ for some $N \in \mathbb{N}$, with $\lim\limits_{x \to \infty} f_i(x)=F_i \in Y$,
and let $g:X^n\to X^m$ be a function continuous on $Y^n$.
Then $$\lim\limits_{x_1 \to \infty,\, \ldots,\, x_n \to \infty} g(f_1(x_1), \ldots, f_n(x_n)) = g(F_1, \ldots, F_n).$$
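For instance, if this is right, the classical sum rule should just be the special case $X=Y=\mathbb{R}$ with the usual metric, $n=2$, $m=1$, $f_1=\{a_i\}$, $f_2=\{b_i\}$, and $g(x,y)=x+y$, which is continuous on all of $\mathbb{R}^2$:
$$\lim\limits_{x_1 \to \infty,\, x_2 \to \infty} \big(a_{x_1}+b_{x_2}\big) = g(a,b) = a+b,$$
and taking $x_1=x_2=i$ recovers $\lim\limits_{i\to\infty}(a_i+b_i)=a+b$. Products would correspond to $g(x,y)=xy$, while quotients would need $Y$ chosen so that the relevant coordinates stay away from zero.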
However, since I'm fairly new to Analysis, I do not know how to prove such a general claim, and I was wondering if anyone knew of a complete, formal proof of it.
Any help/thoughts would be really appreciated.
real-analysis complex-analysis limits metric-spaces proof-writing
edited Aug 2 at 18:32
asked Aug 2 at 17:15


Leo
Note my edits to the question for proper MathJax usage. In particular, in expressions like $\{a_i\}, \{b_i\}$ the $\text{curly braces}$ should be within MathJax. Also, if you use actual LaTeX, as opposed to MathJax, you will find that a,...,z and a,\ldots,z look different from each other, in that the latter is rendered as $a,\ldots,z$ and the former as $a,\text{...},z.$ $$\begin{align} & a,\ldots,z \\ & a,\text{...},z \end{align}$$
– Michael Hardy
Aug 2 at 17:52
@Arnaud Mortier. Would it be clearer if I changed "convergent functions" to "convergent sequences"?
– Leo
Aug 2 at 18:01
@Leo Yes, and also remove the "$(x)$" since $f_i$ is a function here while $f_i(x)$ is an evaluation of that function at input $x$. Note also that $x$ is perhaps not a very well chosen letter anyway for the argument of $f_i$ since $X$ is the codomain, not the domain, and since $x$ usually denotes a real number as opposed to $n,m,p,q,$ etc. which are all fine to denote a natural number.
– Arnaud Mortier
Aug 2 at 18:24
1 Answer
I will use the following two definitions.
If $(X,d)$ is a metric space, then $(X^n,d^n)$ is the metric space whose points are the Cartesian product $X\times X\times \cdots \times X$, with $d^n((x_1,\dots,x_n),(y_1,\dots,y_n))=\max_i d(x_i,y_i)$.
Let $h:\mathbb{N}^m\to X$. We say $\lim\limits_{x_1\to\infty,\dots,x_m\to\infty}h(x_1,\dots,x_m)=L$ if for all $\epsilon>0$ there exists an $N\in \mathbb{N}$ such that $\min_i x_i\ge N$ implies $d(h(x_1,\dots,x_m),L)<\epsilon$.
$g$ being continuous at $(F_1,\dots,F_n)$ (with respect to $d^n$ on the domain and $d^m$ on the codomain) means that for all $\epsilon>0$ there is a $\delta>0$ such that, for $(y_1,\dots,y_n)\in Y^n$,
$$\max_i d(y_i,F_i)<\delta \implies \max_j d\big(g(y_1,\dots,y_n)_j,\,g(F_1,\dots,F_n)_j\big)<\epsilon.$$
So, given $\epsilon>0$, choose such a $\delta$, and then for each $1\le i \le n$ choose an index $N_i$, at least as large as the $N$ in the hypothesis (so that $f_i(x)\in Y$), such that
$$x\ge N_i \implies d(f_i(x),F_i)<\delta.$$
Letting $N'=\max_i N_i$ and combining the last two paragraphs (with $y_i=f_i(x_i)$) shows
$$\min_i x_i\ge N'\implies \max_j d\big(g(f_1(x_1),\dots,f_n(x_n))_j,\,g(F_1,\dots,F_n)_j\big)<\epsilon.$$
This is precisely the definition, in the space $(X^m,d^m)$, of $\lim\limits_{x_1,\dots,x_n\to\infty} g(f_1(x_1),\dots,f_n(x_n))=g(F_1,\dots,F_n)$.
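For a concrete instance of this argument, one can specialize it to the sum rule from the question (a sketch): take $X=Y=\mathbb{R}$ with $d(x,y)=|x-y|$, $n=2$, $m=1$, and $g(x,y)=x+y$. Given $\epsilon>0$, the choice $\delta=\epsilon/2$ works, since
$$\max\{|y_1-F_1|,\,|y_2-F_2|\}<\tfrac{\epsilon}{2} \implies |(y_1+y_2)-(F_1+F_2)|\le |y_1-F_1|+|y_2-F_2|<\epsilon,$$
and the argument above then becomes the familiar $\epsilon$-$\delta$ proof that $\lim\limits_{i\to\infty}(a_i+b_i)=a+b$.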
answered Aug 2 at 18:52


Mike Earnest