What is the motivation behind defining tensor product?
In my undergraduate math course we have tensor calculus, but I am not seeing the motivation for defining such a thing as the tensor product, and I am losing interest in the topic. Can anyone explain why the tensor product is defined the way it is, or provide a link?
I am using the definition of the tensor product as it is defined here.
calculus multivariable-calculus differential-geometry tensor-products tensors
Well, there is more than one possible definition. Maybe you can give us a hint about which one you are working with?
– Thomas
Jul 25 at 15:39
It represents an interesting functor....
– Lord Shark the Unknown
Jul 25 at 15:40
It is possible to think about tensors without using tensor products. However, tensor products allow us to encode "multilinear" tensors as linear maps.
– Omnomnomnom
Jul 25 at 15:40
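A quick numerical sketch of this comment (assuming NumPy; the matrix and vectors are made-up illustrative data): a bilinear map $B(v, w) = v^T M w$ becomes an ordinary *linear* map once you feed it the tensor (Kronecker) product $v \otimes w$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 4))   # matrix of a bilinear map B(v, w) = v^T M w
v = rng.standard_normal(3)
w = rng.standard_normal(4)

bilinear = v @ M @ w                     # B evaluated as a bilinear map on (v, w)
linear = M.reshape(-1) @ np.kron(v, w)   # B evaluated as a linear map on v ⊗ w

print(np.isclose(bilinear, linear))      # True
```

The flattened matrix `M.reshape(-1)` plays the role of a linear functional on the tensor-product space, which is exactly the "encode multilinear as linear" trade the comment describes.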
See here.
– Pedro Tamaroff♦
Jul 25 at 16:08
Thank you @PedroTamaroff
– Ken Ono
Jul 25 at 16:24
asked Jul 25 at 15:33 by Ken Ono (edited Jul 25 at 15:56)
2 Answers
Accepted answer (score 2), answered Jul 28 at 23:55 by Steffen Plunder
I would like to add a concrete example of relevant tensor-product spaces (from quantum theory, but simplified). Maybe it is a bit too complicated, but for me it shows the difference between Cartesian products and tensor products best!
A long introduction.
We want to work with continuous functions $f, g \in C^0(\mathbb R)$.
You might want to think of these functions as probability densities, saying something like "the probability of a quantum particle being at point $x$ is $f(x)$". (In this example $C^0$ is the 'vector space', and later we will see what $C^0 \otimes C^0$ looks like.)
Now, if $f$ and $g$ are densities for two different systems, say A and B, we might ask for the probability of system A being in state $x$ and system B being in state $y$ at the same time. This probability is given by $f(x) \cdot g(y)$.
Now how many different density distributions for (A, B) exist?
If A and B are independent, then we can simply use something like $C^0(\mathbb R) \times C^0(\mathbb R)$ to describe the densities as two separate functions $f, g$. This space would cover the two-dimensional densities which are the product $f(x) \cdot g(y)$ of two functions, for example a function like the one in the following picture.
But there are more interesting two-dimensional densities, like this one:
This function is not a product $f(x) \cdot g(y)$; instead it is something like $f_1(x) \cdot g_1(y) + f_2(x) \cdot g_2(y) \notin C^0(\mathbb R) \times C^0(\mathbb R)$.
Finally, tensor-product spaces!
This matches the definition of tensor-product spaces perfectly:
you take vectors from the individual spaces (here $f_i, g_j \in C^0(\mathbb R)$)
and combine them into a new 'abstract' vector $f_i \otimes g_i \in C^0(\mathbb R) \otimes C^0(\mathbb R)$.
In this new abstract tensor-product space you can also add two pure vectors and get more complicated vectors, for example the one in the second plot:
$$ f_1 \otimes g_1 + f_2 \otimes g_2 \in C^0(\mathbb R) \otimes C^0(\mathbb R) \approx C^0(\mathbb R^2). $$
Going further.
This example is kind of trivial and only captures a specific situation. But there are many similar, non-trivial cases where interesting spaces can be seen as tensor-product spaces. Applications are plentiful and can be found (for example) in differential geometry, numerical analysis, computer graphics, measure theory and functional analysis.
Of course, abstract objects like the tensor product are more complicated, and it requires some training to use them in practical situations. As often in math, there is a trade-off between learning a general theory and how to apply it to concrete examples, versus learning only the tools you really need and risking learning the same material twice in different settings without noticing it. Both approaches are understandable.
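The answer's distinction between pure products and sums of products can be checked numerically (a sketch, assuming NumPy; the Gaussian densities below are my own illustrative choice, not from the answer): on a discretized grid, a pure product density $f(x)\,g(y)$ becomes a rank-1 matrix, while a sum of two products generally does not.

```python
import numpy as np

# Sample "pure" densities f_i(x) and g_i(y) on a 1D grid.
x = np.linspace(-3, 3, 50)
y = np.linspace(-3, 3, 50)

f1 = np.exp(-(x - 1) ** 2)   # density concentrated near x = 1
g1 = np.exp(-(y - 1) ** 2)
f2 = np.exp(-(x + 1) ** 2)   # density concentrated near x = -1
g2 = np.exp(-(y + 1) ** 2)

# A pure (product) density f1(x) * g1(y): a rank-1 matrix on the grid.
pure = np.outer(f1, g1)

# An "entangled" density f1*g1 + f2*g2: a sum of two outer products.
mixed = np.outer(f1, g1) + np.outer(f2, g2)

print(np.linalg.matrix_rank(pure))   # 1: expressible as a single product
print(np.linalg.matrix_rank(mixed))  # 2: not a product of two 1D functions
```

The rank of the sampled matrix is exactly the minimal number of pure tensors $f_i \otimes g_i$ needed to write the density, which is why `mixed` cannot be split back into a single pair $(f, g)$.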
Answer (score 0), answered Jul 25 at 17:54 by Faraad Armwood (edited Jul 26 at 1:38)
I apologize for the omitted details, but I think this is a good starting picture for an undergrad (an explanation through geometry).
We generalize a $k$-linear map to a $k$-tensor because, typically, in a problem you must switch between coordinate systems, metrics, etc. Speaking from a geometer's perspective, linear maps hold geometric information, but since these maps are typically defined over vector spaces, you have to figure out a basis which makes your problem doable.
Once that is decided, it would be nice if your map also changed appropriately with respect to your transformation, i.e. you would like not to have to start the whole problem over again, picking the correct basis and so forth. The start-over process is long because the geometric information is sometimes contained in linear maps which are pairings of vectors and their duals. These change differently under a basis change, so starting over is double work.
Thus, tensor calculus emerges as a field which links notation with computation, allowing you to do much longer computations at a very fast rate. Explaining which tensors are interesting (i.e. which things you would like to "tensorize") is a different question, but I hope this helps.
I'll also append a funny joke that was told to me: "Differential geometry started off as the study of things which are invariant under rotation, but has turned into the study of things which are invariant under notation."
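A minimal numerical sketch of the point that vectors and their duals "change differently" (assuming NumPy; the random tensors are hypothetical data): under one and the same basis change $P$, a linear map (a $(1,1)$-tensor) transforms by conjugation, while a bilinear form (a $(0,2)$-tensor) transforms by congruence, yet the coordinate-invariant quantities survive either way.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((3, 3))   # a (1,1)-tensor: a linear map
B = rng.standard_normal((3, 3))   # a (0,2)-tensor: a bilinear form
v = rng.standard_normal(3)
w = rng.standard_normal(3)
P = rng.standard_normal((3, 3))   # change-of-basis matrix (new basis vectors as columns)
Pinv = np.linalg.inv(P)

# The two tensors transform by DIFFERENT rules under the same basis change:
A_new = Pinv @ A @ P      # linear map: conjugation P^{-1} A P
B_new = P.T @ B @ P       # bilinear form: congruence P^T B P

# Vector components transform with P^{-1}:
v_new = Pinv @ v
w_new = Pinv @ w

# Coordinate-invariant quantities are unchanged:
print(np.isclose(np.trace(A), np.trace(A_new)))       # True
print(np.isclose(v @ B @ w, v_new @ B_new @ w_new))   # True
```

This is exactly the bookkeeping the answer says tensor calculus automates: each index of a tensor carries its own transformation rule, so you never have to "start the problem over" after a basis change.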