Week Eight – Tensor Properties II: Decompositions & Orthogonal Tensors

This week we continue our study of tensor properties with additive and spectral decompositions of tensors. We shall also look at orthogonal tensors. The slides are presented in the video below.

Here are the actual slides we will use in class. They begin with a recap of some of last week’s outstanding issues that you ought to have gone through on your own: products of determinants, the trace of compositions, and the scalar product of tensors. You must be current on these in order to understand this week’s menu.
The five topics covered here are:
1. The tensor set as a Euclidean vector space
2. Additive decompositions
3. The cofactor tensor and its geometric interpretation
4. Orthogonal tensors
5. The axial vector
These are vocalized in the Vimeo video above, and the downloadable slides are here.

Download (PDF, 1.38MB)

17 comments on “Week Eight – Tensor Properties II: Decompositions & Orthogonal Tensors”

  1. Adebara Ayomide, 160404011 says:

    Wooow, thanks so much for the slide videos; they make learning easier. May God continue to bless you. ❤️❤️❤️

  2. Ali Abayomi 170404521 says:

    Is there a relationship between the orthogonal tensor and the identity tensor, since the identity transforms a vector to the same vector?

    • oafak says:

      Yes, it is orthogonal – in fact a proper orthogonal tensor – a rotation through angle  0, 2\pi, 4\pi, \ldots about every axis. Furthermore, its three eigenvalues are  \lambda_1 =\lambda_2 =\lambda_3=1. Every vector is an eigenvector of the identity tensor, as  \bold{Iv=v}, ~\forall \bold{v} \in \mathbb{E}, and its eigen-projectors are made of dyads from itself:  \bold{I=e}_1\otimes\bold{e}_1+\bold{e}_2\otimes\bold{e}_2+\bold{e}_3\otimes\bold{e}_3 .
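A quick numerical check of this reply (a NumPy sketch added here for illustration; it is not part of the slides): the identity is the sum of self-dyads of any orthonormal basis, it fixes every vector, and all three of its eigenvalues equal 1.

```python
import numpy as np

# Standard orthonormal basis: rows e[0], e[1], e[2] are e_1, e_2, e_3.
e = np.eye(3)

# Identity assembled from its own dyads: I = e_1(x)e_1 + e_2(x)e_2 + e_3(x)e_3
I = sum(np.outer(e[i], e[i]) for i in range(3))

v = np.array([1.0, -2.0, 3.0])        # an arbitrary vector
print(np.allclose(I @ v, v))          # True: Iv = v for every v
print(np.linalg.eigvals(I))           # the eigenvalues are all 1
```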

  3. Since  \mathrm{tr~sym~}\bold{S}=\frac{1}{2}\left( \mathrm{tr~}\bold{S}+\textrm{tr~}\bold{S} \right),
    can we as well say that  \mathrm{tr~sym~}\bold{S}=\mathrm{tr~}\bold{S},
    as in  \frac{1}{2} \times 2 \mathrm{~tr~}\bold{S}?
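The equality asked about here is correct, because \mathrm{tr~}\bold{S}^{\mathsf{T}}=\mathrm{tr~}\bold{S}. A minimal NumPy check (an addition for illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))      # an arbitrary second-order tensor

sym_S = 0.5 * (S + S.T)              # symmetric part of S

# tr(sym S) = (1/2)(tr S + tr S^T) = (1/2)(2 tr S) = tr S
print(np.isclose(np.trace(sym_S), np.trace(S)))  # True
```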

  4. Adingupu Stephen 170407506 says:

    Sir, on the last part about the axial vector,  -2\delta_{k\alpha}v_k = e_{\alpha ij}\Omega_{ij},
    where \delta is the Kronecker delta,
    e is the Levi-Civita symbol,
    and \Omega is omega.
    You arrived at  v_i = -\frac{1}{2}e_{ijk}\Omega_{jk}.
    I understand that the Kronecker delta substituted \alpha for k since it is a dummy index, and I also understand how the opposite side came to be, but how did \Omega come to have the indices jk?
    Sorry for the poor typing, sir.

    • oafak says:

      S1.31: It was established that e_{rjk}e_{ijk}=2\delta_{ri}.
      S2.33: We established the fact that \bold{e}_i\times\bold{e}_j=e_{ijk}\bold{e}_k.
      S4.3: We transferred the Levi-Civita symbol to the other side in the above equation and showed that \bold{e}_i=\frac{1}{2}e_{ijk}\bold{e}_j\times\bold{e}_k.
      This is essentially the same argument as the one here.
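The index gymnastics in the axial-vector relation v_i=-\frac{1}{2}e_{ijk}\Omega_{jk} can also be verified numerically. Here is a NumPy sketch (an illustration added here, not from the slides), building a skew tensor from a chosen axial vector and recovering that vector by the contraction:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array of scalar components.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

w = np.array([1.0, 2.0, 3.0])            # a chosen axial vector
W = np.einsum('ijk,j->ik', eps, w)       # skew tensor with W v = w x v

v = np.array([0.5, -1.0, 2.0])
print(np.allclose(W @ v, np.cross(w, v)))                     # True
# The contraction -1/2 e_ijk W_jk recovers the axial vector:
print(np.allclose(-0.5 * np.einsum('ijk,jk->i', eps, W), w))  # True
```

The indices j and k in \Omega_{jk} are dummies: they are summed over, so they may be renamed freely as long as the Levi-Civita symbol carries the same pair.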

  5. Damilare Agosu says:

        \[e_{ijk} \cdot e_{rsk}\]

    The solution for this is shown in Week 3, p. 29, using the Levi-Civita symbol.
    I have checked that for any 2×2 matrix taken from the original 3×3 matrix, its determinant (of the 2×2) is the final answer. I tested it using its application in Week 8, p. 27, and I also tested that it works if there are two shared indices, since you would use your Kronecker substitution.

    So does this rule apply generally??

    • oafak says:

      The rules that apply generally are in the summation convention. The matrix rules you are developing are not correct. The actual array representation of e_{ijk} is 3\times 3\times 3, which gives 27 components, not 3\times 3! The 2D matrix representation you are trying to imagine does not apply to tensors of order three or higher! The equation you have above is also not correct, in that you cannot take the dot product e_{ijk}\cdot e_{rsk}: the two operands you are working with are scalar components of the Levi-Civita tensor. They are NOT vectors!
      This is a course beginning with vector and tensor analysis. Spend your effort wisely in understanding the principles you are being taught before you begin to look for patterns to match. You are yet to understand something as simple as the volume of a parallelepiped that was taught in the first class! This, and other basic ideas, should be mastered first!
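For completeness, the contraction being asked about obeys the standard epsilon-delta identity e_{ijk}e_{rsk}=\delta_{ir}\delta_{js}-\delta_{is}\delta_{jr}, written entirely in the summation convention. A short NumPy sketch (added here as an illustration, not from the slides) confirms it component by component:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array of scalar components.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0
d = np.eye(3)  # Kronecker delta

# e_ijk e_rsk, summed over the shared index k, for all i, j, r, s:
lhs = np.einsum('ijk,rsk->ijrs', eps, eps)
# delta_ir delta_js - delta_is delta_jr:
rhs = np.einsum('ir,js->ijrs', d, d) - np.einsum('is,jr->ijrs', d, d)
print(np.allclose(lhs, rhs))  # True: all 81 components agree
```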

  6. Areo Ajibola says:

    Please sir how do I go about solving this:
    ( \mathbf{u_1} \otimes \mathbf{v_1}) : ( \mathbf{u_2} \otimes \mathbf{v_2} )

    Areo Ajibola
    Systems Engineering

    • oafak says:

      The product of two or more dyads is treated in S7.17. The scalar product of any two tensors \bold{S} and \bold{T} can be found from:

      (1)    \begin{align*} \bold{S:T}=\textrm{tr}\left(\bold{ST}^{\mathsf{T}}\right)=\textrm{tr}\left(\bold{S^{\mathsf{T}}T}\right). \end{align*}

      Accordingly,

      (2)    \begin{align*} (\bold{u}_1\otimes\bold{v}_1):(\bold{u}_2\otimes\bold{v}_2)&=\textrm{tr}\left( (\bold{u}_1\otimes\bold{v}_1)(\bold{u}_2\otimes\bold{v}_2)^\mathsf{T}\right)\\ &=\textrm{tr}\left((\bold{u}_1\otimes\bold{v}_1)(\bold{v}_2\otimes\bold{u}_2)\right)\\ &=\textrm{tr}(\bold{u}_1\otimes\bold{u}_2)(\bold{v}_1\cdot\bold{v}_2)\\ &=(\bold{u}_1\cdot\bold{u}_2)(\bold{v}_1\cdot\bold{v}_2) \end{align*}
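The final line of the derivation, (\bold{u}_1\otimes\bold{v}_1):(\bold{u}_2\otimes\bold{v}_2)=(\bold{u}_1\cdot\bold{u}_2)(\bold{v}_1\cdot\bold{v}_2), is easy to check numerically. A minimal NumPy sketch with arbitrarily chosen vectors (an illustration added here, not from the slides):

```python
import numpy as np

u1, v1 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
u2, v2 = np.array([2.0, 1.0, 0.0]), np.array([1.0, 3.0, -1.0])

S = np.outer(u1, v1)   # the dyad u1 (x) v1
T = np.outer(u2, v2)   # the dyad u2 (x) v2

lhs = np.trace(S @ T.T)          # S : T = tr(S T^T)
rhs = (u1 @ u2) * (v1 @ v2)      # (u1 . u2)(v1 . v2)
print(np.isclose(lhs, rhs))      # True: both equal 4.0 here
```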

  7. Iroanya Japheth 160407004 says:

    Good day sir.
    In page 19 of Chapter 3, there was an expression:
    c
    : ( ⊗ ) =

    I don’t understand how the inner product, which is meant to be the trace of the transpose of (c multiplied by the dyad), equaled the constant.

    • oafak says:

      Your question is not clear. You did not write a meaningful expression looking like anything on page 19, Chapter 3. I will be able to help you once you are clearer and specific.
      It is quite possible you are thinking of page 19 of Chapter 2. In that case I have the following to say:
      Starting from the component-wise expression for the cofactor of tensor \bold{T}, we have \bold{T}^{\textsf{c}}={T^{\textsf{c}}}_{\alpha\beta}\bold{e}_{\alpha}\otimes\bold{e}_{\beta}. Now take the inner product of both sides with \bold{e}_{i}\otimes\bold{e}_{j}; you immediately have \bold{T}^{\textsf{c}} \colon \bold{e}_{i}\otimes\bold{e}_{j} =({T^{\textsf{c}}}_{\alpha \beta}\bold{e}_{\alpha}\otimes\bold{e}_{\beta}) \colon (\bold{e}_{i}\otimes\bold{e}_{j}), which is the same as \textrm{tr}\left(({T^{\textsf{c}}}_{\alpha\beta}\bold{e}_{\alpha}\otimes\bold{e}_{\beta})(\bold{e}_{j}\otimes\bold{e}_{i})\right). It can be further simplified to \textrm{tr}\left({T^{\textsf{c}}}_{\alpha\beta}\bold{e}_{\alpha}\otimes\bold{e}_{i}\right)\delta_{\beta j} and, finally, {T^{\textsf{c}}}_{\alpha\beta}\delta_{\alpha i}\delta_{\beta j}={T^{\textsf{c}}}_{ij}.
      There is more information about the rest of this issue in my response to Abdulazeez Opeyemi Lawal in Week Nine. Also check out S8.24-27 for more on this.
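The component-extraction argument above can be checked numerically. In this NumPy sketch (an illustration added here, not from the slides) the cofactor is computed as \bold{T}^{\textsf{c}}=\det(\bold{T})\,\bold{T}^{-\mathsf{T}}, which is an assumption valid only for invertible \bold{T}:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))               # almost surely invertible
Tc = np.linalg.det(T) * np.linalg.inv(T).T    # cofactor tensor (T invertible)

e = np.eye(3)
i, j = 1, 2
dyad = np.outer(e[i], e[j])                   # the dyad e_i (x) e_j

# The inner product T^c : (e_i (x) e_j) = tr(T^c (e_j (x) e_i))
# picks out the single component T^c_ij:
print(np.isclose(np.trace(Tc @ dyad.T), Tc[i, j]))  # True
```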

  8. Ororho Maxwell Omorhode 160404070 says:

    Sir, the trace of a deviatoric and a skew tensor is zero. Does this mean all its diagonal elements are zero, or can they be a combination like (0, -1, 1)?

    • oafak says:

      It does not mean that every diagonal element is zero. The trace is the sum of the diagonal elements. For a skew tensor, each diagonal element is indeed zero; for a deviatoric tensor, only the sum need be zero, so a combination like (0, -1, 1) is possible. In both cases, the trace is zero.
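The distinction in this reply is easy to see numerically. A minimal NumPy sketch (an illustration added here, not from the slides), using an arbitrary tensor \bold{S}:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))

skew = 0.5 * (S - S.T)                     # skew part: diagonal is identically zero
dev = S - (np.trace(S) / 3.0) * np.eye(3)  # deviatoric part: only the trace vanishes

print(np.allclose(np.diag(skew), 0.0))     # True: every diagonal element is zero
print(np.isclose(np.trace(dev), 0.0))      # True: the sum of the diagonal is zero
print(np.diag(dev))                        # the individual diagonal elements need not be zero
```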
