Week Ten: Tensor Analysis

We proceed to Differential and Integral Calculus with tensor-valued functions and tensor arguments. In the first set of slides, we extend our knowledge of scalar differentiation to larger objects. The examples we begin with are simple and easy to understand. The lecture concludes with the Gateaux extension of our elementary knowledge of differentials.
[gview file="http://oafak.com/wp-content/uploads/2019/10/Week-Ten-4.pdf"]

In one more week, we shall cover the book up to the point of Integral Theorems.
[gview file="http://oafak.com/wp-content/uploads/2019/10/Chapter-3.-Tensor-Calculus-1-3.pdf"]

67 comments on “Week Ten: Tensor Analysis”

  1. Okwesilieze Uwadoka says:

    Good morning sir,
    Is there a direct relationship between (ST) and (TS) as regards the product of S and T when both tensors are switched?
    Because on the 12th page in the week 10 slide, I didn’t understand what happened under the word “Consequently”.

    • Hussein Yekini-ajayi says:

      Sorry to interrupt, but basically i feel ts and st is related by adding the transpose to either t or s also not to forget a skew tensor equals the negative of its transpose

    • ADEDOKUN ISRAEL 170404527 says:

Good morning. It has been clearly stated in chapter 2 that ST can be changed to T(S transpose), so that was basically what happened there.

    • oafak says:

If the two tensors are symmetric, then the reversed products give the same result. If they are skew, the reversed products are negatives. There are more relationships for special tensors. There are no generally applicable rules.

  2. Adebayo John 160404010 says:

From the constancy of the identity tensor:
If I transpose AB,
would I get A transpose B transpose,
B transpose A transpose,
or BA transpose?

  3. Majekodunmi fidipe 170404529 says:

From the constancy of the identity: orthogonal tensors section, page 12.
If, for example, I want to take the transpose of AB, will I get A transpose B transpose, or B transpose A transpose, or B then A transpose?

  4. Oyefeso Olaitan says:

    @ Uwadoka

Recall: the transpose of AB, i.e. (AB)^T, equals BA^T, i.e. B multiplied by the transpose of A.

I hope this explains your question.
Let's expect the lecturer to shed more light on it.

  5. Hussein Yekini-ajayi says:

    St is the same as t transpose s since t is switching sides and vice versa

    • oafak says:

Too careless with symbols and notations. Hussein, pay a little attention to the way things are typed in the slides and notes. You will need to look at these patiently. We use bold capitals for tensors, bold small letters for vectors and italic small letters for scalars. Virtually ALL you write completely ignores these simple conventions and makes even the smallest thing you write illegible.

  6. Chiedozie Chika-Umeh says:

    Good morning sir,
    I think I understand how -QdQ^T = dQQ^T. That is how the positioning of the Tensors changed. Let Q be T and detQ^T = S. Let T and S act on a vector v. That is TSv.
    w = Sv
    Tw = v
    w = S(Tw)
    What I am trying to say is
    QdQ^T = dQ^TQ^-T = (dQQ^-1)^T
    =(dQQ^T)^T, as Q is orthogonal.
    This is my thought.

    Chiedozie Chika-Umeh
    160404046

    • Chiedozie Chika-Umeh says:

      Good evening sir,
Please, this comment wasn't replied to. I want to know if I am on the right path.

    • oafak says:

      You will have to reformat your work and make it legible for me to read. For example, I do not know what you mean by

      (1)   \begin{align*} \bold{-Q} d \bold{Q}^\textsf{T} = d\bold{QQ}^\textsf{T}\end{align*}

      If you want to differentiate the product \bold{QQ}^\textsf{T}=\bold{I}, then you do it this way:

      (2)   \begin{align*}\frac{d}{dt}\left( \bold{QQ}^\textsf T \right) &=\frac{d\bold Q}{dt}\bold{Q}^\textsf T +\bold{Q}\frac{d\bold Q^\textsf T}{dt}\\ &=\frac{d\bold I}{d t}=\bold O. \end{align*}

      Rearranging, we have,

      (3)   \begin{align*} \frac{d\bold Q}{dt}\bold{Q}^\textsf T =-\bold{Q}\frac{d\bold Q^\textsf T}{dt} \end{align*}
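A quick numerical sketch in Python with numpy (an added illustration, not from the slides; the particular Q(t) below is just an assumed rotation) shows the same conclusion: a finite-difference estimate of dQ/dt composed with Q^T comes out skew.

import numpy as np

def Q(t):
    # a rotation about the third axis: orthogonal for every t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

t, h = 0.7, 1e-6
dQdt = (Q(t + h) - Q(t - h)) / (2 * h)   # central-difference estimate of dQ/dt
W = dQdt @ Q(t).T                        # the product (dQ/dt) Q^T

print(np.allclose(W, -W.T, atol=1e-6))   # True: W is numerically skew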


      • Chiedozie Chika-Umeh says:

proof:~\left(\frac{d\bold{Q}}{dt} \bold{Q}^{-1} \right)^\textrm{T}=\frac{d\bold{Q}}{dt} \bold{Q}^{-\textrm{T}}
that is  \bold{Q}^{\textrm{T}} \frac{d\bold{Q}}{dt}=\left(\frac{d\bold{Q}}{dt}\right)^{\textrm{T}} \bold{Q}^{-\textrm{T}}=\left( \frac{d\bold{Q}}{dt} \bold{Q}^{-1} \right)^{\textrm{T}}=\left(\frac{d\bold{Q}}{dt} \bold{Q}^{\textrm{T}} \right)^{\textrm{T}} as \bold{Q} is orthogonal.

        • oafak says:

Your typing is quite confusing. If you really want assistance, you may have to scan your work and send it to me, because I really do not understand what you are getting at. If my rendering of it in LaTeX is correct, I will say this: your initial premise is in error. The transpose of the product,

          (1)    \begin{align*} \left(\frac{d\bold{Q}}{dt} \bold{Q}^{-1} \right)^\textsf{T}\neq\frac{d\bold{Q}}{dt} \bold{Q}^{-\textsf{T}} \end{align*}

          The correct thing is,

          (2)   \begin{align*}~\left(\frac{d\bold{Q}}{dt} \bold{Q}^{-1} \right)^\textsf{T}=\bold{Q}^{-\textsf{T}} \left(\frac{d\bold{Q}}{dt}\right)^{\textsf{T}}\end{align*}

          The issue you appear to be trying to resolve is already explained in my previous post.

  7. Eguzoro Chimamaka says:

    Good morning sir, please can you explain how
\frac{d}{d\bold T}\log \left( \det (\bold{T}^{-1}) \right) = -\bold{T}^{-\textrm{T}}

    • oafak says:

Found an easy way that does not involve a fourth-order tensor:
Use the chain rule, and note that we are differentiating scalars with respect to scalars except at the last step, where we use the result for the derivative of a determinant with respect to its tensor argument.

(1)    \begin{align*} \frac{d}{d\bold{T}}  \log \left( \det (\bold T^ {-\textrm{1}}) \right) & = \frac{d \log \det (\bold T^ {-\textrm{1}})}{d  \det (\bold T ^{-\textrm{1}})} \frac{d \det ( \bold T ^{-\textrm{1}})}{d \bold T}\\ & =\frac{1}{\det (\bold T ^{-\textrm{1}})}\frac{d \frac{1}{\det \bold T}}{d \det \bold T}\frac{d \det ( \bold T )}{d \bold T}\\ & =-\frac{\det \bold T}{(\det \bold T) ^\textrm {2}} \bold T ^\textrm c\\ & =-\bold T^{-\textrm{T}} \end{align*}

    • oafak says:

Use the chain rule, and note that we are differentiating scalars with respect to scalars except at the last step, where we use the result for the derivative of a determinant with respect to its tensor argument.

      (1)    \begin{align*} \frac{d}{d\bold{T}}  \log \left( \det (\bold T^ {-\textrm{1}}) \right) & = \frac{d \log \det (\bold T^ {-\textrm{1}})}{d  \det (\bold T ^{-\textrm{1}})} \frac{d \det ( \bold T ^{-\textrm{1}})}{d \bold T}\\ & =\frac{1}{\det (\bold T ^{-\textrm{1}})}\frac{d \frac{1}{\det \bold T}}{d \det \bold T}\frac{d \det ( \bold T )}{d \bold T}\\ & =-\frac{\det \bold T}{(\det \bold T) ^\textrm {2}} \bold T ^\textrm c\\ & =-\bold T^{-\textsf{T}} . \end{align*}

      The answer could be more obvious, and faster if we remember that

      (2)   \begin{align*} \frac{d}{d\bold{T}}  \log \left( \det (\bold T^{-\textrm{1}}) \right)=-\frac{d}{d\bold{T}}  \log \left( \det \bold T \right) . \end{align*}
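A finite-difference check of this result is easy to run; here is a Python/numpy sketch (added for illustration; the particular T and H are arbitrary assumed values) using the Gateaux differential of f(T) = log det(T^{-1}) in a direction H:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = A @ A.T + 3.0 * np.eye(3)          # symmetric positive definite, so det(T) > 0
H = rng.standard_normal((3, 3))        # an arbitrary direction tensor

f = lambda M: np.log(np.linalg.det(np.linalg.inv(M)))

a = 1e-6
numerical = (f(T + a * H) - f(T - a * H)) / (2 * a)   # Gateaux differential of f at T in direction H
analytic = np.sum(-np.linalg.inv(T).T * H)            # (df/dT) : H, with df/dT = -T^{-T}

print(np.isclose(numerical, analytic, rtol=1e-5, atol=1e-8))   # True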

      • Eguzoro Chimamaka says:

        Thank you very much sir. I now understand


  8. Majekodunmi Fidipe says:

    Good morning sir
From the constancy of the identity: orthogonal tensors, page 12.
From my earlier understanding, when there is a relationship between, for example, the tensors in  \bold{(AB)} and I find the transpose, is it supposed to be  \bold B^\textrm T \bold A or  \bold A^\textrm T \bold B^\textrm T or  \bold A^\textrm T \bold B? I didn't really get the step where -\bold Q \frac{d\bold Q ^\textrm T}{d t} changes to - (\frac{d\bold Q }{d t}\bold Q ^\textrm T )^\textrm T

    • oafak says:

      The way you transpose the composition of two tensors is the same irrespective of any relationship between them: (\bold{AB})^\textsf{T}=\bold{ B}^\textsf{T}\bold{ A}^\textsf{T}. The role the relationship plays here is that the product of an orthogonal tensor and its transpose is the Identity. This means that \bold{QQ}^\textsf T =\bold I. Note that the product is not transposed here; only the tensor on the right. Differentiating the product with respect to scalar t,

      (1)    \begin{align*} \frac{d}{dt}\left( \bold{QQ}^\textsf T \right) &=\frac{d\bold Q}{dt}\bold{Q}^\textsf T +\bold{Q}\frac{d\bold Q^\textsf T}{dt}\\ &=\frac{d\bold I}{d t}=\bold O. \end{align*}

      Rearranging, we have,

      (2)    \begin{align*} \frac{d\bold Q}{dt}\bold{Q}^\textsf T =-\bold{Q}\frac{d\bold Q^\textsf T}{dt} \end{align*}

The expression on the Right Hand Side is the same as -\left( \frac{d\bold Q}{dt}\bold{Q}^\textsf T \right)^\textsf T, and because it is the negative of the transpose of what we have on the LHS, we conclude that the product \frac{d\bold Q}{dt}\bold{Q}^\textsf T is skew.
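A two-line numerical check in Python/numpy (an added illustration with arbitrary assumed A and B) of the transposition rule quoted at the start of this reply:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^T = B^T A^T, irrespective of any special relationship between A and B
print(np.allclose((A @ B).T, B.T @ A.T))   # True
print(np.allclose((A @ B).T, A.T @ B.T))   # False in general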

  9. Babalola Hussein says:

    I replace ST with AB. Say we want to write (AB) as (BA)(i.e B before A), we get
    AB = ((B^T)(A^T))^T, where ^T denotes transpose. This is same as
    (AB)^T =((B^T)(A^T)), so when we have
    (A^T)B, and A(B^T)
    their equivalents are
    ((B^T)(A))^T and (B(A^T))^T respectively. You could confirm those by using values a1, a2….b1, b2
    So we have
    dQ/dt(Q^T) = ((Q dQ^T/dt)^T),
    Which is same as
    (dQ/dt(Q^T))^T. = -((Q dQ^T/dt)
    since if A^T = B, then B^T = A
    (with a minus because its skewed).Thank you.

    Babalola Hussein
    170404526

    • oafak says:

      Looks OK to me; I found it difficult to follow because you did not type it properly in Latex. But everything looks ok. It is not correct though to say that the tensor is “skewed”. It is correct to say that the tensor is skew. “skew” here is an adjective rather than a verb.

  10. George ssg says:

    QQ^T = I

I just understood this at last, after reading your slide.

  11. Lawanson saheed says:

    The first invariant was referred to as a linear operator because (d/dt)trA = tr(dA/dt)……..but in the case of the second invariant where:
    A°= inv-transp(A)det(A)
    (d/dt)tr(A°)
    =tr(d[inv-transp(A)det(A)]/dt)
    =tr(dA°/dt)
Why isn't it a linear operator/function?

    Key:
    Inv-transp -> inverse transpose
    A° -> cofactor of A

    • oafak says:

      Trace is a linear operator because of the following:
1. Trace of a sum equals the sum of traces:  \textrm{tr}(\bold{A}+\bold{B})=\textrm{tr}\bold{A}+\textrm{tr}\bold{B},
      2. Trace of a scalar multiple is the scalar multiple of the trace:  \textrm {tr} (\alpha \bold{A})=\alpha ~\textrm{tr}\bold{A}, and as a consequence of these two, we can show that,
      3. Trace of a weighted sum,   \textrm {tr} (\alpha \bold{A}+\beta \bold{B})=\alpha ~\textrm{tr}\bold{A}+\beta ~\textrm{tr}\bold{B}.
      Remember that the derivative operation is also linear such that the derivative of a weighted sum is the weighted sum of the derivatives. This means that,

      (1)    \begin{align*} \frac{d}{d\bold S} \textrm {tr} (\alpha \bold{A}+\beta \bold{B})&=\frac{d}{d\bold S}\left( \alpha ~\textrm{tr}\bold{A}+\beta ~\textrm{tr}\bold{B}\right)\\ &= \textrm {tr}\frac{d}{d\bold S} \left( \alpha \bold{A}+\beta \bold{B} \right) \end{align*}

The derivative is the limit of a weighted sum (a difference is the addition of a negative; division by a scalar is multiplication by the inverse of the scalar).
The second and third invariants do not obey these three rules and hence they are not linear.
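The linearity test is easy to run numerically. Here is a small Python/numpy sketch (added for illustration; A, B and the weights are arbitrary assumed values) showing that the trace passes it while the second invariant, tr(A^c) = ½[(tr A)² − tr(A²)], does not:

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
alpha, beta = 2.0, -0.5

tr = np.trace
I2 = lambda M: 0.5 * (tr(M) ** 2 - tr(M @ M))   # second invariant, the trace of the cofactor

# the trace of a weighted sum is the weighted sum of the traces: linear
print(np.isclose(tr(alpha * A + beta * B), alpha * tr(A) + beta * tr(B)))   # True

# the second invariant fails the same test: not a linear function of its argument
print(np.isclose(I2(alpha * A + beta * B), alpha * I2(A) + beta * I2(B)))   # False in general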

  12. NWANKITI UGOCHUKWU says:

    Good day sir, please I have a question.

We know that for angular velocity,
r(t) = R(t)r_0.
Why is it that when we differentiate r(t),
we do not get dR(t)r_0/dt + R dr_0/dt, which is the normal rule?
R dr_0/dt does not feature in the resulting answer; is it because the differentiation of the original position r_0 will give zero?

    • oafak says:

Vector  \bold{r}_0 is the original position of a point on the body. It would have been better if I had not attached the parentheses (t) to it, because it is independent of time. You are correct: its derivative is zero.
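A short numerical sketch in Python/numpy (added for illustration; R(t) is an assumed rotation and r0 an assumed constant vector) showing that, with r0 constant, the only surviving term is (dR/dt) r0:

import numpy as np

def R(t):
    # an assumed rotation about the third axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

r0 = np.array([1.0, 2.0, 0.5])            # original position: independent of time
r = lambda t: R(t) @ r0

t, h = 0.3, 1e-6
drdt = (r(t + h) - r(t - h)) / (2 * h)    # derivative of r(t) by central differences
dRdt = (R(t + h) - R(t - h)) / (2 * h)

print(np.allclose(drdt, dRdt @ r0, atol=1e-6))   # True: no R dr0/dt term, since dr0/dt = 0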

  13. Onikoyi Biliqis says:

Hello sir/ma. On page 17, I don't really get how the cofactor of A is equal to transpose of the inverse of \bold{A} multiplied by \det \bold{A} i.e  \bold{A}^\textrm{c}=\bold{A}^{-\textrm{T}}\textrm{det}~\bold{A}
    And from this expression can we say that \det \bold{A}=\bold{A}^{\textrm{T}}\bold{A}^\textrm{c} ~\textrm{i.e.}\det \bold{A} equals transpose of A times cofactor of A?
    Onikoyi Biliqis
    160404002

    • oafak says:

      Bilqis, I formally welcome you to this forum that you have finally, reluctantly joined. I will respond to your post in three steps:
      1. Don’t make me keep cleaning up after you. Here is the correct way to put your question:
Hello sir/ma. On page 17, I don't really get how the cofactor of A is equal to transpose of the inverse of \bold{A} multiplied by \det \bold{A} i.e \bold{A}^\textrm{c}=\bold{A}^{-\textrm{T}}\textrm{det}~\bold{A}
      And from this expression can we say that \det \bold{A}=\bold{A}^{\textrm{T}}\bold{A}^\textrm{c} \textrm{i.e.}\det \bold{A} equals transpose of A times cofactor of A?
      2. Your question arose out of two major neglects of duty: 1. You forgot your matrices. How do you compute the inverse of a matrix? Is it not the transpose of the cofactor divided by the determinant? Besides that equipollent result that we could have used, we went ahead to actually establish this result (S8.27-8.29) for tensors in Week 8! That should clear matters immediately!
3. Your last question should not arise. It is not efficient to first compute the cofactor of a tensor and then transpose it if all you want is the determinant. The answer to your question is “Yes”, but “Why”?
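A numerical check in Python/numpy (an added sketch with an arbitrary assumed A) of the relation A^c = A^{-T} det A and of the product A^T A^c:

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

cofA = np.linalg.det(A) * np.linalg.inv(A).T          # A^c = A^{-T} det A

# A^T A^c = (det A) I, which is why the determinant can be read off this product
print(np.allclose(A.T @ cofA, np.linalg.det(A) * np.eye(3)))   # True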

  14. Eguzoro Chimamaka 160407042 says:

    Sir we know that
    \textrm{div} \left( \bold{T}^\textrm{T}\bold{v} \right) = T_{ji} v_{j,k} \delta_{ik} + T_{j i,k} v_j \delta_{ik}

    I don’t understand how
     T_{ji} v_{j,k} \delta_{ik} + T_{ji,k} v_{j} \delta_{ik} =\left( \textrm{div}\bold{T}\right) . \bold{v} + \bold{T} : \grad\bold{v}

    • oafak says:

1. I begin by commending your effort to write legibly in Latex. But you will need to wrap each latex statement with the begin and end latex codes. Your statement will look like this (I deliberately included spaces to prevent the code from compiling):
      Sir we know that
[late x]\textrm{div} \left( \bold{T}^\textsf{T}\bold{v} \right) = T_{ji} v_{j},_{k} \delta_{ik} + T_{ji},_{k} v_j \delta_{ik}[/late x]

      I don’t understand how
      [late x] T_{ij} v_{j,k} \delta_{ik} + T_{ji,k} v_{j} \delta_{ik} =\left( \textrm{div}\bold{T}\right) . \bold{v} + \bold{T} : \grad\bold{v}[/late x]
      2. Please include the equations referenced to make my intervention faster. Here you are looking at Q&A3.44
      3. You omitted the important intermediate result of activating the substitution symbols to obtain,

      (1)    \begin{align*} T_{ji} v_{j},_{k} \delta_{ik} + T_{ji},_{k} v_{j} \delta_{ik}&= T_{ji} v_{j},_{i}  + T_{jk},_k v_{j} \\ &=\left( \textrm{div~}\bold{T}\right) \cdot \bold{v} + \bold{T} : \grad\bold{v} \end{align*}

The thing to do when such things are not immediately obvious is to go in the reverse direction and write  \left(\textrm{div~}\bold{T}\right) \cdot \bold{v} + \bold{T} : \grad\bold{v}~ in component form. You will see that it equals the expression to the left of it. After a while, you will get used to it. But you must try first:

(2)    \begin{align*} \bold{T}^\textsf{T}\textrm{grad} \bold{v}&=\left( T_{ij} \bold{e}_j\otimes\bold{e}_i\right)\left( v_\alpha ,_\beta  \bold{e}_\alpha\otimes\bold{e}_\beta\right)\\ &= T_{ij} v_\alpha ,_\beta \bold{e}_j\otimes\bold{e}_\beta \delta_{i\alpha}\\ &= T_{ij} v_i ,_\beta \bold{e}_j\otimes\bold{e}_\beta \\ \textrm{tr}\left(\bold{T}^\textsf{T}\textrm{grad} \bold{v}\right)&= T_{ij} v_i,_j \end{align*}

      Furthermore,

      (3)    \begin{align*} \textrm{div\,} \bold{T}&=T_{ij} ,_j\bold{e}_i\\ \left( \textrm{div\,} \bold{T}\right)\cdot\bold{v}&=T_{ij} ,_j\bold{e}_i\cdot v_k\bold{e}_k\\ &=T_{ij} ,_j v_k \delta_{ik} = T_{ij} ,_j v_i \end{align*}
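The identity div(T^T v) = (div T)·v + T : grad v can also be checked numerically. Below is a Python/numpy sketch (added for illustration; the fields T(x) and v(x) are made-up smooth examples) comparing a finite-difference evaluation of the left-hand side with the right-hand side at one point:

import numpy as np

def T(x):
    # a made-up smooth tensor field
    x1, x2, x3 = x
    return np.array([[x1 * x2, x3,      x1 ** 2],
                     [x2 ** 2, x1 + x3, x2 * x3],
                     [x3,      x1 * x3, x2     ]])

def v(x):
    # a made-up smooth vector field
    x1, x2, x3 = x
    return np.array([np.sin(x1), x2 * x3, x1 + x2 ** 2])

def partial(f, x, i, h=1e-5):
    # central-difference partial derivative of a field with respect to x_i
    e = np.zeros(3)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x = np.array([0.3, -0.7, 1.2])

# left-hand side: divergence of the vector field T^T v
lhs = sum(partial(lambda y: T(y).T @ v(y), x, i)[i] for i in range(3))

# right-hand side: (div T) . v + T : grad v, with div T = T_{ij,j} e_i and (grad v)_{ij} = v_{i,j}
divT  = np.array([sum(partial(T, x, j)[i, j] for j in range(3)) for i in range(3)])
gradv = np.column_stack([partial(v, x, j) for j in range(3)])
rhs = divT @ v(x) + np.sum(T(x) * gradv)

print(np.isclose(lhs, rhs, rtol=1e-5, atol=1e-6))   # True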


  16. Pelumi Balogun Johnson 160407028 says:

    What do you mean by large tensor object?

    • Lawal Azeez 170407520 says:

Large objects simply means that one is considering spaces (sets) of objects with more components. For example, scalars, like time, distance, speed, temperature, etc., are of the real set and have just one component. When it comes to vectors or second-order tensors, they have three components and nine components respectively. I hope my explanation is clear enough.

    • oafak says:

A tensor object is large compared to a scalar because each vector (a tensor of order 1) has three scalars. When you get to higher orders, you have 9, 27, 81, 243, … components for tensors of order 2, 3, 4, 5, … These look larger than one to me!
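In other words, a tensor of order n has 3^n components in three-dimensional space; a one-line Python check (added for illustration):

# components of a tensor of order n in three dimensions: 3**n
print([3 ** n for n in range(1, 6)])   # [3, 9, 27, 81, 243]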

  17. Eruse Oghenefega 160407016 says:

In the table on page 11 of chapter 3, what is the difference between the “contraction product” operation and the other operations there?

18. Good afternoon sir. On page 7 of the week 10 slide, you said the derivative of alphaT was the limit as h tends to 0 of [alpha(t+h)T(t+h) - alpha(t)T(t)]/t.
    Can it be written as the limit as h tends to 0 of
    [alphaT(t+h) - alphaT(t)]/t, since alpha is a scalar?

    • oafak says:

      First, there was an error in the denominator of the expression you are referring to. The error has been corrected, please avail yourself of the corrected version.
      Second, I don’t think it is correct to be doing a “you said” in an intellectual discussion. If you are not convinced that an expression is correct, do not accept it. Once you accept it as correct it is NO LONGER a “you said” issue. It becomes a demonstrated issue that we are all convinced to be a correct equation.
      Third. The meat of your question: The derivative of a tensor valued function of a scalar variable is DEFINED as

      (1)    \begin{align*} \frac{d}{dt}\bold{S}(t)=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h} \end{align*}

      Remember that the scalar multiplication of a tensor produces a tensor. If we have a scalar valued function \alpha(t)\in\mathbb{R} multiplying \bold{F}(t) such that \bold{S}(t)=\alpha(t)\bold{F}(t) we can find the derivative of this product by evaluating the function at the points t and point t+h. What is the product evaluated at t? It is \bold{S}(t)=\alpha(t)\bold{F}(t). And the product evaluated at t+h? It certainly is, \bold{S}(t+h)=\alpha(t+h)\bold{F}(t+h)!
The fact that the multiplier is a scalar does not remove the fact that IT IS a scalar-valued FUNCTION of a scalar argument! To obtain it, you need its scalar argument at the evaluation point! Consequently, the derivative we seek is:

      (2)    \begin{align*} \frac{d}{dt}\bold{S}(t)&=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h}\\ &=\lim_{h \to 0}\frac{\alpha(t+h)\bold{F}(t+h)-\alpha(t)\bold{F}(t)}{h} \end{align*}

      You cannot just pick an arbitrary value!
      The latex coded post that produced this is shown below. There was a deliberate error in the spelling of a keyword to prevent it from executing so you can learn how to properly post equations…

    • oafak says:

      First, there was an error in the denominator of the expression you are referring to. The error has been corrected, please avail yourself of the corrected version.
      Second, I don’t think it is correct to be doing a “you said” in an intellectual discussion. If you are not convinced that an expression is correct, do not accept it. Once you accept it as correct it is NO LONGER a “you said” issue. It becomes a demonstrated issue that we are all convinced to be a correct equation.
      Third. The meat of your question: The derivative of a tensor valued function of a scalar variable is DEFINED as
      [late x]

      (1)   \begin{align*} \frac{d}{dt}\bold{S}(t)=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h} \end{align*}

      [/late x]
      Remember that the scalar multiplication of a tensor produces a tensor. If we have a scalar valued function [late x]\alpha(t)\in\mathbb{R}[/late x] multiplying [late x]\bold{F}(t)[/late x] such that [late x]\bold{S}(t)=\alpha(t)\bold{F}(t)[/late x] we can find the derivative of this product by evaluating the function at the points t and point [late x]t+h[/late x]. What is the product evaluated at [late x]t[/late x]? It is [late x]\bold{S}(t)=\alpha(t)\bold{F}(t)[/late x]. And the product evaluated at [late x]t+h[/late x]? It certainly is, [late x]\bold{S}(t+h)=\alpha(t+h)\bold{F}(t+h)[/late x]!
The fact that the multiplier is a scalar does not remove the fact that IT IS a scalar-valued FUNCTION of a scalar argument! To obtain it, you need its scalar argument at the evaluation point! Consequently, the derivative we seek is:
      [late x]

      (2)   \begin{align*} \frac{d}{dt}\bold{S}(t)&=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h}\\ &=\lim_{h \to 0}\frac{\alpha(t+h)\bold{F}(t+h)-\alpha(t)\bold{F}(t)}{h} \end{align*}

      [/late x]
      You cannot just pick an arbitrary value!
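The same point can be checked numerically. Here is an added Python/numpy sketch (the particular alpha(t) and F(t) are assumed examples) comparing the limit definition applied to the product alpha(t)F(t) with the product rule:

import numpy as np

alpha = lambda t: np.exp(-t)                         # an assumed scalar-valued function of t
F = lambda t: np.array([[t, t ** 2, 1.0],
                        [np.sin(t), 0.0, t],
                        [1.0, t ** 3, np.cos(t)]])   # an assumed tensor-valued function of t
S = lambda t: alpha(t) * F(t)

t, h = 0.4, 1e-6
limit_defn = (S(t + h) - S(t)) / h                   # the definition, with a small h

dalpha = (alpha(t + h) - alpha(t)) / h               # the two factors, differentiated the same way
dF = (F(t + h) - F(t)) / h
product_rule = dalpha * F(t) + alpha(t) * dF

print(np.allclose(limit_defn, product_rule, atol=1e-4))   # True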

  19. Sarumi Abdulkarim 170407507 says:

Sir, you said the Gateaux differential is a one-dimensional calculation, but a tensor is three-dimensional. How does the calculation work, sir?

    • oafak says:

It works similarly to the way a directional derivative works: a surface is two dimensional; a directional derivative is a one-dimensional approximation computed by taking a particular direction as a cutting plane. This essentially turns its edge into a plane curve as shown in figure S11.5. Take another direction, and you have another plane curve. At any particular point, you have an infinite number of directional derivatives. Here is the formula for a directional derivative:

      (1)    \begin{align*} D_\bold{u}f(\bold{x})=\lim_{\alpha\to 0}\frac{f\left(\bold{x}+\alpha\bold{u}\right)-f\left(\bold{x}\right)}{\alpha} \end{align*}

      where \bold{x} is the position vector at the point in question, and \bold{u} is the unit vector specifying the direction. Similarly, each tensor differential is with respect to a particular tensor, \bold{h} of the same order as the tensor argument, \bold{x}. Instead of the unit vector telling us about a chosen direction, we are now taking the differential with respect to an arbitrary tensor in the Gateaux differential:

      (2)    \begin{align*} D\bold{F}(\bold{x,h})=\lim_{\alpha\to 0}\frac{\bold{F}\left(\bold{x}+\alpha\bold{h}\right)-\bold{F}\left(\bold{x}\right)}{\alpha} \end{align*}

      Both \bold{x} and \bold{h} are now tensors of any order and we no longer have the direction interpretation. The function value, \bold{F} may also be a tensor of any order and can differ in order from \bold{x} and \bold{h} even though the latter two must be of the same order. Yet the idea remains the same and we are able to compute differentials with respect to \bold{h} as we did with \bold{u} previously. This generalization of the directional derivative and its consequences were explained from slides S11.5 onwards.
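A small Python/numpy sketch (added for illustration; the scalar-valued f, and the particular X and H, are assumed examples) of the Gateaux differential computed from its definition and compared with a closed form:

import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3))       # the tensor argument
H = rng.standard_normal((3, 3))       # an arbitrary tensor playing the role of the "direction"

f = lambda M: np.trace(M @ M)         # a scalar-valued function of a tensor argument

a = 1e-6
gateaux = (f(X + a * H) - f(X)) / a   # the definition, with a small alpha
closed_form = 2.0 * np.trace(X @ H)   # for this particular f, DF(X, H) = 2 tr(XH)

print(np.isclose(gateaux, closed_form, rtol=1e-4, atol=1e-6))   # True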

  20. Dedeku oghenemairo zoe says:

    Dedeku Oghenemairo Zoe
    160404054
    Mechanical engineering
    As regards week 10 slides, where do we make an arbitrary tensor our argument when finding the differential of the magnitude of a tensor with respect to the tensor itself?

  21. ONIYIDE OLUWASEGUN AKHIGBE 160404052 says:

Good morning sir. Please, sir, in Q3.55 I don't understand how curl = , and how it was used in the question.

  22. Adikwu Sunday 160404053 says:

Good afternoon sir. Please, I want to know if it is possible to disrupt the properties of a vector, no matter the scaling factor used, as long as it remains within the vector space.

  23. Adikwu Sunday 160404053 says:

Also sir, in the chapter 1 lecture slides, it was stated that two vectors are equal if they have the same magnitude, move in the same direction, and also possess the same sense. Is it possible to work with such vectors, or say forces, and achieve a constant and balanced situation?

    • oafak says:

You have asked a very important question. Two vector forces are equal if they have the same magnitude, direction and sense. The turning effect of a vector is another vector, an axial vector, that depends on two things: a vector force, and the vector distance from the centre of action. It is the cross product of the two. Two vectors of equal magnitude, direction and sense will produce different moments if they are applied at different distances from the same centre. It is even possible for them to produce opposing moments. This is not caused by the vectors being different in magnitude, direction or sense; it is caused by the fact that the new vector, the moment, requires not only the vector force but also its line of action to produce its value. Mathematically, what you are dealing with is the following:
\bold{m}=\bold{r}\times \bold{f}. Two equal values of \bold{f} do not produce the same value of \bold{m}. The value of \bold{r} involved will influence that result. You do not conclude that the forces are not equal because they produce different moments!
A deeper way to look at what is happening here is to remember that there is a tensor, \bold{T}_1=\bold{r}_1\times. If you have two such tensors, say \bold{T}_1 and \bold{T}_2, and supply the same vector force to them, you are going to get two different vector results. These are the moments. They are caused by the difference in the values of the tensors transforming them. It is not a difference in the vectors themselves.
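A tiny numerical illustration in Python/numpy (added here; the force and the two positions are assumed values): one and the same force, applied at two different positions, gives opposing moments.

import numpy as np

f = np.array([0.0, 0.0, 10.0])     # one force, used twice: same magnitude, direction and sense
r1 = np.array([1.0, 0.0, 0.0])     # two different position vectors from the same centre
r2 = np.array([-1.0, 0.0, 0.0])

m1 = np.cross(r1, f)               # [ 0., -10., 0.]
m2 = np.cross(r2, f)               # [ 0.,  10., 0.]: equal forces, opposing moments

print(m1, m2)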
