We proceed to Differential and Integral Calculus with tensor-valued functions and tensor arguments. In the first set of slides, we extend our knowledge of scalar differentiation to larger objects. The examples we begin with are simple and easy to understand. The lecture concludes with the Gateaux extension of our elementary knowledge of differentials.

In one more week, we shall cover the book up to the point of Integral Theorems.

Good morning sir,

Is there a direct relationship between (ST) and (TS) as regards the product of S and T when both tensors are switched?

Because on the 12th page in the week 10 slide, I didn’t understand what happened under the word “Consequently”.

Sorry to interrupt, but basically i feel ts and st is related by adding the transpose to either t or s also not to forget a skew tensor equals the negative of its transpose

Too careless. Type with some care to capitals and proper notation

Good morning. It has been clearly stated in chapter 2 that ST can be changed to T(S transpose), so that was basically what happened there.

Where was that stated as a general rule?

If the two tensors are symmetric, then the reversed products give the same result. If they are skew, the reversed products are negatives. There are more relationships for special tensors. There are no generally applicable rules.
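A quick numerical sketch of the general situation (using numpy 3×3 matrices as stand-ins for second-order tensors; the matrices here are arbitrary, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two generic second-order tensors, represented as 3x3 matrices.
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

# In general, the products in the two orders differ...
print(np.allclose(S @ T, T @ S))          # False for generic S, T

# ...and the universally valid relationship goes through the transpose:
print(np.allclose((S @ T).T, T.T @ S.T))  # True: (ST)^T = T^T S^T
```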

Oh! Thanks Sir

From the constancy of the identity tensor: if I transpose AB, would I get A transpose B transpose, B transpose A transpose, or BA transpose?

Constancy of the identity: orthogonal tensors

Slide 12

You’ll get (AB)^T = (B^T)(A^T) if you use the knowledge of matrices manually.

A = (1 2;2 3) B = (1 4;2 1)

Use this
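The suggested matrices can be checked in a few lines of numpy (a sketch, not part of the slides):

```python
import numpy as np

A = np.array([[1, 2], [2, 3]])   # A = (1 2; 2 3)
B = np.array([[1, 4], [2, 1]])   # B = (1 4; 2 1)

# (AB)^T equals B^T A^T, not A^T B^T:
print((A @ B).T)                               # [[ 5  8] [ 6 11]]
print(np.array_equal((A @ B).T, B.T @ A.T))    # True
print(np.array_equal((A @ B).T, A.T @ B.T))    # False
```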

I hate to use matrices for this purpose. There is more flexibility using the tensors directly.

The rule for transposing products has nothing to do with the constancy of the identity tensor.

From the constancy of the identity: orthogonal tensor section page 12

If, for example, I want to take the transpose of AB, will I get A transpose B transpose, or B transpose A transpose, or B then A transpose?

Look at several answered questions in this thread on the transpose of a product

@ Uwadoka

Recall: the transpose of AB, i.e. (AB)^T, equals B^T A^T, i.e. the transpose of B multiplied by the transpose of A.

I hope this explains your question.

Let’s expect the lecturer to shed more light on it.

I just posted a comment but not sure I was successful because it says “awaiting moderation”

The transpose,

St is the same as t transpose s since t is switching sides and vice versa

Too careless with symbols and notations. Hussein, pay a little attention to the way things are typed in the slides and notes. You will need to look at these patiently. We use bold capitals for tensors, bold small letters for vectors and italic small letters for scalars. Virtually ALL you write completely ignores these simple conventions and makes even the smallest thing you write illegible.

Good morning sir,

I think I understand how -QdQ^T = dQQ^T. That is how the positioning of the Tensors changed. Let Q be T and detQ^T = S. Let T and S act on a vector v. That is TSv.

w = Sv

Tw = v

w = S(Tw)

What I am trying to say is

QdQ^T = dQ^TQ^-T = (dQQ^-1)^T

=(dQQ^T)^T, as Q is orthogonal.

This is my thought.

Chiedozie Chika-Umeh

160404046

Good evening sir,

Please, this comment wasn’t replied to. I want to know if I am on the right path.

You will have to reformat your work and make it legible for me to read. For example, I do not know what you mean by

\bold{-Q} d \bold{Q}^\textrm{T} = d\bold{QQ}^\textrm{T}

If you want to differentiate the product \bold{QQ}^\textrm{T}=\bold{I}, then you do it this way:

\frac{d}{dt}\left( \bold{QQ}^\textrm T \right) =\frac{d\bold Q}{dt}\bold{Q}^\textrm T +\bold{Q}\frac{d\bold Q^\textrm T}{dt}=\frac{d\bold I}{d t}=\bold O.

Rearranging, we have,

\frac{d\bold Q}{dt}\bold{Q}^\textrm T =-\bold{Q}\frac{d\bold Q^\textrm T}{dt}

That is, \frac{d\bold Q}{dt}\bold{Q}^\textrm T is skew, as \bold Q is orthogonal.

Your typing is quite confusing. If you really want assistance, you may have to scan your work and send it to me because I really do not understand what you are getting at. If the editing to LaTeX is correct, I will say this: your initial premise is in error. The transpose of the product, \left( d\bold Q \, \bold{Q}^{-1} \right)^\textrm{T}, is not d\bold{Q}^\textrm{T}\bold{Q}^{-\textrm{T}}.

The correct thing is,

\left( d\bold Q \, \bold{Q}^{-1} \right)^\textrm{T} = \bold{Q}^{-\textrm{T}}\, d\bold{Q}^\textrm{T} .

The issue you appear to be trying to resolve is already explained in my previous post.

Good morning sir, please can you explain how

\frac{d}{d\bold{T}} \log \left( \det (\bold T^ {-\textrm{1}}) \right) = -\bold T^{-\textrm{T}}

Use the chain rule and note we are differentiating scalars to scalars except at the last derivative, where we are using the result of the derivative of a determinant with respect to its tensor.

\begin{align}

\frac{d}{d\bold{T}} \log \left( \det (\bold T^ {-\textrm{1}}) \right) & = \frac{d \log \det (\bold T^ {-\textrm{1}})}{d \det (\bold T ^{-\textrm{1}})} \frac{d \det ( \bold T ^{-\textrm{1}})}{d \bold T}\\

& =\frac{1}{\det (\bold T ^{-\textrm{1}})}\frac{d \frac{1}{\det \bold T}}{d \det \bold T}\frac{d \det ( \bold T )}{d \bold T}\\

& =-\frac{\det \bold T}{(\det \bold T) ^\textrm {2}} \bold T ^\textrm c\\

& =-\bold T^{-\textrm{T}} .

\end{align}

The answer could be more obvious, and faster, if we remember that

\frac{d}{d\bold{T}} \log \left( \det (\bold T^{-\textrm{1}}) \right)=-\frac{d}{d\bold{T}} \log \left( \det \bold T \right) .

Thank you very much sir. I now understand.
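The closed form can also be spot-checked numerically by component-wise central differences (a sketch; the test matrix is an arbitrary well-conditioned choice):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 4.0 * np.eye(3) + 0.5 * rng.standard_normal((3, 3))  # well-conditioned, det > 0

def f(X):
    # f(T) = log det(T^{-1})
    return np.log(np.linalg.det(np.linalg.inv(X)))

# Central finite differences for each component df/dT_ij
h = 1e-6
grad = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3))
        E[i, j] = h
        grad[i, j] = (f(T + E) - f(T - E)) / (2.0 * h)

# Compare with the closed form -T^{-T}
print(np.allclose(grad, -np.linalg.inv(T).T, atol=1e-5))  # True
```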

Good morning sir

From the constancy of the identity : orthogonal tensors page 12

Due to my former understanding, when there’s a relationship between, for example, \bold S and \bold T: when I find the transpose of \bold{ST}, is it supposed to be \bold{S}^\textrm{T}\bold{T}^\textrm{T} or \bold{T}^\textrm{T}\bold{S}^\textrm{T}? Cause I didn’t really get the section where -\bold{Q}\, d\bold{Q}^\textrm{T} changes to d\bold{Q}\,\bold{Q}^\textrm{T}.

The way you transpose the composition of two tensors is the same irrespective of any relationship between them: \left(\bold{ST}\right)^\textrm{T} = \bold{T}^\textrm{T}\bold{S}^\textrm{T}. The role the relationship plays here is that the product of an orthogonal tensor and its transpose is the identity. This means that \bold{Q}\bold{Q}^\textrm{T} = \bold{I}. Note that the product is not transposed here; only the tensor on the right. Differentiating the product with respect to scalar t,

\frac{d\bold Q}{dt}\bold{Q}^\textrm T +\bold{Q}\frac{d\bold Q^\textrm T}{dt}=\bold O.

Rearranging, we have,

\frac{d\bold Q}{dt}\bold{Q}^\textrm T =-\bold{Q}\frac{d\bold Q^\textrm T}{dt}

The expression on the Right Hand Side is the same as -\left( \frac{d\bold Q}{dt}\bold{Q}^\textrm T \right)^\textrm{T}, and because it is the negative of the transpose of what we have on the LHS, we conclude that the product \frac{d\bold Q}{dt}\bold{Q}^\textrm T is skew.
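The skewness can be seen concretely with any one-parameter family of orthogonal tensors; a sketch using a rotation about a fixed axis (my arbitrary choice) and a finite-difference derivative:

```python
import numpy as np

def Q(t):
    # A one-parameter family of rotations about the z-axis: orthogonal for every t.
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

t, h = 0.7, 1e-6
dQ = (Q(t + h) - Q(t - h)) / (2.0 * h)   # finite-difference dQ/dt
W = dQ @ Q(t).T

# W^T = -W : the product dQ/dt Q^T is skew.
print(np.allclose(W.T, -W, atol=1e-8))   # True
```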

I replace ST with AB. Say we want to write (AB) as (BA)(i.e B before A), we get

AB = ((B^T)(A^T))^T, where ^T denotes transpose. This is same as

(AB)^T =((B^T)(A^T)), so when we have

(A^T)B, and A(B^T)

their equivalents are

((B^T)(A))^T and (B(A^T))^T respectively. You could confirm those by using values a1, a2….b1, b2
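A quick numerical spot-check of these equivalences, using random matrices in place of specific component values:

```python
import numpy as np

# Random stand-ins for the suggested component values.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.allclose(A.T @ B, (B.T @ A).T))  # True: A^T B = (B^T A)^T
print(np.allclose(A @ B.T, (B @ A.T).T))  # True: A B^T = (B A^T)^T
```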

So we have

dQ/dt(Q^T) = ((Q dQ^T/dt)^T),

Which is same as

(dQ/dt(Q^T))^T = -(Q dQ^T/dt),

since if A^T = B, then B^T = A

(with a minus because it’s skewed). Thank you.

Babalola Hussein

170404526

Looks OK to me; I found it difficult to follow because you did not type it properly in Latex. But everything looks ok. It is not correct though to say that the tensor is “skewed”. It is correct to say that the tensor is skew. “skew” here is an adjective rather than a verb.

QQ^T = I

I just understood this at last after reading your slide.

Good

The first invariant was referred to as a linear operator because (d/dt)trA = tr(dA/dt)……..but in the case of the second invariant where:

A°= inv-transp(A)det(A)

(d/dt)tr(A°)

=tr(d[inv-transp(A)det(A)]/dt)

=tr(dA°/dt)

Why isn’t it a linear operator/function?

Key:

Inv-transp -> inverse transpose

A° -> cofactor of A

Trace is a linear operator because of the following:

1. Trace of a sum equals the sum of traces: \textrm{tr}(\bold A + \bold B) = \textrm{tr}\,\bold A + \textrm{tr}\,\bold B,

2. Trace of a scalar multiple is the scalar multiple of the trace: \textrm{tr}(\alpha \bold A) = \alpha\,\textrm{tr}\,\bold A, and as a consequence of these two, we can show that,

3. Trace of a weighted sum, \textrm{tr}(\alpha \bold A + \beta \bold B) = \alpha\,\textrm{tr}\,\bold A + \beta\,\textrm{tr}\,\bold B.

Remember that the derivative operation is also linear such that the derivative of a weighted sum is the weighted sum of the derivatives. This means that,

\frac{d}{dt}\textrm{tr}\,\bold A = \textrm{tr}\,\frac{d\bold A}{dt} .

The derivative is the limit of a weighted sum (difference is the addition of a negative; division by a scalar is the multiplication by the inverse of the scalar).

The second and third invariants do not obey these three rules and hence, they are not linear.
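A numerical sketch of this contrast (random matrices, standard formulas for the invariants):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
a, b = 2.0, -3.0

# First invariant (trace) is linear in its argument:
print(np.isclose(np.trace(a * A + b * B), a * np.trace(A) + b * np.trace(B)))  # True

# Second invariant I2(A) = ((tr A)^2 - tr(A^2)) / 2 is quadratic, hence not linear:
I2 = lambda X: 0.5 * (np.trace(X) ** 2 - np.trace(X @ X))
print(np.isclose(I2(a * A + b * B), a * I2(A) + b * I2(B)))  # False

# Third invariant (determinant) is cubic, hence not linear either:
print(np.isclose(np.linalg.det(a * A + b * B),
                 a * np.linalg.det(A) + b * np.linalg.det(B)))  # False
```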

Good day sir, please I have a question.

We know that for angular velocity,

r(t) =R(t)r○

Why is it that when we differentiate r(t),

We do not get dR(t)/dt r○ + R dr○/dt, which is the normal rule.

R dr○/dt does not feature in the resulting answer; is it because the differentiation of the original position r○ will give zero?

Vector r○ is the original position of a point on the body. It would have been better if I did not put the parenthesis (t) in front of it because it is independent of time. You are correct, its derivative is zero.
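This is easy to confirm numerically; a sketch with an arbitrary rotation family and reference position:

```python
import numpy as np

def R(t):
    # Rotation about the z-axis.
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

r0 = np.array([1.0, 2.0, 3.0])   # reference position: independent of time
t, h = 0.4, 1e-6

# d/dt [R(t) r0] by central differences ...
dr = (R(t + h) @ r0 - R(t - h) @ r0) / (2.0 * h)

# ... matches (dR/dt) r0 alone, since dr0/dt = 0:
dR = (R(t + h) - R(t - h)) / (2.0 * h)
print(np.allclose(dr, dR @ r0, atol=1e-6))  # True
```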

Hello sir/ma. On page 17, I dont really get how the cofactor of A is equal to the transpose of the inverse of A multiplied by det A, i.e. A^c = A^(-T) det A.

And from this expression can we say that det A = A^T A^c, i.e. det A equals transpose of A times cofactor of A?

Onikoyi Biliqis

160404002

Bilqis, I formally welcome you to this forum that you have finally, reluctantly joined. I will respond to your post in three steps:

1. Don’t make me keep cleaning up after you. Here is the correct way to put your question:

Hello sir/ma. On page 17, I dont really get how the cofactor of A is equal to transpose of the inverse of \bold{A} multiplied by \det \bold{A} i.e \bold{A}^\textrm{c}=\bold{A}^{-\textrm{T}}\textrm{det}~\bold{A}

And from this expression can we say that \det \bold{A}=\bold{A}^{\textrm{T}}\bold{A}^\textrm{c} \textrm{i.e.}\det \bold{A} equals transpose of A times cofactor of A?

2. Your question arose out of two major neglects of duty: 1. You forgot your matrices. How do you compute the inverse of a matrix? Is it not the transpose of the cofactor divided by the determinant? Besides that equipollent result that we could have used, we went ahead to actually establish this result (S8.27-8.29) for tensors in Week 8! That should clear matters immediately!

3. Your last question should not arise. It is not efficient to first compute the cofactor of a tensor, transpose it, if all you want is the determinant. The answer to your question is “Yes” but “Why”?
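A numerical check of the relation in question (a sketch; note that the product A^T A^c works out to det(A) times the identity tensor, so the determinant appears on the diagonal):

```python
import numpy as np

rng = np.random.default_rng(4)
A = 2.0 * np.eye(3) + rng.standard_normal((3, 3))  # shifted to keep it well-conditioned
detA = np.linalg.det(A)

# Cofactor from the relation on the slides: A^c = A^{-T} det A
Ac = np.linalg.inv(A).T * detA

# A^T A^c = (det A) I
print(np.allclose(A.T @ Ac, detA * np.eye(3)))  # True
```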

Sir we know that

\textrm{div} \left( \bold{T}^\textsf{T}\bold{v} \right) = T_{ij} v_{j,k} \delta_{ik} + T_{ji,k} v_j \delta_{ik}

I don’t understand how

T_{ij} v_{j,k} \delta_{ik} + T_{ji,k} v_{j} \delta_{ik} =\left( \textrm{div}\,\bold{T}\right) \cdot \bold{v} + \bold{T} : \textrm{grad}\,\bold{v}

1. I begin by commending your effort to write legibly in LaTeX. But you will need to wrap each LaTeX statement with the begin and end latex codes. Your statement will look like this (I deliberately included spaces to prevent the code from compiling):

Sir we know that

[late x]\textrm{div} \left( \bold{T}^\textsf{T}\bold{v} \right) = T_{ij} v_{j},_{k} \delta_{ik} + T_{j i},_{_k} v_j \delta_{ik}[/late x]

I don’t understand how

[late x] T_{ij} v_{j,k} \delta_{ik} + T_{ji,k} v_{j} \delta_{ik} =\left( \textrm{div}\bold{T}\right) . \bold{v} + \bold{T} : \grad\bold{v}[/late x]

2. Please include the equations referenced to make my intervention faster. Here you are looking at Q&A3.44.

3. You omitted the important intermediate result of activating the substitution symbols to obtain,

T_{ij} v_{j,i} + T_{ji,i} v_j

The thing to do when such things are not immediately obvious is to go in the reverse direction and write in component form. You will see that it equals the expression left of it. After a while, you will get used to it. But you must try first:

\left( \textrm{div}\,\bold{T} \right) \cdot \bold{v} = T_{ji,i} v_j

Furthermore,

\bold{T} : \textrm{grad}\,\bold{v} = T_{ij} v_{j,i}

Okay sir. Thank you


What do you mean by large tensor object?

Large objects simply means one is considering spaces (sets) with many components. For example, scalars are from the real set with just one component, like time, distance, speed, temperature, etc.; all these are from the real set and have just one component. When it comes to vectors or second-order tensors, they have three components and nine components respectively. I hope my explanation is clear enough.

Tell him, Lawal!

A tensor object is large compared to a scalar because each vector (a tensor of order 1) has three scalars. When you get to higher orders, you have 9, 27, 81, 243, … scalars for tensors of order 2, 3, 4, 5, … These look larger than one to me!

In the table on page 11 of chapter 3, what is the difference between the “contraction product” operation and the other operations there?

Good afternoon sir. On page 7 of the week 10 slide, you said the derivative of alpha T was the limit as h tends to 0 of [alpha(t+h)T(t+h) - alpha(t)T(t)]/t.

Can it be written as the limit as h tends to 0 of

[alpha T(t+h) - alpha T(t)]/t, since alpha is a scalar?

First, there was an error in the denominator of the expression you are referring to. The error has been corrected, please avail yourself of the corrected version.

Second, I don’t think it is correct to be doing a “you said” in an intellectual discussion. If you are not convinced that an expression is correct, do not accept it. Once you accept it as correct it is NO LONGER a “you said” issue. It becomes a demonstrated issue that we are all convinced to be a correct equation.

Third. The meat of your question: The derivative of a tensor valued function of a scalar variable is DEFINED as

\frac{d}{dt}\bold{S}(t)=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h}

Remember that the scalar multiplication of a tensor produces a tensor. If we have a scalar valued function \alpha(t)\in\mathbb{R} multiplying \bold{F}(t) such that \bold{S}(t)=\alpha(t)\bold{F}(t) we can find the derivative of this product by evaluating the function at the points t and t+h. What is the product evaluated at t? It is \bold{S}(t)=\alpha(t)\bold{F}(t). And the product evaluated at t+h? It certainly is, \bold{S}(t+h)=\alpha(t+h)\bold{F}(t+h)!

The fact that the multiplier is a scalar does not remove the fact that IT IS a scalar-valued FUNCTION of a scalar argument! To obtain it, you need its scalar argument at the evaluation point! Consequently, the derivative we seek is:

\frac{d}{dt}\bold{S}(t)=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h}=\lim_{h \to 0}\frac{\alpha(t+h)\bold{F}(t+h)-\alpha(t)\bold{F}(t)}{h}

You cannot just pick an arbitrary value!

The latex coded post that produced this is shown below. There was a deliberate error in the spelling of a keyword to prevent it from executing so you can learn how to properly post equations…

Okay Sir, Thank you very much sir

First, there was an error in the denominator of the expression you are referring to. The error has been corrected, please avail yourself of the corrected version.

Second, I don’t think it is correct to be doing a “you said” in an intellectual discussion. If you are not convinced that an expression is correct, do not accept it. Once you accept it as correct it is NO LONGER a “you said” issue. It becomes a demonstrated issue that we are all convinced to be a correct equation.

Third. The meat of your question: The derivative of a tensor valued function of a scalar variable is DEFINED as

[late x]

\begin{align}

\frac{d}{dt}\bold{S}(t)=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h}

\end{align}

[/late x]

Remember that the scalar multiplication of a tensor produces a tensor. If we have a scalar valued function [late x]\alpha(t)\in\mathbb{R}[/late x] multiplying [late x]\bold{F}(t)[/late x] such that [late x]\bold{S}(t)=\alpha(t)\bold{F}(t)[/late x] we can find the derivative of this product by evaluating the function at the points and point [late x]t+h[/late x]. What is the product evaluated at [late x]t[/late x]? It is [late x]\bold{S}(t)=\alpha(t)\bold{F}(t)[/late x]. And the product evaluated at [late x]t+h[/late x]? It certainly is, [late x]\bold{S}(t+h)=\alpha(t+h)\bold{F}(t+h)[/late x]!

The fact that multiplier is a scalar does not remove the fact that IT IS a scalar-valued FUNCTION of a scalar argument! To obtain it, you need its scalar argument at the evaluation point! Consequently, the derivative we seek is:

[late x]

\begin{align}

\frac{d}{dt}\bold{S}(t)&=\lim_{h \to 0}\frac{\bold{S}(t+h)-\bold{S}(t)}{h}\\

&=\lim_{h \to 0}\frac{\alpha(t+h)\bold{F}(t+h)-\alpha(t)\bold{F}(t)}{h}

\end{align}

[/late x]

You cannot just pick an arbitrary value!
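The point about evaluating both factors at t + h can be illustrated numerically; a sketch with an arbitrary scalar function and tensor function of my own choosing:

```python
import numpy as np

alpha = lambda t: np.sin(t) + 2.0              # scalar-valued function of t
F = lambda t: np.array([[t, t ** 2],
                        [np.cos(t), 1.0]])     # tensor-valued function of t
S = lambda t: alpha(t) * F(t)                  # the product S(t) = alpha(t) F(t)

t0, h = 0.5, 1e-6

# The defining limit, approximated with small h: both factors at t0 + h.
dS = (S(t0 + h) - S(t0 - h)) / (2.0 * h)

# It agrees with the product rule alpha' F + alpha F':
da = (alpha(t0 + h) - alpha(t0 - h)) / (2.0 * h)
dF = (F(t0 + h) - F(t0 - h)) / (2.0 * h)
print(np.allclose(dS, da * F(t0) + alpha(t0) * dF, atol=1e-6))  # True
```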

Sir, you said the Gateaux differential is a one-dimensional calculation, but a tensor is 3-dimensional. How does the calculation work, sir?

It works similar to the way a directional derivative works: a surface is two dimensional; a directional derivative is a one dimensional approximation computed by taking a particular direction as a cutting plane. This essentially turns its edge into a plane curve as shown in figure S11.5. Take another direction, you have another plane curve. At any particular point, you have an infinite number of directional derivatives. Here is the formula for a directional derivative:

D_{\bold u} f(\bold x) = \lim_{h \to 0}\frac{f(\bold x + h \bold u) - f(\bold x)}{h}

where \bold x is the position vector at the point in question, and \bold u is the unit vector specifying the direction. Similarly, each tensor differential is with respect to a particular tensor, of the same order as the tensor argument, \bold A. Instead of the unit vector telling us about a chosen direction, we are now taking the differential with respect to an arbitrary tensor \bold B in the Gateaux differential:

Df(\bold A, \bold B) = \lim_{h \to 0}\frac{f(\bold A + h \bold B) - f(\bold A)}{h} = \left. \frac{d}{dh} f(\bold A + h \bold B) \right|_{h=0}

Both \bold A and \bold B are now tensors of any order and we no longer have the direction interpretation. The function value, f(\bold A), may also be a tensor of any order and can differ in order from \bold A and \bold B even though the latter two must be of the same order. Yet the idea remains the same and we are able to compute differentials with respect to \bold A as we did with \bold x previously. This generalization of the directional derivative and its consequences were explained from slides S11.5 onwards.
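A concrete instance, computable numerically: take f = det (a scalar-valued function of a second-order tensor) and an arbitrary "direction" tensor B; the Gateaux differential of the determinant has the known closed form det(A) tr(A^{-1} B) (Jacobi's formula), used here purely as a check:

```python
import numpy as np

rng = np.random.default_rng(5)
A = 3.0 * np.eye(3) + rng.standard_normal((3, 3))  # invertible tensor argument
B = rng.standard_normal((3, 3))                    # arbitrary tensor "direction"

f = np.linalg.det

# Gateaux differential: Df(A, B) = d/dh f(A + h B) at h = 0
h = 1e-6
Df = (f(A + h * B) - f(A - h * B)) / (2.0 * h)

# Known closed form for the determinant: det(A) tr(A^{-1} B)
print(np.isclose(Df, f(A) * np.trace(np.linalg.inv(A) @ B), rtol=1e-4))  # True
```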

Thank you Sir

Dedeku Oghenemairo Zoe

160404054

Mechanical engineering

As regards week 10 slides, where do we make an arbitrary tensor our argument when finding the differential of the magnitude of a tensor with respect to the tensor itself?

What specific equation are you finding difficulty with?