

  1. Import .stp file into Abaqus and create parts. https://www.intrinsys.com/blog/2016/importing-parts-into-abaqus-cae

An overview of Paclets: https://www.wolframcloud.com/objects/b3m2a1/tutorial/package-usage-and-development/basics/packages-in-mathematica.html

  1. Create a new repository in your GitHub account.
  2. Clone your new repository https://services.github.com/on-demand/github-cli/clone-repo-cli
  3. Configure the repository https://www.wolframcloud.com/objects/b3m2a1/home/creating-a-mathematica-package-repository.html#main-content

Field description for acceleration and orientation prediction of a rigid body

Outline:

  1. Since we have no a priori knowledge of the location of the injury, the question arises of where a sensor should be mounted. This motivates describing the full acceleration field. However, no established method for determining the field associated with brain injury has been reported in the literature.
  2. Existing approaches do not account for finite rotations, nor for the non-uniqueness that arises when solving for the acceleration of the head.

Introduction

As mild traumatic brain injury (MTBI) has become a major concern of clinicians dealing with sports-related brain injuries, identifying the specific cause of MTBI has become one of the most important topics of contemporary research. However, identifying this cause has been very challenging due to the lack of a reasonable animal or numerical model. Acceleration of the head under impact has been recognised as the most likely cause of MTBI. Hence, assessment of the acceleration magnitude and impact location is extremely useful for finding a relationship between head acceleration and MTBI.

Rigid Body Dynamics

Consider a rigid body, free to move in \(\mathbb{R}^3\). A reference configuration \(\mathbb{B}_0\) of the body is the closure of an open set in \(\mathbb{R}^3\) at time \(t = 0\). Let \(\mathbb{E}^3\) be the three-dimensional Euclidean vector space corresponding to the point space \(\mathbb{R}^3\), such that each point \(\mathcal{P}\in\mathbb{B}_0\subset\mathbb{E}^3\) has associated with it a vector \(\boldsymbol{X}\in\mathbb{E}^3\) with \(\mathcal{O}+\boldsymbol{X} = \mathcal{P}\), where \(\mathcal{O}\) is a fixed origin. A configuration of the body \(\mathbb{B}_0\) is a time-dependent family of mappings, \(\phi_t : \mathbb{B}_0\rightarrow\mathbb{E}^3\). The set of all configurations of \(\mathbb{B}_0\) is denoted as \(\mathcal{C}\). The motion of a rigid solid is defined through the mapping characterized as,

\[\begin{equation} \label{MappingRefToDef} \boldsymbol{x} = \boldsymbol{Q}(t)\boldsymbol{X} +\boldsymbol{c}(t) \end{equation}\]

where \(\boldsymbol{x}\in \mathbb{E}^3\), \(\boldsymbol{Q}:\mathbb{E}^3\rightarrow\mathbb{E}^3\) is a proper orthogonal tensor, and \(\boldsymbol{c}\) denotes an arbitrary vector. Since a configuration of the body \(\mathbb{B}_0\) is a time-dependent family of mappings \(\phi_t : \mathbb{B}_0\rightarrow\mathbb{E}^3\), if \(\boldsymbol{X}\in \mathbb{B}_0\) and \(\boldsymbol{x}\in \mathbb{E}^3\) then \(\phi_t(\boldsymbol{X})=\boldsymbol{x}\) or \({\phi_t}^{-1}(\boldsymbol{x}) = \boldsymbol{X}\). From (Eq. (\ref{MappingRefToDef})), the velocity \(\boldsymbol{v}\) and the acceleration \(\boldsymbol{a}\) can be computed as,

\[\begin{equation} \label{velocity} \boldsymbol{v} = \dot{\boldsymbol{Q}}(t)\boldsymbol{X} + \dot{\boldsymbol{c}}(t) \;\;\;\;\;\; (\text{since } \dot{\boldsymbol{X}} = \boldsymbol{0}) \end{equation}\] \[\begin{equation} \label{acceleration} \boldsymbol{a} = \ddot{\boldsymbol{Q}}(t)\boldsymbol{X} + \ddot{\boldsymbol{c}}(t) \end{equation}\]
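
As a quick sanity check of (Eq. (\ref{velocity})) and (Eq. (\ref{acceleration})), here is a minimal numerical sketch; the rotation \(\boldsymbol{Q}(t)\), the translation \(\boldsymbol{c}(t)\), the material point and the step size are all illustrative assumptions, not taken from the text.

```python
# Finite-difference check that v = Qdot X + cdot and a = Qddot X + cddot for the
# rigid motion x(t) = Q(t) X + c(t). All specific choices below are illustrative.
import numpy as np

def Q(t):
    # Rotation about the z-axis with a time-varying angle (illustrative choice).
    th = 0.3 * t + 0.1 * t**2
    co, si = np.cos(th), np.sin(th)
    return np.array([[co, -si, 0.0], [si, co, 0.0], [0.0, 0.0, 1.0]])

def c_vec(t):
    # An arbitrary smooth translation (illustrative choice).
    return np.array([t, 0.5 * t**2, np.sin(t)])

X = np.array([0.2, -0.1, 0.4])  # material point in the reference configuration
t, h = 1.0, 1e-5
x = lambda s: Q(s) @ X + c_vec(s)

v_fd = (x(t + h) - x(t - h)) / (2 * h)                 # central differences
a_fd = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
Qdot = (Q(t + h) - Q(t - h)) / (2 * h)
Qddot = (Q(t + h) - 2 * Q(t) + Q(t - h)) / h**2
cdot = (c_vec(t + h) - c_vec(t - h)) / (2 * h)
cddot = (c_vec(t + h) - 2 * c_vec(t) + c_vec(t - h)) / h**2

print(np.allclose(v_fd, Qdot @ X + cdot, atol=1e-6))   # True
print(np.allclose(a_fd, Qddot @ X + cddot, atol=1e-3)) # True
```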

Premultiplying both sides of (Eq. (\ref{acceleration})) by \(\left.\boldsymbol{Q}(t)\right.^{\sf T}\) we have,

\[\begin{equation} \label{StarAcceleration} \left.\boldsymbol{Q}(t)\right.^{\sf T}\boldsymbol{a} = \left.\boldsymbol{Q}(t)\right.^{\sf T}\ddot{\boldsymbol{Q}}(t)\boldsymbol{X} + \left.\boldsymbol{Q}(t)\right.^{\sf T}\ddot{\boldsymbol{c}}(t) \end{equation}\]

Introducing \(\left.\boldsymbol{a}\right.^{\star}:= \left.\boldsymbol{Q}(t)\right.^{\sf T}\boldsymbol{a}\), \(\boldsymbol{q}(t):=\left.\boldsymbol{Q}(t)\right.^{\sf T}\ddot{\boldsymbol{c}}(t)\) and \(\boldsymbol{P}(t):= \left.\boldsymbol{Q}(t)\right.^{\sf T}\ddot{\boldsymbol{Q}}(t)\), we can rewrite (Eq. (\ref{StarAcceleration})) as

\[\begin{equation} \label{ModStarAcceleration} \boldsymbol{a}^{\star} = \boldsymbol{P}(t)\boldsymbol{X} +\boldsymbol{q}(t) \end{equation}\]

From the definition of \(\boldsymbol{P}(t)\), it follows that \(\boldsymbol{Q}(t)\) is determined/characterized by the equation

\[\begin{equation} \label{QGoverning} \ddot{\boldsymbol{Q}}(t)=\boldsymbol{Q}(t) \boldsymbol{P}(t) \end{equation}\]

(Eq. (\ref{QGoverning})) is a second order, linear, homogeneous, tensor differential equation with time-varying coefficients. Given \(\boldsymbol{P}(t)\), it is in general not difficult to numerically solve (Eq. (\ref{QGoverning})) for \(\boldsymbol{Q}(t)\). In our particular case, however, the solution of (Eq. (\ref{QGoverning})) is complicated by the requirement that \(\boldsymbol{Q}(t)\) be a proper orthogonal tensor. That is, in addition to (Eq. (\ref{QGoverning})), the tensor \(\boldsymbol{Q}(t)\) also needs to satisfy the constraint

\[\begin{equation} \label{Qconstraint} \boldsymbol{Q}^{\textsf{T}}(t)\boldsymbol{Q}(t)=\boldsymbol{I} \end{equation}\]

where \(\boldsymbol{I}\) is the identity tensor. Therefore, instead of directly trying to solve (Eq. (\ref{QGoverning})) numerically, we pursue a different strategy.

Differentiating (Eq. (\ref{Qconstraint})) with respect to time we get

\[\begin{equation} \label{QConstraintDiff} \dot{\boldsymbol{Q}}^{\textsf{T}}(t)\boldsymbol{Q}(t)+\boldsymbol{Q}^{\textsf{T}}(t)\dot{\boldsymbol{Q}}(t)=\boldsymbol{0} \end{equation}\]

Differentiating (Eq. (\ref{Qconstraint})) twice with respect to time we get

\[\begin{equation} \label{QConstraintSecondDiff} \ddot{\boldsymbol{Q}}^{\textsf{T}}(t)\boldsymbol{Q}(t)+ \dot{\boldsymbol{Q}}^{\textsf{T}}(t)\dot{\boldsymbol{Q}}(t)+ \dot{\boldsymbol{Q}}^{\textsf{T}}(t)\dot{\boldsymbol{Q}}(t)+ \boldsymbol{Q}^{\textsf{T}}(t)\ddot{\boldsymbol{Q}}(t)=\boldsymbol{0} \end{equation}\]

(Eq. (\ref{QConstraintSecondDiff})) can be re-arranged as

\[\begin{equation} \label{ModQConstraintSecondDiff} \boldsymbol{Q}^{\textsf{T}}(t)\ddot{\boldsymbol{Q}}(t)+ \ddot{\boldsymbol{Q}}^{\textsf{T}}(t)\boldsymbol{Q}(t) =-2\dot{\boldsymbol{Q}}^{\textsf{T}}(t)\dot{\boldsymbol{Q}}(t). \end{equation}\]

Noting from the definition of \(\boldsymbol{P}(t)\) that the first term on the left hand side of (Eq. (\ref{ModQConstraintSecondDiff})), \(\boldsymbol{Q}^{\textsf{T}}(t)\ddot{\boldsymbol{Q}}(t)\), is in fact \(\boldsymbol{P}(t)\), and that the second term, \(\ddot{\boldsymbol{Q}}^{\textsf{T}}(t)\boldsymbol{Q}(t)\), is the transpose of \(\boldsymbol{P}(t)\), we get from (Eq. (\ref{ModQConstraintSecondDiff})) that

\[\begin{equation} \label{Mod2QConstraintSecondDiff} \textsf{sym}\left(\boldsymbol{P}\right)(t)= -\dot{\boldsymbol{Q}}^{\textsf{T}}(t)\dot{\boldsymbol{Q}}(t) \end{equation}\]

where the operator \(\textsf{sym}: \textsf{Lin}\rightarrow \textsf{Sym}\), with \(\textsf{Sym}\) the set of all symmetric tensors, is defined as \(\textsf{sym}(\cdot):=\left((\cdot)+(\cdot)^{\textsf{T}}\right)/2\).

Now, we define

\[\begin{equation} \label{EulerEquation} \boldsymbol{\Omega}_0(t) = \boldsymbol{Q}^{\sf T}(t)\dot{\boldsymbol{Q}}(t) \end{equation}\]

It then follows from (Eq. (\ref{QConstraintDiff})) that

\[\begin{equation} \label{SkewSymOmegaZeroDef} \boldsymbol{\Omega}_{0}^{\textsf{T}}(t)= -\boldsymbol{\Omega}_{0}(t) \end{equation}\]

That is, \(\boldsymbol{\Omega}_{0}(t)\) is a skew-symmetric tensor; it is the Hodge dual of the angular velocity vector \(\boldsymbol{\omega}_0(t)\),

\[\begin{equation} \label{AngularVelocity} \boldsymbol{\omega}_0(t) = \star \boldsymbol{\Omega}_{0}(t) = -\frac{1}{2} \boldsymbol{\epsilon} \left[\boldsymbol{Q}^{\sf T}(t)\dot{\boldsymbol{Q}}(t) \right] \end{equation}\]

where \(\star\) indicates that \(\boldsymbol{\Omega}_0\) is the Hodge dual of the angular velocity vector \(\boldsymbol{\omega}_0\), and \(\boldsymbol{\epsilon}\) is the permutation (Levi-Civita) tensor. From (Eq. (\ref{EulerEquation})) we have,
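
For later computations, the Hodge-dual correspondence between \(\boldsymbol{\omega}_0\) and \(\boldsymbol{\Omega}_{0}\) can be sketched as a pair of helper functions; this is our own hedged sketch, and the names `hat` and `vee` are not from the text.

```python
# hat: axial vector -> skew-symmetric matrix, so that hat(w) @ x == np.cross(w, x);
# vee: the inverse map. This matches Omega_0 X = omega_0 x X used later in the text.
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

w = np.array([0.1, -0.2, 0.3])
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(hat(w) @ x, np.cross(w, x))
assert np.allclose(vee(hat(w)), w)
```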

\[\begin{equation} \label{ModEulerEquation} \dot{\boldsymbol{Q}}(t) = \boldsymbol{Q}(t)\boldsymbol{\Omega}_{0}(t) \end{equation}\]

Substitution of (Eq. (\ref{ModEulerEquation})) in (Eq. (\ref{Mod2QConstraintSecondDiff})) leads to the following:

\[\begin{equation} \label{Mod3QConstraintSecondDiff} \textsf{sym}\left(\boldsymbol{P}\right)(t) = -\boldsymbol{\Omega}_{0}^{\textsf{T}}(t)\boldsymbol{\Omega}_{0}(t). \end{equation}\]

Using (Eq. (\ref{SkewSymOmegaZeroDef})), (Eq. (\ref{Mod3QConstraintSecondDiff})) can be simplified as

\[\begin{equation} \label{SymP} \textsf{sym}\boldsymbol{P}(t)= \boldsymbol{\Omega}_{0}^{2}(t) \end{equation}\]

where \(\boldsymbol{\Omega}_{0}^{2}(t):= \boldsymbol{\Omega}_{0}(t) \boldsymbol{\Omega}_{0}(t)\). Noting from (Eq. (\ref{SymP})) that \(\textsf{sym}\left(\boldsymbol{P}\right)(t)\) is the square of a skew-symmetric tensor, and that square roots of symmetric tensors can be defined (see Appendix A), we can write (Eq. (\ref{SymP})) as

\[\begin{equation} \label{AngularVeclocityTensorAsSqRtSymP} \boldsymbol{\Omega}_{0}(t) =\left(\textsf{sym}\left(\boldsymbol{P}\right)\right)^{1/2}(t) \end{equation}\]

Differentiating \(\dot{\boldsymbol{Q}}(t)\) given by (Eq. (\ref{ModEulerEquation})) with respect to time leads to the following:

\[\begin{equation} \label{QdotWithAngularVelocityTensor} \ddot{\boldsymbol{Q}}(t) = \dot{\boldsymbol{Q}}(t)\boldsymbol{\Omega}_0(t) + \boldsymbol{Q}(t)\dot{\boldsymbol{\Omega}}_0(t) \end{equation}\]

Employing (Eq. (\ref{ModEulerEquation})) in (Eq. (\ref{QdotWithAngularVelocityTensor})) we have

\[\begin{equation} \label{QdoubledotWithAngularVelocityTensor} \ddot{\boldsymbol{Q}}(t) = \boldsymbol{Q}(t)\boldsymbol{\Omega}_0^2(t) + \boldsymbol{Q}(t)\dot{\boldsymbol{\Omega}}_0(t) =\boldsymbol{Q}(t)\left(\boldsymbol{\Omega}_0^2(t) + \dot{\boldsymbol{\Omega}}_0(t) \right) \end{equation}\]

Premultiplying both sides of (Eq. (\ref{QdoubledotWithAngularVelocityTensor})) by \(\boldsymbol{Q}^{\textsf{T}}(t)\) and using the definition of \(\boldsymbol{P}(t)\), we arrive at

\[\begin{equation} \label{ExpressionP} \boldsymbol{P}(t) = \boldsymbol{\Omega}_0^2(t) +\dot{\boldsymbol{\Omega}}_0(t) \end{equation}\]

Substituting (Eq. (\ref{SymP})) in (Eq. (\ref{ExpressionP})), we get

\[\begin{equation} \label{SkewP} \boldsymbol{P}(t)=\textsf{sym}(\boldsymbol{P}(t)) +\dot{\boldsymbol{\Omega}}_0(t) \end{equation}\]

From (Eq. (\ref{SkewP})) we arrive at the evolution equation for \(\boldsymbol{\Omega}_{0}\),

\[\begin{equation} \label{OmegaFromSkewP} \dot{\boldsymbol{\Omega}}_{0}(t) = \textsf{skew}(\boldsymbol{P}(t)) \end{equation}\]

where \(\textsf{skew}(\boldsymbol{P}(t))\) is the skew-symmetric part of \(\boldsymbol{P}(t)\).

Magnitude of acceleration at any point of the rigid body

Let the accelerometers be glued onto the material points \(\mathcal{P}^{0}\), \(\mathcal{P}^{1}\), \(\mathcal{P}^{2}\), and \(\mathcal{P}^{3}\). We denote the position vector of the material point \(\mathcal{P}^{\iota}\) in the reference configuration as \(\boldsymbol{X}^{\iota}\), where \(\iota=0,1,2,3\). It follows from (Eq. (\ref{ModStarAcceleration})) that the acceleration vector \(\boldsymbol{a}^{\star}\) at the material points \(\mathcal{P}^{\iota}\) is

\[\begin{equation} \label{StarAccelerationAtAccelemeterPosition} (\boldsymbol{a}^{\star})(\boldsymbol{X}^{\iota},t)= \boldsymbol{P}(t)\boldsymbol{X}^{\iota}+\boldsymbol{q}(t) \end{equation}\]

The acceleration \(\left(\boldsymbol{a}^{\star}\right)^{\iota}(t):=(\boldsymbol{a}^{\star})(\boldsymbol{X}^{\iota},t)\) is of course not the true acceleration of the particle \(\mathcal{P}^{\iota}\). It is related to the material point \(\mathcal{P}^{\iota}\)’s true acceleration by \(\left(\boldsymbol{a}^{\star}\right)^{\iota}(t) =\boldsymbol{Q}^{\sf T}(t)\boldsymbol{a}^{\iota}(t)\). However, what is quite remarkable is that the magnitude of \(\left(\boldsymbol{a}^{\star}\right)^{\iota}(t)\) equals the magnitude of \(\boldsymbol{a}^{\iota}(t)\), since \(\boldsymbol{Q}(t)\) is orthogonal and orthogonal tensors preserve norms. Thus, knowing \((\boldsymbol{a}^{\star})(\boldsymbol{X},t)\) allows us to compute the magnitude of any material point \(\mathcal{P}\)’s acceleration.

Let the measurements of the accelerometers at the points \(\mathcal{P}^{\iota}\) be \(\alpha^{\iota}_i(t)\). Let these accelerometers point in the directions characterized by the unit vectors \(\hat{\boldsymbol{N}}^{\iota}_{i}\), \(i=1,2,3\). These vectors are mutually perpendicular, i.e., \(\hat{\boldsymbol{N}}^{\iota}_{i}\cdot\hat{\boldsymbol{N}}^{\iota}_{j}=\delta_{ij}\). When \(t>0\), these accelerometers will point in the directions characterized by the unit vectors \(\hat{\boldsymbol{n}}^{\iota}_{i}(t)=\boldsymbol{Q}(t)\hat{\boldsymbol{N}}^{\iota}_{i}\). Note that at any time \(t\), the vectors \(\{\hat{\boldsymbol{n}}^{\iota}_{i}\}_{i=1,2,3}\) form an orthonormal set. The measurements give the components of the acceleration vector of the material point \(\mathcal{P}^{\iota}\) in the directions \(\hat{\boldsymbol{n}}^{\iota}_{i}\). That is,

\[\begin{equation} \label{AccelerationAtAccelemeterPosition} \boldsymbol{a}^{\iota}(t) = \alpha^{\iota}_i(t)\hat{\boldsymbol{n}}^{\iota}_{i}(t) \end{equation}\]

Using the relation \(\hat{\boldsymbol{n}}^{\iota}_{i}(t)=\boldsymbol{Q}(t)\hat{\boldsymbol{N}}^{\iota}_{i}\) in (Eq. (\ref{AccelerationAtAccelemeterPosition})) we have

\[\begin{equation} \label{ModAccelerationAtAccelemeterPosition} \boldsymbol{a}^{\iota}(t) =\boldsymbol{Q}(t)\left(\alpha^{\iota}_i(t)\hat{\boldsymbol{N}}^{\iota}_{i}\right) \end{equation}\]

(Eq. (\ref{ModAccelerationAtAccelemeterPosition})) implies that

\[\begin{equation} \boldsymbol{Q}^{\sf T}(t)\boldsymbol{a}^{\iota}(t)= \alpha^{\iota}_i(t)\hat{\boldsymbol{N}}^{\iota}_{i} \end{equation}\]

Since both \(\alpha^{\iota}_i\) and \(\hat{\boldsymbol{N}}^{\iota}_{i}\) are known for all \(t\), we know the vector \(\left(\boldsymbol{a}^{\star}\right)^{\iota}(t)=\boldsymbol{Q}^{\sf T}(t)\boldsymbol{a}^{\iota}(t)\) for any given \(t\).

In order to obtain \(\boldsymbol{P}(t)\) we define the vectors

\[\begin{equation} \Delta \boldsymbol{X}^{\mathscr{i}}=\boldsymbol{X}^{\mathscr{i}}-\boldsymbol{X}^{0}, \end{equation}\]

and

\[\begin{equation} \left(\Delta \boldsymbol{a}^{\star}\right)^{\mathscr{i}}(t)= \left(\boldsymbol{a}^{\star}\right)^{\mathscr{i}}(t)- \left(\boldsymbol{a}^{\star}\right)^{0}(t) \end{equation}\]

where \(\mathscr{i}=1,2,3\).

It follows from (Eq. (\ref{StarAccelerationAtAccelemeterPosition})) that

\[\begin{equation} \label{DiffStarAccelerationInTermsOfP} \left(\Delta \boldsymbol{a}^{\star}\right)^{\mathscr{i}}(t)=\boldsymbol{P}(t) \Delta\boldsymbol{X}^{\mathscr{i}} \end{equation}\]

Solving (Eq. (\ref{DiffStarAccelerationInTermsOfP})) for \(\mathscr{i}=1,2,3\) we get

\[\begin{equation} \label{Psol} \boldsymbol{P}(t)= \left[\left(\Delta \boldsymbol{a}^{\star}\right)^{\mathscr{i}}(t)\cdot \Delta\boldsymbol{X}^{\mathscr{j}}\right] \Delta\boldsymbol{Y}^{\mathscr{j}}\otimes \Delta\boldsymbol{Y}^{\mathscr{i}} = \left(\Delta \boldsymbol{a}^{\star}\right)^{\mathscr{i}}(t)\otimes \Delta\boldsymbol{Y}^{\mathscr{i}}, \end{equation}\]

where summation over repeated indices is implied and \(\Delta \boldsymbol{Y}^{\mathscr{i}}\), \(\mathscr{i}=1,2,3\), are the reciprocal vectors corresponding to \(\Delta \boldsymbol{X}^{\mathscr{i}}\), \(\mathscr{i}=1,2,3\):

\[\begin{align} \Delta\boldsymbol{Y}^{\mathscr{1}}&= \frac{\Delta\boldsymbol{X}^{\mathscr{2}} \times \Delta\boldsymbol{X}^{\mathscr{3}}}{\Delta\boldsymbol{X}^{\mathscr{1}} \cdot \left(\Delta\boldsymbol{X}^{\mathscr{2}}\times \Delta\boldsymbol{X}^{\mathscr{3}}\right)}\\ \Delta\boldsymbol{Y}^{\mathscr{2}}&= \frac{\Delta\boldsymbol{X}^{\mathscr{3}} \times \Delta\boldsymbol{X}^{\mathscr{1}}}{\Delta\boldsymbol{X}^{\mathscr{1}} \cdot \left(\Delta\boldsymbol{X}^{\mathscr{2}}\times \Delta\boldsymbol{X}^{\mathscr{3}}\right)}\\ \Delta\boldsymbol{Y}^{\mathscr{3}}&= \frac{\Delta\boldsymbol{X}^{\mathscr{1}} \times \Delta\boldsymbol{X}^{\mathscr{2}}}{\Delta\boldsymbol{X}^{\mathscr{1}} \cdot \left(\Delta\boldsymbol{X}^{\mathscr{2}}\times \Delta\boldsymbol{X}^{\mathscr{3}}\right)} \end{align}\]

After computing \(\boldsymbol{P}(t)\) from (Eq. (\ref{Psol})), the vector \(\boldsymbol{q}(t)\) can be computed from (Eq. (\ref{StarAccelerationAtAccelemeterPosition})) with, e.g., \(\iota=0\). Substitution of \(\boldsymbol{P}(t)\) and \(\boldsymbol{q}(t)\) in (Eq. (\ref{ModStarAcceleration})) allows us to compute \(\boldsymbol{a}^{\star}(t)\) at any point of the rigid body. As the magnitude of \(\boldsymbol{a}^{\star}(t)\) equals the magnitude of \(\boldsymbol{a}(t)\), we can compute the magnitude of acceleration at any point of the rigid body; a minimal numerical sketch of this procedure is given below.
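
The sketch below assumes the four reference positions \(\boldsymbol{X}^{\iota}\) and the rotated accelerations \((\boldsymbol{a}^{\star})^{\iota}\) are available as NumPy arrays; all function and variable names are our own.

```python
# Recover P(t) and q(t) from (a*)^0..(a*)^3 at four non-coplanar reference points,
# then evaluate the acceleration magnitude at an arbitrary point X.
import numpy as np

def P_and_q(Xs, a_stars):
    """Xs: (4, 3) reference positions X^0..X^3; a_stars: (4, 3) vectors (a*)^0..(a*)^3."""
    dX = Xs[1:] - Xs[0]            # rows: Delta X^i, i = 1, 2, 3
    dA = a_stars[1:] - a_stars[0]  # rows: (Delta a*)^i
    dY = np.linalg.inv(dX.T)       # rows: reciprocal vectors, dY[i] . dX[j] == delta_ij
    P = sum(np.outer(dA[i], dY[i]) for i in range(3))  # P = (Delta a*)^i (x) Delta Y^i
    q = a_stars[0] - P @ Xs[0]     # from (a*)^0 = P X^0 + q
    return P, q

def accel_magnitude(P, q, X):
    # |a(X, t)| = |a*(X, t)| because Q(t) is orthogonal.
    return np.linalg.norm(P @ X + q)
```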

Proposed Algorithms to Compute Angular Velocity and Orientation Tensor

Instead of integrating \(\boldsymbol{\Omega}_0\) from the complicated nonlinear differential equation (Eq. (\ref{ExpressionP})), we propose two alternative algorithms which are computationally efficient and theoretically more appealing.

Algorithm I

It is remarkable that one can compute \(\boldsymbol{\Omega}_0\) from (Eq. (\ref{AngularVeclocityTensorAsSqRtSymP})) without invoking any time integration algorithm. It is also very interesting to observe that \(\boldsymbol{\Omega}_0\) is a skew-symmetric tensor and, at the same time, a square root of a symmetric tensor. So, we encounter a symmetric tensor in mechanics which is not positive definite! As it is known a priori that the symmetric tensor \(\textsf{sym}\boldsymbol{P}(t)\) is the square of the skew-symmetric tensor \(\boldsymbol{\Omega}_{0}(t)\), its negative eigenvalues must have even multiplicities. Thus, the only possible eigenvalues of \(\textsf{sym}\boldsymbol{P}(t)\) are \(0\), \(\lambda\) and \(\lambda\), where \(\lambda = -\mu^2\) for some \(\mu\in\mathbb{R}\). From the discussion given in Appendix A and Appendix B, it is clear that \(\boldsymbol{\Omega}_{0}(t)\) is either \(\boldsymbol{V} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & \mu \\ 0 & -\mu & 0 \end{array} \right) \boldsymbol{V}^{T}\) or \(\boldsymbol{V} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -\mu \\ 0 & \mu & 0 \end{array} \right) \boldsymbol{V}^{T}\), where \(\boldsymbol{V}\) is an orthogonal matrix whose columns are eigenvectors corresponding to the eigenvalues \(0\), \(\lambda\) and \(\lambda\) of the symmetric tensor \(\textsf{sym}\boldsymbol{P}(t)\). This algorithm is limited by the theoretical fact that there always exist two solutions for the angular velocity, which differ only in sign and cannot be resolved any further. Hence, using this algorithm we cannot obtain a unique angular velocity. A sketch of this computation is given below.
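
A hedged sketch of Algorithm I follows; the names are our own, and it relies on NumPy’s `eigh` returning eigenvalues in ascending order, so that the two equal negative eigenvalues of \(\textsf{sym}\boldsymbol{P}(t)\) come first.

```python
# Recover Omega_0, up to the unavoidable sign ambiguity, as a skew-symmetric
# square root of sym(P); see Appendices A and B for the underlying result.
import numpy as np

def omega_candidates(P):
    S = 0.5 * (P + P.T)              # sym(P) = Omega_0^2
    lam, V = np.linalg.eigh(S)       # ascending: lam ~ [-mu^2, -mu^2, 0]
    mu = np.sqrt(max(-lam[0], 0.0))
    K = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, mu],
                  [0.0, -mu, 0.0]])
    Vp = V[:, [2, 0, 1]]             # reorder so the zero eigenvalue matches K's pattern
    Omega = Vp @ K @ Vp.T            # skew-symmetric, Omega @ Omega == S
    return Omega, -Omega             # the sign cannot be resolved from sym(P) alone
```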

Algorithm II

Alternatively, one can compute \(\boldsymbol{\Omega}_0\) by solving the relatively simple (Eq. (\ref{OmegaFromSkewP})) rather than the more complicated (Eq. (\ref{ExpressionP})). Integrating both sides of (Eq. (\ref{OmegaFromSkewP})) results in

\[\begin{equation} \boldsymbol{\Omega}_{0}(t)=\int_{z=0}^{t}\textsf{skew}(\boldsymbol{P})(z)dz + \boldsymbol{\Omega}_{0}(0) \end{equation}\]

After computing \(\boldsymbol{\Omega}_{0}(t)\) using one of the algorithms discussed above, one can compute the orientation tensor \(\boldsymbol{Q}(t)\) by integrating (Eq. (\ref{ModEulerEquation})); a sketch of this second algorithm follows.
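
A sketch of Algorithm II under our own assumptions about the inputs: \(\boldsymbol{P}\) is supplied as a callable of time, a simple explicit Euler update advances \(\boldsymbol{\Omega}_0\), and the exponential map keeps \(\boldsymbol{Q}\) exactly orthogonal.

```python
# Integrate dOmega_0/dt = skew(P(t)) for Omega_0(t), then Qdot = Q Omega_0 for Q(t).
import numpy as np
from scipy.linalg import expm

def integrate_orientation(P_of_t, Omega0, Q0, dt, n_steps):
    Omega, Q = Omega0.copy(), Q0.copy()
    for n in range(n_steps):
        P = P_of_t(n * dt)
        Omega = Omega + dt * 0.5 * (P - P.T)  # explicit Euler on skew(P)
        Q = Q @ expm(dt * Omega)              # exponential map keeps Q in SO(3)
    return Q, Omega
```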

Generating acceleration data and validation of the proposed algorithms

To validate the proposed algorithms, we generate acceleration data at four non-coplanar points using a geometrically consistent time integration algorithm, so that the data are consistent with the actual dynamics of a rigid body. Employing (Eq. (\ref{QdoubledotWithAngularVelocityTensor})) and using the relation \(\dot{\boldsymbol{\Omega}}_0(t) \boldsymbol{X} = \dot{\boldsymbol{\omega}}_0(t) \times \boldsymbol{X}\), we obtain an expression for the acceleration at any point of a rigid body in terms of the angular velocity tensor:

\[\begin{equation} \label{accelerationAfterSubstitutingEulerEq} \boldsymbol{a}(\boldsymbol{X},t) = \boldsymbol{Q}(t)\boldsymbol{\Omega}_0^2(t) \boldsymbol{X} + \boldsymbol{Q}(t)\left(\dot{\boldsymbol{\omega}}_0(t) \times \boldsymbol{X}\right) + \ddot{\boldsymbol{c}}(t) \end{equation}\]

From (Eq. (\ref{accelerationAfterSubstitutingEulerEq})), it can be observed that, given the acceleration \(\boldsymbol{a}\) of the centre of mass, the angular velocity tensor \(\boldsymbol{\Omega}_0(t)\), the angular acceleration vector \(\dot{\boldsymbol{\omega}}_0(t)\) and the orientation tensor \(\boldsymbol{Q}(t)\) of a rigid body, we can solve for \(\ddot{\boldsymbol{c}}(t)\):

\[\begin{equation} \label{rateOfRigidTranslation} \ddot{\boldsymbol{c}}(t) = \boldsymbol{a}(\boldsymbol{X}_{CM},t) - \boldsymbol{Q}(t)\boldsymbol{\Omega}_0^2(t) \boldsymbol{X}_{CM} -\boldsymbol{Q}(t)\left(\dot{\boldsymbol{\omega}}_0(t) \times \boldsymbol{X}_{CM}\right) \end{equation}\]

Updating velocity of centre of mass and angular velocity using the explicit Verlet algorithm

As the most frequently used ordinary differential equation (ODE) time integrators, for example the Runge–Kutta and Adams–Moulton schemes, do not respect the dynamical characteristics of rigid body rotational motion, the necessity of developing a geometric integration scheme has been realized. One of the most recently developed Lie-group geometric methods is the explicit Lie–Störmer–Verlet integration scheme, which exactly preserves the spatial angular momentum and approximately conserves the energy of a rigid body. We solve for the velocity \(\boldsymbol{v}\) and acceleration \(\boldsymbol{a}\) of the centre of mass of the rigid body using the Verlet integration scheme as given below.

\[\begin{equation} \label{AccelerationNthStep} \left(\boldsymbol{a}_{CM}\right)_n = \frac{\boldsymbol{F}_n}{m} \end{equation}\] \[\begin{equation} \label{VelocityHalfStep} \left(\boldsymbol{v}_{CM}\right)_{n+\frac{1}{2}} = \left(\boldsymbol{v}_{CM}\right)_n + \frac{\Delta t}{2} \left(\boldsymbol{a}_{CM}\right)_{n} \end{equation}\]

where \(\boldsymbol{v}_{CM} = \boldsymbol{v}(\boldsymbol{x}_{CM},t)\) and \(\boldsymbol{a}_{CM} = \boldsymbol{a}(\boldsymbol{x}_{CM},t)\) are the velocity and acceleration of the centre of mass of the rigid body, and \(\boldsymbol{F}_n\) is the external force acting on the rigid body. The solutions are updated using the following equations:

\[\begin{equation} \label{UpdatePositionVector} \left(\boldsymbol{x}_{CM}\right)_{n+1} = \left(\boldsymbol{x}_{CM}\right)_n + \Delta t \left(\boldsymbol{v}_{CM}\right)_{n+\frac{1}{2}} \end{equation}\] \[\begin{equation} \label{UpdateAcceleration} \left(\boldsymbol{a}_{CM}\right)_{n+1} = \frac{\boldsymbol{F}_{n+1}}{m} \end{equation}\] \[\begin{equation} \label{UpdateVelocity} \left(\boldsymbol{v}_{CM}\right)_{n+1} = \left(\boldsymbol{v}_{CM}\right)_{n+\frac{1}{2}} + \frac{\Delta t}{2} \frac{\boldsymbol{F}_{n+1}}{m} \end{equation}\]
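
These translational updates transcribe directly into code; the following is an illustrative sketch with names of our own choosing.

```python
# One velocity-Verlet step for the centre of mass, following the equations above.
import numpy as np

def verlet_step(x, v, F_n, F_np1, m, dt):
    a_n = F_n / m                          # (a_CM)_n = F_n / m
    v_half = v + 0.5 * dt * a_n            # half-step velocity
    x_new = x + dt * v_half                # position update
    v_new = v_half + 0.5 * dt * F_np1 / m  # full-step velocity
    return x_new, v_new
```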

The explicit Lie–Störmer–Verlet integration scheme provides the updates for the angular velocity and the orientation tensor.

\[\begin{equation} \label{AngularVelocityHalfStep} \left(\boldsymbol{\omega}_0\right)_{n+\frac{1}{2}} = {\boldsymbol{J}_0}^{-1}\left(e^{-\frac{\Delta t}{2} \left(\boldsymbol{\Omega}_0\right)_n}\left(\boldsymbol{J}_0\left(\boldsymbol{\omega}_0\right)_n + \frac{\Delta t}{2} \boldsymbol{T}_n\right) \right) \end{equation}\]

where \(\boldsymbol{\omega}_0\) and \(\boldsymbol{J}_0\) are the angular velocity and the inertia tensor of the rigid body with respect to the centre of mass, expressed in the frame attached to the body, and \(\boldsymbol{T}_n\) is the external torque acting on the body expressed in the same body-attached coordinate system.

\[\begin{equation} \label{Update rotation matrix} \boldsymbol{Q}_{n+1} = \boldsymbol{Q}_{n} e^{\Delta t \left(\boldsymbol{\Omega}_0\right)_{n+\frac{1}{2}}} \end{equation}\] \[\begin{equation} \label{Update angular velocity} \left(\boldsymbol{\omega}_0\right)_{n+1}= {\boldsymbol{J}_0}^{-1} e^{-\Delta t \left(\boldsymbol{\Omega}_0\right)_{n+\frac{1}{2}}}\left(\boldsymbol{J}_0\left(\boldsymbol{\omega}_0\right)_n + \frac{\Delta t}{2} \boldsymbol{T}_n \right) + \frac{\Delta t}{2} {\boldsymbol{J}_0}^{-1} \boldsymbol{T}_{n+1} \end{equation}\] \[\begin{equation} \label{UpdateAngularVelocityTensor} \left(\boldsymbol{\Omega}_0\right)_{n+1} = \star \left(\boldsymbol{\omega}_0\right)_{n+1} \end{equation}\]
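
A sketch of one step of these rotational updates; all names are illustrative, `expm` is SciPy’s matrix exponential, and `hat` maps an axial vector to its skew-symmetric (Hodge-dual) matrix as sketched earlier.

```python
# One explicit Lie-Stormer-Verlet rotation step, following the equations above.
# J0: body-frame inertia tensor; T_n, T_np1: body-frame torques at steps n, n+1.
import numpy as np
from scipy.linalg import expm

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def lie_verlet_rotation_step(Q, omega, J0, T_n, T_np1, dt):
    # Half-step angular velocity.
    omega_half = np.linalg.solve(
        J0, expm(-0.5 * dt * hat(omega)) @ (J0 @ omega + 0.5 * dt * T_n))
    Omega_half = hat(omega_half)
    # Orientation update; the matrix exponential keeps Q in SO(3).
    Q_new = Q @ expm(dt * Omega_half)
    # Full-step angular velocity.
    omega_new = (np.linalg.solve(
        J0, expm(-dt * Omega_half) @ (J0 @ omega + 0.5 * dt * T_n))
        + 0.5 * dt * np.linalg.solve(J0, T_np1))
    return Q_new, omega_new
```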

Acceleration data at four non-coplanar points

Having updated \(\boldsymbol{\omega}_0\) and \(\boldsymbol{\Omega}_0\), we can compute the angular acceleration of the rigid body as

\[\begin{equation} \label{UpdateAngularAccelerationVector} \left(\dot{\boldsymbol{\omega}}_0\right)_{n+1}= {\boldsymbol{J}_0}^{-1} \left(\boldsymbol{T}_{n+1} - \left(\boldsymbol{\Omega}_0\right)_{n+1} \boldsymbol{J}_0 \left(\boldsymbol{\omega}_0\right)_{n+1} \right) \end{equation}\]

Employing (Eq. (\ref{UpdateAcceleration})), (Eq. (\ref{Update rotation matrix})), (Eq. (\ref{UpdateAngularVelocityTensor})) and (Eq. (\ref{UpdateAngularAccelerationVector})) in (Eq. (\ref{rateOfRigidTranslation})), we can compute \(\ddot{\boldsymbol{c}}_{n+1}\). Then we can generate acceleration data at four non-coplanar points using the following equation; a sketch of this step follows the equation:

\[\begin{equation} \label{accelerationDataFourPoints} \left(\boldsymbol{a}^l\right)_{n+1} = \boldsymbol{Q}_{n+1}\left(\boldsymbol{\Omega}_0\right)^2_{n+1} \boldsymbol{X}^l + \boldsymbol{Q}_{n+1}\left(\left(\dot{\boldsymbol{\omega}}_0\right)_{n+1} \times \boldsymbol{X}^l\right) + \ddot{\boldsymbol{c}}_{n+1} \end{equation}\]
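
A sketch of this data-generation step, with assumed array shapes and names of our own choosing:

```python
# Compute c_ddot from the centre-of-mass data and then the synthetic accelerations
# a^l at the four reference points Xs (shape (4, 3)), per the two equations above.
import numpy as np

def point_accelerations(Q, Omega, omega_dot, a_cm, X_cm, Xs):
    c_ddot = a_cm - Q @ (Omega @ (Omega @ X_cm)) - Q @ np.cross(omega_dot, X_cm)
    return np.array([Q @ (Omega @ (Omega @ X)) + Q @ np.cross(omega_dot, X) + c_ddot
                     for X in Xs])
```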

Appendix A : Square-root of a general symmetric matrix with no information about its definiteness

Existence

Let \(\boldsymbol{A}\) be a real symmetric \(n\times n\) matrix. Then \(\boldsymbol{A}=\boldsymbol{V}\Lambda\boldsymbol{V}^{T}\), where \(\boldsymbol{V}\in \mathbb{R}^{n\times n}\) is such that \(\boldsymbol{V}^{T} \boldsymbol{V} = \boldsymbol{V}\boldsymbol{V}^{T}= \boldsymbol{I}\) and \(\Lambda = diag\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)\) with \(\lambda_i\in \mathbb{R} \;\; \forall i = 1,2,\ldots,n\).
Claim: A square root of \(\boldsymbol{A}\) exists if and only if \(\Lambda\) has a square root.

Proof: Let \(\boldsymbol{C}\) be a square root of \(\Lambda\), i.e., \(\boldsymbol{C}\) is such that \(\boldsymbol{C}^2 = \Lambda\).
Then the square root of \(\boldsymbol{A}\) will be \(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\), which can be easily checked as follows.
Define \(\boldsymbol{B} := \boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) and note that \(\boldsymbol{B}^2 = \boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}=\boldsymbol{V}\boldsymbol{C}^2\boldsymbol{V}^{T} = \boldsymbol{V}\Lambda\boldsymbol{V}^{T}=\boldsymbol{A}\).
Thus, if \(\boldsymbol{C}\) is a square root of \(\Lambda\) then \(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) is a square root of \(\boldsymbol{A}\).
Now, let \(\boldsymbol{B}\) be a square root of \(\boldsymbol{A}\), i.e., \(\boldsymbol{B}^2 =\boldsymbol{A}\).
Then, \(\boldsymbol{V}^{T}\boldsymbol{B}\boldsymbol{V} \boldsymbol{V}^{T}\boldsymbol{B}\boldsymbol{V} = \boldsymbol{V}^{T}\boldsymbol{B}^2\boldsymbol{V}= \boldsymbol{V}^{T}\boldsymbol{A}\boldsymbol{V}= \boldsymbol{V}^{T}\boldsymbol{V}\Lambda\boldsymbol{V}^{T}\boldsymbol{V}=\Lambda\)
So, \(\boldsymbol{V}^{T}\boldsymbol{B}\boldsymbol{V}\) is a square root of \(\Lambda\).
Thus, if \(\boldsymbol{B}\) is a square root of \(\boldsymbol{A}\) then \(\boldsymbol{V}^{T}\boldsymbol{B}\boldsymbol{V}\) is a square root of \(\Lambda\).
Therefore, it is enough to find the square root of \(\Lambda\) which is the real diagonal matrix consisting of the eigenvalues of \(\boldsymbol{A}\).

Remarks:

  1. A real square root of \(\boldsymbol{A}\) exists if and only if \(\Lambda\) has a real square root.
  2. There always exists an \(n\times n\) complex-valued square root matrix \(\boldsymbol{C}\) of a real diagonal matrix \(\Lambda\). Hence, a complex-valued square root matrix \(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) of \(\boldsymbol{A}\) always exists.

Uniqueness/Non-uniqueness

As shown in the previous section, the nature of the square roots of a real symmetric matrix \(\boldsymbol{A}\) is completely determined by the nature of the square roots of the real diagonal matrix \(\Lambda\) consisting of the eigenvalues of \(\boldsymbol{A}\). The square roots of a real diagonal matrix are not unique unless additional conditions on the square root matrices are imposed. For example, a real positive definite diagonal matrix has a unique real positive definite square root matrix, but it has more than one real square root matrix if positive-definiteness is not imposed on the square root. Similar results can be derived for a general symmetric tensor, as discussed below.

Claim: A real symmetric positive definite matrix has a unique real symmetric positive definite square root matrix.
Proof: Before showing the uniqueness, let us first show that there exists a symmetric positive definite square root matrix \(\boldsymbol{B}\) of a symmetric positive definite matrix \(\boldsymbol{A}\).
Since \(\boldsymbol{A}\) is real symmetric positive definite, its eigenvalues are real positive numbers.
Let \(\boldsymbol{A}=\boldsymbol{V}\Lambda\boldsymbol{V}^{T}\), where \(\boldsymbol{V}\in \mathbb{R}^{n\times n}\) is such that \(\boldsymbol{V}^{T} \boldsymbol{V} = \boldsymbol{V}\boldsymbol{V}^{T}= \boldsymbol{I}\) and \(\Lambda = diag\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)\) with \(\lambda_i >0\).
Let \(\boldsymbol{B} = \boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) with \(\boldsymbol{C}=diag\left(\sqrt{\lambda_1},\sqrt{\lambda_2},\ldots,\sqrt{\lambda_n}\right)\), i.e., \(\boldsymbol{C}^2=\Lambda\).
Then \(\boldsymbol{B}^{T}=\left(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\right)^{T}=\left(\boldsymbol{V}^{T}\right)^{T} \boldsymbol{C}^{T}\boldsymbol{V}^{T}=\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}=\boldsymbol{B}\). So \(\boldsymbol{B}\) is symmetric.
It can be easily checked that \(\boldsymbol{B} = \boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) is a square root matrix of \(\boldsymbol{A}\) (as \(\boldsymbol{B}^2= \boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}=\boldsymbol{V}\boldsymbol{C}^2\boldsymbol{V}^{T}=\boldsymbol{A}\)).
The eigenvalues of \(\boldsymbol{B}\) are positive as \(\lambda_i >0\). Hence, \(\boldsymbol{B}\) is a real symmetric positive definite square root of the real symmetric positive definite matrix \(\boldsymbol{A}\).

Now, let us show the uniqueness.
Let \(\boldsymbol{n}\) be an eigenvector associated with an eigenvalue \(\lambda\) of the matrix \(\boldsymbol{A}\).
Then \(\left(\boldsymbol{A}-\lambda \boldsymbol{I} \right)\boldsymbol{n}=\boldsymbol{0}\)
\(\implies \left(\boldsymbol{B}^2-\lambda \boldsymbol{I} \right)\boldsymbol{n}=\boldsymbol{0}\)
\(\implies \left(\boldsymbol{B}+\sqrt\lambda \boldsymbol{I}\right)\left(\boldsymbol{B}-\sqrt\lambda \boldsymbol{I}\right)\boldsymbol{n}=\boldsymbol{0}\)
\(\implies \left(\boldsymbol{B}+\sqrt\lambda \boldsymbol{I}\right)\tilde{\boldsymbol{n}}=\boldsymbol{0}\), where \(\tilde{\boldsymbol{n}} = \left(\boldsymbol{B}-\sqrt\lambda \boldsymbol{I} \right)\boldsymbol{n}\)
\(\implies \tilde{\boldsymbol{n}}=\boldsymbol{0}\), as \(\boldsymbol{B}\) is positive definite and hence \(-\sqrt\lambda\) cannot be an eigenvalue of \(\boldsymbol{B}\).
Therefore, \(\left(\boldsymbol{B}-\sqrt\lambda \boldsymbol{I} \right)\boldsymbol{n} =\boldsymbol{0}\), i.e., \(\boldsymbol{n}\) is also an eigenvector of \(\boldsymbol{B}\), with associated eigenvalue \(\sqrt\lambda\).
Hence, if \(\boldsymbol{B}^2=\boldsymbol{A}\) and \(\boldsymbol{B}\) is symmetric positive definite, then \(\boldsymbol{B}\) must have the form \(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) with \(\boldsymbol{C}=diag\left(\sqrt{\lambda_1},\sqrt{\lambda_2},\ldots,\sqrt{\lambda_n}\right)\).
This establishes the uniqueness.

Remark: A real symmetric positive semi-definite matrix has a unique real symmetric positive semi-definite square root matrix.

A symmetric positive definite tensor has a unique symmetric positive definite square root. Can we say something equivalent regarding the uniqueness of the skew-symmetric square root?

Response: Note that a symmetric positive definite tensor has more than one symmetric square root; in particular, it can have symmetric square roots that are not positive definite. For example, \(\boldsymbol{B}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right)\) is a non-positive-definite symmetric square root of the symmetric positive definite tensor \(\boldsymbol{A}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)\). It can be easily checked that \(\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)\), \(\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right)\) and \(\left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)\) are all symmetric square roots of \(\boldsymbol{A}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)\). Thus, to obtain a unique symmetric square root of a symmetric positive definite tensor, an additional condition such as positive definiteness of the square root matrix has to be imposed.

In the previous section, it has been shown that a symmetric positive definite tensor has a unique symmetric positive definite square root.

However, we cannot say something equivalent for the skew-symmetric square root of a negative definite symmetric tensor: a negative definite symmetric tensor has more than one skew-symmetric square root. For example, both \(\left(\begin{array}{cc} 0 & 1\\ -1 & 0 \end{array}\right)\) and \(\left(\begin{array}{cc} 0 & -1\\ 1 & 0 \end{array}\right)\) are square roots of the negative definite matrix \(\left(\begin{array}{cc} -1 & 0\\ 0 & -1 \end{array}\right)\).

Is negative semi-definiteness of a symmetric tensor a necessary condition for it to have a real skew-symmetric square root?

Response: Yes, a symmetric tensor must be negative semi-definite in order to have a real skew-symmetric square root.
Let \(\boldsymbol{A}\) be a given symmetric \(n\times n\) real matrix.
Let \(\boldsymbol{B}\) be a square root of \(\boldsymbol{A}\), that is, \(\boldsymbol{B}^2=\boldsymbol{A}\).
If \(\boldsymbol{B}\) is skew-symmetric and real, then
\(\boldsymbol{B}^T=-\boldsymbol{B}\).
\(\implies \boldsymbol{B}^T \boldsymbol{B}=- \boldsymbol{B}^2\)
\(\implies \boldsymbol{B}^T \boldsymbol{B}=- \boldsymbol{A}\)
Take any \(\boldsymbol{x}\in \mathbb{R}^n\), then \(\boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x} = - \boldsymbol{x}^T \boldsymbol{B}^T \boldsymbol{B}\boldsymbol{x}\)
\(\implies \boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x}=- \left(\boldsymbol{B}\boldsymbol{x}\right)^T\boldsymbol{B}\boldsymbol{x}\)
\(\implies \boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x} =- \|\boldsymbol{B}\boldsymbol{x}\|^2\leq 0\)
Hence, \(\boldsymbol{A}\) must be negative semi-definite if \(\boldsymbol{A}\) has a real skew-symmetric square root.

Remark: A positive definite symmetric matrix can not have a real skew-symmetric square root.

Is it sufficient for a real symmetric tensor to have a real square root that its negative eigenvalues have even multiplicities?

Response: Yes, if the negative eigenvalues of a real symmetric tensor have even multiplicities, then a real square root exists.

The nature of the square roots of a real symmetric matrix \(\boldsymbol{A}\) is completely determined by the nature of the square roots of the real diagonal matrix \(\Lambda\) consisting of the eigenvalues of \(\boldsymbol{A}\).
It is clear that if the square root matrix \(\boldsymbol{C}\) of the real diagonal matrix \(\Lambda\) is real, then the square root matrix \(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) of the real symmetric matrix \(\boldsymbol{A}\) is real.
Let the real symmetric matrix \(\boldsymbol{A}\) have both non-negative and negative eigenvalues, with the negative eigenvalues having even multiplicities.
If we can find a real square root of each \(2\times 2\) diagonal block of the diagonal matrix \(\Lambda\) corresponding to a repeated negative eigenvalue, then \(\Lambda\) has a real square root.
It is shown in Appendix B that each \(2\times 2\) diagonal block matrix \(\boldsymbol{D}=\left(\begin{array}{cc} -\lambda & 0 \\ 0 & -\lambda \end{array}\right)\), \(\lambda\in\mathbb{R}^{+}\), has a real square root \(\left(\begin{array}{cc} 0 & \sqrt{\lambda}\\ -\sqrt{\lambda} & 0 \end{array}\right)\).
Thus, for a general diagonal matrix \(\Lambda = diag({\lambda}_1,\ldots,{\lambda}_k,-{\lambda}_{k+1},-{\lambda}_{k+1},\ldots,-{\lambda}_{k+m},-{\lambda}_{k+m},0,\ldots,0)\) with \({\lambda}_i\geq 0\), we can find a real block-diagonal matrix \(\boldsymbol{C}\) consisting of the diagonal block \(diag(\sqrt{{\lambda}_1},\ldots,\sqrt{{\lambda}_k})\), the \(2\times 2\) blocks \(\left(\begin{array}{cc} 0 & \sqrt{{\lambda}_{k+1}}\\ -\sqrt{{\lambda}_{k+1}} & 0\end{array}\right),\ldots, \left(\begin{array}{cc} 0 & \sqrt{{\lambda}_{k+m}}\\ -\sqrt{{\lambda}_{k+m}} & 0 \end{array}\right)\), and a zero block, such that \(\boldsymbol{C}^2=\Lambda\).
Hence, \(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\) is real and satisfies \(\left(\boldsymbol{V}\boldsymbol{C}\boldsymbol{V}^{T}\right)^2=\boldsymbol{A}\).

Remark: If the multiplicity of a negative eigenvalue is odd, there is no real square root as the trivial example \(\boldsymbol{A}=(-1)\) shows.
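
The construction described above can be sketched numerically as follows; this is our own hedged sketch, and it assumes the (possibly repeated) negative eigenvalues appear in adjacent equal pairs after `eigh`’s ascending ordering.

```python
# Build a real square root of a real symmetric matrix whose negative eigenvalues
# occur with even multiplicities, using 2x2 skew blocks for the negative pairs.
import numpy as np

def real_sqrt_symmetric(A, tol=1e-10):
    lam, V = np.linalg.eigh(A)               # eigenvalues ascending
    C = np.zeros_like(A)
    i, n = 0, len(lam)
    while i < n:
        if lam[i] >= -tol:                   # non-negative eigenvalue: scalar sqrt
            C[i, i] = np.sqrt(max(lam[i], 0.0))
            i += 1
        else:                                # negative eigenvalue: needs an equal partner
            assert i + 1 < n and abs(lam[i] - lam[i + 1]) < tol, "odd multiplicity"
            mu = np.sqrt(-lam[i])
            C[i, i + 1], C[i + 1, i] = mu, -mu  # [[0, mu], [-mu, 0]] squares to -mu^2 I
            i += 2
    return V @ C @ V.T                       # B with B @ B == A (up to round-off)

# Example: A built as the square of a skew-symmetric matrix.
W = np.array([[0.0, 0.3, -0.1], [-0.3, 0.0, 0.2], [0.1, -0.2, 0.0]])
A = W @ W
B = real_sqrt_symmetric(A)
print(np.allclose(B @ B, A))  # True
```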

Is there a way to know a priori when a tensor’s negative eigenvalues will have even multiplicities?

Response: If it is known a priori that a tensor is a square of a skew-symmetric tensor, then the tensor’s negative eigenvalues will have even multiplicities.

Let, \(\boldsymbol{A}=\boldsymbol{B}^2\) and \(\boldsymbol{B}\) is real skew-symmetric matrix are known a priori.
The eigenvalues of a real skew-symmetric matrix \(\boldsymbol{B}\) consist of zeros (possibly) and pairs of non-zero conjugate pure imaginary numbers.
The non-zero eigenvalues of \(\boldsymbol{B}^2\), being the squares of the non-zero eigenvalues of \(\boldsymbol{B}\), consist of pairs of equal negative real numbers.
Therefore, the distinct non-zero eigenvalues of \(\boldsymbol{A}\) must have even multiplicities.

Appendix B : Square root matrix of a real diagonal matrix

Let \(\boldsymbol{D}\) be a real \(n\times n\) diagonal matrix and let \(\boldsymbol{C}\) be any square root matrix of \(\boldsymbol{D}\).

If the eigenvalues of \(\boldsymbol{D}\) are distinct, its square root matrix \(\boldsymbol{C}\) must be diagonal, with diagonal elements equal to square roots of the corresponding diagonal elements of \(\boldsymbol{D}\).
As we are also interested in finding a real skew-symmetric square root of the real diagonal matrix \(\boldsymbol{D}\), let us also include the possibility of two eigenvalues of \(\boldsymbol{D}\) being the same, and check whether we can arrive at a real skew-symmetric matrix \(\boldsymbol{C}\) which is a square root of the diagonal matrix \(\boldsymbol{D}\).
For simplicity, let us consider the example of a \(2\times 2\) real diagonal matrix \(\boldsymbol{D}=\left(\begin{array}{cc} {\lambda}_1 & 0\\ 0 & {\lambda}_2 \end{array}\right)\), where \({\lambda}_1, {\lambda}_2\in \mathbb{R}\).
Let \(\boldsymbol{C}=\left(\begin{array}{cc} a & c\\ b & d \end{array}\right)\) be a square root of \(\boldsymbol{D}\), i.e., \(\boldsymbol{C}^2:=\boldsymbol{C}\boldsymbol{C}= \boldsymbol{D}\), where \(a,b,c,d\in \mathbb{C}\).
Now, \(\left(\begin{array}{cc} a & c\\ b & d \end{array}\right)\left(\begin{array}{cc} a & c\\ b & d \end{array}\right)= \left(\begin{array}{cc} {\lambda}_1 & 0\\ 0 & {\lambda}_2 \end{array}\right)\) which implies

\[\begin{equation} \label{firstTerm2cross2} a^2+ bc = {\lambda}_1 \end{equation}\] \[\begin{equation} \label{secondTerm2cross2} \left(a+d\right)c = 0 \end{equation}\] \[\begin{equation} \label{thirdTerm2cross2} \left(a+d\right)b = 0 \end{equation}\] \[\begin{equation} \label{lastTerm2cross2} bc+d^2 = {\lambda}_2 \end{equation}\]

Subtracting Eq. (\ref{lastTerm2cross2}) from Eq. (\ref{firstTerm2cross2}) we have,

\[\begin{equation} \label{FirstMinusLastTerm2cross2} \left(a+d\right)\left(a-d\right) = {\lambda}_1 - {\lambda}_2 \end{equation}\]
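
Before enumerating the cases, the four scalar equations can be checked symbolically; this short SymPy sketch (our own) reproduces them as the entries of \(\boldsymbol{C}^2-\boldsymbol{D}\).

```python
# Symbolic check of the 2x2 system C^2 = D.
import sympy as sp

a, b, c, d, l1, l2 = sp.symbols('a b c d lambda_1 lambda_2')
C = sp.Matrix([[a, c], [b, d]])
D = sp.diag(l1, l2)
residual = sp.expand(C * C - D)
# Entries: a**2 + b*c - lambda_1, a*c + c*d, a*b + b*d, b*c + d**2 - lambda_2
print(residual)
```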

Case I: Distinct eigenvalues (i.e., \({\lambda}_1\neq {\lambda}_2\))
In this case, \(\left(a+d\right)\neq 0\) and \(a\neq d\) (using Eq. (\ref{FirstMinusLastTerm2cross2})).
\(\left(a+d\right)\neq 0\) implies \(b=c=0\) (using Eqs. (\ref{secondTerm2cross2}) and (\ref{thirdTerm2cross2})).
Hence, \(a= \pm\sqrt{{\lambda}_1}\) and \(d = \pm\sqrt{{\lambda}_2}\), with the signs chosen independently (using Eqs. (\ref{firstTerm2cross2}) and (\ref{lastTerm2cross2})).
So, \(\boldsymbol{C}=\left(\begin{array}{cc} \pm\sqrt{{\lambda}_1} & 0\\ 0 & \pm\sqrt{{\lambda}_2} \end{array}\right)\) are square roots of \(\boldsymbol{D}=\left(\begin{array}{cc} {\lambda}_1 & 0\\ 0 & {\lambda}_2 \end{array}\right)\).
Clearly, \(\boldsymbol{C}\) will be real if and only if \({\lambda}_1\) and \({\lambda}_2\) are non-negative.
So, for a negative eigenvalue of \(\boldsymbol{D}\), the square root matrix \(\boldsymbol{C}\) will be complex valued, i.e., \(\boldsymbol{C}\in \mathbb{C}^{2\times 2}\).

Case II: Same Eigenvalues (i.e., \({\lambda}_1 = {\lambda}_2=\lambda\))
In this case, either \(\left(a+d\right)= 0\) or \(a = d\). (Using Eq. (\ref{FirstMinusLastTerm2cross2}))

Case IIA: \(\left(a+d\right)= 0\) and \(a\neq d\)
\(\left(a+d\right)= 0\) implies \(b\) and \(c\) can be chosen arbitrarily (using Eqs. (\ref{secondTerm2cross2}) and (\ref{thirdTerm2cross2})).
Hence, \(a= \pm\sqrt{\lambda-bc}\) and \(d= \mp\sqrt{\lambda-bc}\), with \(bc\neq\lambda\) so that \(a\neq d\) (using Eqs. (\ref{firstTerm2cross2}) and (\ref{lastTerm2cross2})).
So, \(\boldsymbol{C}=\left(\begin{array}{cc} \pm\sqrt{\lambda-bc} & c\\ b & \mp\sqrt{\lambda-bc} \end{array}\right)\) are square roots of \(\boldsymbol{D}=\left(\begin{array}{cc} {\lambda} & 0\\ 0 & {\lambda} \end{array}\right)\).
Clearly, \(\boldsymbol{C}\) will be real if and only if \(\lambda-bc> 0\), i.e., \(b\) and \(c\) are chosen such that \(bc<\lambda\).

Case IIB: \(\left(a+d\right)\neq 0\) and \(a= d\)
\(\left(a+d\right)\neq 0\) implies \(b=c=0\) (using Eqs. (\ref{secondTerm2cross2}) and (\ref{thirdTerm2cross2})).
Hence, \(a = d = \pm\sqrt{\lambda}\) (using Eqs. (\ref{firstTerm2cross2}) and (\ref{lastTerm2cross2})).
So, \(\boldsymbol{C}=\pm\left(\begin{array}{cc} \sqrt{\lambda} & 0\\ 0 & \sqrt{\lambda} \end{array}\right)\) are square roots of \(\boldsymbol{D}=\left(\begin{array}{cc} {\lambda} & 0\\ 0 & {\lambda} \end{array}\right)\).
Clearly, \(\boldsymbol{C}\) will be real if and only if \(\lambda> 0\) (so that \(a+d\neq 0\)).

Case IIC: \(\left(a+d\right)= 0\) and \(a=d\)
These two conditions together imply \(a=d=0\), while \(b\) and \(c\) can be chosen arbitrarily subject to Eqs. (\ref{firstTerm2cross2}) and (\ref{lastTerm2cross2}), which reduce to \(bc=\lambda\).
So, \(\boldsymbol{C}=\left(\begin{array}{cc} 0 & c\\ b & 0 \end{array}\right)\) with \(bc=\lambda\) are square roots of \(\boldsymbol{D}=\left(\begin{array}{cc} {\lambda} & 0\\ 0 & {\lambda} \end{array}\right)\).
Clearly, \(\boldsymbol{C}\) will be real whenever \(b\) and \(c\) are real with \(bc=\lambda\).

It is very interesting to observe that for \(\lambda = -\mu^2\), \(\mu\in\mathbb{R}\), the choice \(b=-\mu\) and \(c=\mu\) leads to the real skew-symmetric matrix \(\boldsymbol{C}=\left(\begin{array}{cc} 0 & \mu\\ -\mu & 0 \end{array}\right)\). We are free to choose \(b\) and \(c\) on the anti-diagonal simply so as to make their product \(\lambda\); hence, the square root of the diagonal matrix is not unique. Both \(\left(\begin{array}{cc} 0 & \mu\\ -\mu & 0 \end{array}\right)\) and \(\left(\begin{array}{cc} 0 & -\mu\\ \mu & 0 \end{array}\right)\) are real skew-symmetric square roots of \(\boldsymbol{D}=\left(\begin{array}{cc} -\mu^2 & 0 \\ 0 & -\mu^2 \end{array}\right)\).

Generalization of the above results leads to the following: for a \(2k\times 2k\) real negative definite diagonal matrix \(\boldsymbol{D} = diag(-{\mu}^2_1,-{\mu}^2_1,-{\mu}^2_2,-{\mu}^2_2,\ldots,-{\mu}^2_k,-{\mu}^2_k)\) with \({\mu}_i\in\mathbb{R} \; \forall i= 1,\ldots,k\), there exists a real skew-symmetric \(2k\times 2k\) matrix \(\boldsymbol{C}\), consisting of the diagonal \(2\times 2\) blocks \(\left(\begin{array}{cc} 0 & {\mu}_1\\ -{\mu}_1 & 0 \end{array}\right),\left(\begin{array}{cc} 0 & {\mu}_2\\ -{\mu}_2 & 0 \end{array}\right),\ldots,\left(\begin{array}{cc} 0 & {\mu}_k\\ -{\mu}_k & 0 \end{array}\right)\), such that \(\boldsymbol{C}^2=\boldsymbol{D}\).

Pull-back of a vector field

\[\begin{array}{c@{}c@{}c} \mathbb{T}\mathcal{B}_0 & \overset{T\boldsymbol{\varphi}^{-1}}{\longleftarrow} & \mathbb{T}\mathcal{B}_t \\ \boldsymbol{\varphi}^{*}(\boldsymbol{v}){\uparrow} && {\uparrow} \boldsymbol{v} \\ \mathcal{B}_0 & \overset{\boldsymbol{\varphi}}{\longrightarrow} & \mathcal{B}_t \end{array}\]

The pull-back vector field can be defined as,

\[\begin{align} \boldsymbol{\varphi}^{*}(\boldsymbol{v}) &:= T\boldsymbol{\varphi}^{-1} \circ\boldsymbol{v}\circ\boldsymbol{\varphi} \label{eq:PullBackVectorFieldTangentMap} \end{align}\]

Using a coordinate cover of the manifold \(\mathcal{B}_0\), the formula given by Eq. (\ref{eq:PullBackVectorFieldTangentMap}) may be written in component form as,

\[\begin{align} \left(\boldsymbol{\varphi}^{*}(\boldsymbol{v})\right)^i (\boldsymbol{X},t) = \left(\frac{\partial\left(\boldsymbol{\varphi}^{-1}\right)^i}{\partial x^j} \circ\boldsymbol{\varphi}_t\right)(\boldsymbol{X}) \left(v^j\circ\boldsymbol{\varphi}_t\right)(\boldsymbol{X}) = \frac{\partial\left(\boldsymbol{\varphi}^{-1}\right)^i}{\partial x^j} \left(\boldsymbol{x},t\right) v^j(\boldsymbol{x},t) \label{eq:PullBackVectorFieldComponentForm} \end{align}\]

As the acceleration does not belong to the tangent space of the manifold, its pull-back or push-forward is not defined.