
Introduction to tensor calculus: print version

Printable version of the book Introduction to tensor calculus
  • This book currently comprises about 61 A4 pages including pictures (as of August 14, 2007).

Introduction

The subject of this treatise is, first of all, second-order tensors (also called tensors of rank 2). (Note: scalars are tensors of order 0, vectors are tensors of order 1.)

Second-order tensors are mathematical operators which, when applied to a vector, produce another vector with certain properties. (Examples will follow later.)

After a coordinate system has been defined (agreed on) - in the following referred to as the basis, or basic system for short - every tensor can be described by a »matrix«, i.e. by a certain number of numbers in a certain order. For a scalar (tensor of order 0) this matrix consists only of the scalar itself. For a (three-dimensional) vector v (tensor of order 1) the matrix consists of the three components v1, v2, v3 that the vector has with respect to the basis used:

(v1  v2  v3)

This is a single-row matrix, or for short: a row matrix. However, it is also possible to describe the vector by a single-column matrix (for short: column matrix):

( v1 )
( v2 )
( v3 )
Note that these matrices are only simplified descriptions of the corresponding vector; they are not identical with the vector. This is important because the same vector is represented by completely different matrices in different bases, while the tensor itself is independent of the basis used (it is "invariant under changes of basis").

A 2nd-order tensor T can be represented with respect to a basis by a matrix with three rows and three columns, a so-called 3 x 3 matrix:

( T11  T12  T13 )
( T21  T22  T23 )
( T31  T32  T33 )
Here, too, the following applies: such a matrix is only one of arbitrarily many possible forms of representation of the tensor.
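The action of a 2nd-order tensor on a vector, both given by their matrices with respect to a chosen basis, can be sketched in plain Python (a minimal illustration; the numerical values are hypothetical examples, not taken from the text):

```python
def apply_tensor(T, v):
    """Matrix-vector product: the i-th component of the result is the
    scalar product of the i-th row of T with the column matrix of v."""
    return [sum(T[i][k] * v[k] for k in range(3)) for i in range(3)]

# Example tensor matrix: stretches the e1-direction by a factor of 2.
T = [[2, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
v = [1, 1, 1]

print(apply_tensor(T, v))  # [2, 1, 1] - another vector, as claimed above
```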

Since second-order tensors always appear together with a vector in calculations, and since the calculations are always carried out with matrices, the necessary terms and laws of matrix calculus are explained first.


Basic terms: matrices and matrix representation of vectors

A matrix of type (m, n) is a rectangular array of m·n quantities arranged in m rows (horizontal) and n columns (vertical). These quantities are called the elements of the matrix. The element aik (also Aik) of the matrix A lies in the i-th row and the k-th column. The elements can be real or complex numbers, but also other mathematical objects such as vectors, polynomials, differentials, and so on.

A matrix A of type (m, n) can be represented like this:

           ( a11  a12  ...  a1n )
A(m,n)  =  ( a21  a22  ...  a2n )  =  (aik)(m,n)
           ( ................... )
           ( am1  am2  ...  amn )
The index (m, n) on A and on (aik) can also be omitted if the type of the matrix is either obvious or unimportant.

The descriptions of a vector by a matrix presented in the introduction require that a basis be agreed on, to which the components of the vector (i.e. the elements of the matrix) refer. For the time being, this basis always consists of three mutually perpendicular unit vectors e1, e2, e3. (Another common name for these "basis vectors" is i, j, k.)

The representation of a (physical) vector v with the (scalar) components v1, v2, v3 in the component representation with respect to the basis {e1, e2, e3} = {ei} then reads

v = v1·e1 + v2·e2 + v3·e3
An identical representation of this vector by means of two matrices (»matrix representation«) is then possible (using the laws of matrix multiplication explained later):

                     ( e1 )
v = (v1  v2  v3)  ·  ( e2 )
                     ( e3 )
The matrix representation of a vector is therefore the product of the row matrix of its components with the column matrix of the unit vectors of the basis used.
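The component representation above can be evaluated numerically once the basis vectors are themselves given in components. A minimal sketch in plain Python (the component values are hypothetical examples):

```python
# Orthonormal basis vectors, written in their own components.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def from_components(components, basis):
    """Evaluate v = v1*e1 + v2*e2 + v3*e3 componentwise: the row matrix
    of the components is multiplied with the column matrix of the basis."""
    return tuple(sum(c * e[i] for c, e in zip(components, basis))
                 for i in range(3))

v = from_components((3, -1, 2), (e1, e2, e3))
print(v)  # (3, -1, 2)
```

With respect to this particular basis the result simply reproduces the components; with a different basis the same physical vector would get different components.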

To describe these basis vectors one cannot fall back on another basis, because one would then be faced with the task of describing the basis vectors of that new basis, which would lead to an infinite regress. Therefore the position of the basis vectors has to be stated with respect to the respective observer. This can happen, for example, by agreeing that the vector e1 lies in the observer's drawing plane and points to the right, while the vector e2 also lies in the drawing plane and points upwards. This also fixes the position of the vector e3, since it has to form a right-handed system with the first two vectors. In the following, we always assume that such an agreement has been made.

The so-called row vector of mathematics (in truth a "row matrix") only becomes a physical vector through multiplication with the "column matrix" of the ei (i = 1, 2, 3).

Since the component matrix of a vector applies only to a certain basis, this basis is always assumed to be defined in the following.

We now agree:

1. The quantities referred to in the future as "vectors" are always physical vectors, that is, directed physical quantities.

2. The matrix of the components of a vector v is always written as a column matrix and designated by (v) = (vi).

3. If, for compelling reasons (e.g. when multiplying matrices), the component matrix of a vector must be written as a row matrix, we use for it the designation

(v)T = (vi)T.

(v)T = (vi)T is called the transposed matrix of the original matrix (v) = (vi).


If the column matrix (v) is given, the row matrix (v)T follows from it by transposition, and vice versa.

It follows from equation 1.3:

((v)T)T = (v)
Note: The vector v remains unaffected by whether it is represented as a row matrix or as a column matrix, but of course the two matrices are not equal: according to the definition of equality (see below), only matrices of the same type (m, n) can be equal. It is useful always to distinguish clearly between a vector and its matrix (its matrix representation): a matrix is not a vector, and a vector is not a matrix. A vector is independent of the coordinate system (basis) used; its matrix is only one of several forms of representation of the vector, and it is (more precisely: its elements are) dependent on the basis used.
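The relation between the column matrix (v) and its transposed row matrix (v)T can be sketched in plain Python (a minimal illustration with hypothetical component values):

```python
def transpose(M):
    """Swap the rows and columns of a matrix given as a list of rows."""
    return [list(row) for row in zip(*M)]

col = [[3], [-1], [2]]        # (v): column matrix of the components
row = transpose(col)          # (v)T: the same components as a row matrix

print(row)                    # [[3, -1, 2]]
print(transpose(row) == col)  # True - transposing twice gives (v) back
```

Note that `col` and `row` are distinct objects of different type (3, 1) vs. (1, 3), even though both describe the same vector.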


Definitions

Equality of two matrices: two matrices A and B are equal if they are of the same type (m, n) and all corresponding elements of the matrices are equal: aik = bik.

Sum (difference) of two matrices: prerequisite: both matrices must be of the same type (m, n). Corresponding elements of the two matrices are added or subtracted: cik = aik ± bik.

Multiplication of a matrix A with a scalar: all elements of the matrix are multiplied by the scalar k.
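The two definitions above can be sketched in plain Python for matrices given as lists of rows (the numerical values are hypothetical examples):

```python
def mat_add(A, B):
    """Sum of two matrices of the same type (m, n): elementwise addition."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    """Multiply every element of A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```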

Transposed matrix: the transposed matrix AT is created by swapping the rows and columns of A.

It follows: the element of AT in the i-th row and k-th column is aki. In particular,

(AT)T = A

A square matrix has as many rows as columns: m = n.

A square matrix A is called symmetric if AT = A. Then aik = aki.

A square matrix is called skew-symmetric or antisymmetric if AT = -A. Then aik = -aki.

It follows that all elements aii on the "main diagonal" of the matrix (that is, the diagonal running from top left to bottom right) are equal to zero, since aii = -aii is possible only for aii = 0.
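Both symmetry conditions can be checked mechanically. A minimal sketch in plain Python (the example matrices are hypothetical):

```python
def is_symmetric(A):
    """Check A^T = A, i.e. a_ik = a_ki for all i, k."""
    n = len(A)
    return all(A[i][k] == A[k][i] for i in range(n) for k in range(n))

def is_antisymmetric(A):
    """Check A^T = -A, i.e. a_ik = -a_ki for all i, k."""
    n = len(A)
    return all(A[i][k] == -A[k][i] for i in range(n) for k in range(n))

S = [[1, 2], [2, 3]]    # symmetric
K = [[0, 5], [-5, 0]]   # antisymmetric - note the zeros on the main diagonal

print(is_symmetric(S), is_antisymmetric(K))  # True True
```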


Laws of calculation

Products of two matrices and vector-matrix products

The multiplication of two matrices A and B requires that the number of columns of A be equal to the number of rows of B. (This "concatenation condition" results from the following rule for calculating the product matrix.) The product AB of two matrices is again a matrix.

Calculation rule: the element cik is the scalar product of the i-th row of A with the k-th column of B. (For the scalar product of two vectors, see below.) When calculating this scalar product, the rows and columns are treated like (physical) vectors: the elements of the rows and columns are first multiplied by the corresponding unit vectors, and then the actual multiplication is carried out. The relevant laws of vector algebra apply:

e1·e1 = e2·e2 = e3·e3 = 1,     e1·e2 = e2·e3 = e3·e1 = 0.

We content ourselves here with the example of two 3 x 3 matrices: let

C = A·B,

in which

cik = ai1·b1k + ai2·b2k + ai3·b3k     (i, k = 1, 2, 3).
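The calculation rule above can be sketched in plain Python; each element of the product is the scalar product of a row of A with a column of B (the example matrices are hypothetical):

```python
def mat_mul(A, B):
    """Product of an (m, n) matrix A with an (n, p) matrix B:
    c_ik = sum over j of a_ij * b_jk."""
    n = len(B)  # number of rows of B = number of columns of A
    return [[sum(A[i][j] * B[j][k] for j in range(n))
             for k in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 0, 2],
     [0, 1, 0],
     [0, 0, 1]]
B = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]

print(mat_mul(A, B))  # [[1, 1, 2], [0, 1, 0], [0, 0, 1]]
```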