Matrices and Vectors: A new beginning

From Qunet
Revision as of 12:51, 11 March 2022 by Mbyrd (talk | contribs) (Adding Vectors)

Vectors

Here we introduce vectors and the notation that we use for vectors. We then give some facts about real vectors before discussing the complex vectors used in quantum mechanics.

Vectors: Defining and Representing

You may have heard the definition of a vector as a quantity with both magnitude and direction. While this is true and often used in science classes, our purpose is different, so we will simply define a vector as a set of numbers written in a row or a column. When the vector is written as a row of numbers, it is called a row vector; when it is written as a set of numbers in a column, it is called a column vector. As we will see, the two can contain the same set of numbers, but each is used in a slightly different way.

Examples

This is an example of a row vector:
\[ \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} \]

This is an example of a column vector:
\[ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \]

Real Vectors

If you are familiar with vectors, the simple definition of a vector --- an object that has magnitude and direction --- is helpful to keep in mind even when dealing with complex and/or abstract vectors as we will here. However, this is not necessary, and we will see how to perform all of the operations that we need using only our arrays of numbers. In three-dimensional space, a vector is often written as
\[ \vec{v} = v_x \hat{x} + v_y \hat{y} + v_z \hat{z}, \]
where the hat (\(\hat{\ }\)) denotes a unit vector and the components \(v_x, v_y, v_z\) are just numbers. The unit vectors are also known as basis vectors. The unit vectors have magnitude equal to one. (The magnitude is the size, or length, of a vector.) So if \(\hat{v}\) is a unit vector, then
\[ |\hat{v}| = 1. \]

Vector Operations

Adding Vectors

When adding vectors, it is important to note that you can only add vectors of the same type. (No adding "apples and oranges," so to speak.) So you can add two column vectors that each have three entries, but you cannot add a column vector to a row vector, and you cannot add a vector with two components to a vector with three components.


Examples

\[ \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} a_1 + b_1 \\ a_2 + b_2 \\ a_3 + b_3 \end{pmatrix} \]
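The component-by-component rule can be sketched in a few lines of Python (an illustrative snippet; the function name add_vectors is our own, not standard notation):

```python
def add_vectors(u, v):
    """Add two vectors component by component.

    Both vectors must have the same number of entries
    (no adding a two-component vector to a three-component one).
    """
    if len(u) != len(v):
        raise ValueError("can only add vectors of the same size")
    return [a + b for a, b in zip(u, v)]

# Adding two three-component vectors, entry by entry.
print(add_vectors([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```

Attempting to add vectors of different sizes raises an error, mirroring the "apples and oranges" rule above.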

Products of Vectors

The inner product, or dot product, for two real three-dimensional vectors,
\[ \vec{a} = a_x \hat{x} + a_y \hat{y} + a_z \hat{z}, \qquad \vec{b} = b_x \hat{x} + b_y \hat{y} + b_z \hat{z}, \]
can be computed as follows:
\[ \vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z. \]

For the inner product of \(\vec{a}\) with itself, we get the square of the magnitude of \(\vec{a}\), denoted \(a\):
\[ \vec{a} \cdot \vec{a} = a^2 = a_x^2 + a_y^2 + a_z^2. \]

If we want a unit vector in the direction of \(\vec{a}\), we can simply divide it by its magnitude:
\[ \hat{a} = \frac{\vec{a}}{a}. \]

Now, of course, \(\hat{a} \cdot \hat{a} = 1\), which can easily be checked.
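These operations translate directly into code. The following Python sketch (the helper names dot, magnitude, and unit are our own choices) computes the inner product, the magnitude, and a unit vector:

```python
import math

def dot(u, v):
    """Inner (dot) product of two real vectors: sum of u_i * v_i."""
    return sum(a * b for a, b in zip(u, v))

def magnitude(v):
    """|v| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def unit(v):
    """Unit vector in the direction of v: divide each component by |v|."""
    m = magnitude(v)
    return [a / m for a in v]

a = [3.0, 4.0, 0.0]
print(dot(a, a))     # 25.0, the squared magnitude
print(magnitude(a))  # 5.0
print(dot(unit(a), unit(a)))  # 1.0, up to floating-point rounding
```

The last line checks numerically that a unit vector dotted with itself gives one.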

There are several ways to represent a vector. The ones we will use most often are column and row vector notations. So, for example, we could write the vector \(\vec{a}\) above as
\[ \vec{a} = \begin{pmatrix} a_x \\ a_y \\ a_z \end{pmatrix}. \]

In this case, our unit vectors are represented by the following:
\[ \hat{x} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad \hat{y} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \qquad \hat{z} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \]

We next turn to the subject of complex vectors and the relevant notation. We will see how to compute the inner product later, since some other definitions are required.

Complex Vectors

For complex vectors in quantum mechanics, Dirac notation is used most often. This notation uses the symbol \(|\psi\rangle\), called a ket, for a vector. So our vector \(\vec{a}\) would be \(|a\rangle\).

For qubits, i.e. two-state quantum systems, complex vectors of the following form will often be used:

\[ |\psi\rangle = a|0\rangle + b|1\rangle \tag{C.1} \]

where
\[ |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \]
are the basis vectors. The two numbers \(a\) and \(b\) are complex numbers, so the vector is said to be a complex vector.
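As a rough illustration, a qubit state can be stored as a length-two list of Python complex numbers. The amplitudes a and b below are arbitrary values chosen purely so that the state is normalized:

```python
# Basis kets |0> and |1> stored as column vectors (length-two lists).
ket0 = [1, 0]
ket1 = [0, 1]

# The state a|0> + b|1>, with complex amplitudes a and b
# (values chosen here purely for illustration).
a = (1 + 1j) / 2
b = (1 - 1j) / 2
psi = [a * x + b * y for x, y in zip(ket0, ket1)]
print(psi)  # [(0.5+0.5j), (0.5-0.5j)]

# A physical qubit state is normalized: |a|^2 + |b|^2 = 1.
norm_sq = sum(abs(c) ** 2 for c in psi)
print(round(norm_sq, 12))  # 1.0
```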

Matrices

Basic Definition and Representations

A matrix is an array of numbers of the following form, with columns (col. 1, col. 2, etc.) and rows (row 1, row 2, etc.). The entries of the matrix are labeled by the row and column, so the entries of a matrix \(A\) are written \(a_{ij}\), where \(i\) is the row and \(j\) is the column where the number is found. This is how it looks:
\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \]

Notice that we represent the whole matrix with a capital letter \(A\). We could also represent it using all of the entries, the array of numbers seen in the equation above. Another way to represent it is to write it as \(A = (a_{ij})\); by this we mean that it is the array of numbers in the parentheses.

Matrix Addition

Matrix addition is performed by adding each element of one matrix to the corresponding element of the other matrix. Let our two matrices be as above, \(A = (a_{ij})\) and \(B = (b_{ij})\). To represent these as arrays,
\[ A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & & \vdots \\ b_{m1} & \cdots & b_{mn} \end{pmatrix}. \]

The sum, which we could call \(C = A + B\), is given by
\[ C = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ \vdots & & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{pmatrix}. \]

In other words, the sum gives \(c_{ij} = a_{ij} + b_{ij}\), etc. We add them component by component, as we do with vectors.
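A minimal Python sketch of entry-by-entry addition (the helper name mat_add is our own):

```python
def mat_add(A, B):
    """Add two matrices entry by entry: c_ij = a_ij + b_ij."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must have the same shape")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(mat_add(A, B))  # [[11, 22], [33, 44]]
```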


Notation

There are many aspects of linear algebra that are quite useful in quantum mechanics. We will briefly discuss several of these aspects here. First, some definitions and properties are provided that will be useful. Some familiarity with matrices will be assumed, although many basic definitions are also included.


Let us denote some matrix by \(M\). The set of all \(m \times n\) matrices with real entries is \(M_{m \times n}(\mathbb{R})\). Such matrices are said to be real since they have all real entries. Similarly, the set of \(m \times n\) complex matrices is \(M_{m \times n}(\mathbb{C})\). For the set of \(n \times n\) square complex matrices, we simply write \(M_n(\mathbb{C})\).


We will also refer to the set of matrix elements, \(m_{ij}\), where the first index (\(i\) in this case) labels the row and the second labels the column. Thus the element \(m_{23}\) is the element in the second row and third column. A comma is inserted if there is some ambiguity. For example, in a large matrix the element in the 2nd row and 12th column is written as \(m_{2,12}\) to distinguish it from \(m_{21,2}\), the element in the 21st row and 2nd column.


The Identity Matrix

An identity matrix has the property that when it is multiplied by any matrix, that matrix is unchanged. That is, for any matrix \(M\),
\[ IM = MI = M. \]

Such an identity matrix always has ones along the diagonal and zeroes everywhere else. For example, the \(2 \times 2\) identity matrix is
\[ I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]

It is straightforward to verify that any matrix is unchanged when multiplied by the identity matrix.

Complex Conjugate

The complex conjugate of a matrix is the matrix with each element replaced by its complex conjugate. In other words, to take the complex conjugate of a matrix, one takes the complex conjugate of each entry in the matrix. We denote the complex conjugate with a star, like this: \(M^*\). For example, if
\[ M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}, \]
then
\[ M^* = \begin{pmatrix} m_{11}^* & m_{12}^* \\ m_{21}^* & m_{22}^* \end{pmatrix}. \tag{C.2} \]

(Notice that the notation for a matrix is a capital letter, whereas the entries are represented by lower-case letters.)

Transpose

The transpose of a matrix contains the same set of elements, but now the first row becomes the first column, the second row becomes the second column, and so on. Thus the rows and columns are interchanged. For example, for a \(2 \times 2\) square matrix, the transpose is given by
\[ M^T = \begin{pmatrix} m_{11} & m_{21} \\ m_{12} & m_{22} \end{pmatrix}. \tag{C.3} \]

Hermitian Conjugate

The complex conjugate and transpose of a matrix is called the Hermitian conjugate, or simply the dagger, of the matrix. It is called the dagger because of the symbol used to denote it (\(\dagger\)):
\[ M^\dagger \equiv (M^*)^T = (M^T)^*. \tag{C.4} \]

For our example,
\[ M^\dagger = \begin{pmatrix} m_{11}^* & m_{21}^* \\ m_{12}^* & m_{22}^* \end{pmatrix}. \]

If a matrix is its own Hermitian conjugate, i.e. \(M = M^\dagger\), then we call it a Hermitian matrix. (Clearly this is only possible for square matrices.) Hermitian matrices are very important in quantum mechanics since their eigenvalues are real. (See Sec. (Eigenvalues and Eigenvectors).)
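The dagger is easy to compute by hand or in code: conjugate every entry, then interchange rows and columns. A small Python sketch (helper names are ours) also checks that a sample Hermitian matrix equals its own dagger:

```python
def transpose(M):
    """Interchange rows and columns."""
    return [list(row) for row in zip(*M)]

def conjugate(M):
    """Complex-conjugate every entry."""
    return [[z.conjugate() for z in row] for row in M]

def dagger(M):
    """Hermitian conjugate: complex conjugate, then transpose."""
    return transpose(conjugate(M))

M = [[1 + 2j, 3 - 1j],
     [0 + 1j, 4 + 0j]]
print(dagger(M))  # [[(1-2j), -1j], [(3+1j), (4-0j)]]

# A Hermitian matrix is its own dagger.
H = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
print(dagger(H) == H)  # True
```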



Index Notation

Very often we write the product of two matrices \(A\) and \(B\) simply as \(AB\) and let \(C = AB\). However, it is also quite useful to write this in component form. If these are \(n \times n\) matrices, the component form is
\[ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. \]

This says that the element in the \(i\)th row and \(j\)th column of the matrix \(C\) is the sum \(a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}\). The transpose of \(C\) has elements
\[ (C^T)_{ij} = c_{ji} = \sum_{k=1}^{n} a_{jk} b_{ki}. \]

Now if we write this in terms of the transposes of \(A\) and \(B\) as well, it reads
\[ (C^T)_{ij} = \sum_{k=1}^{n} (B^T)_{ik} (A^T)_{kj}. \]

This gives us a way of seeing the general rule that
\[ (AB)^T = B^T A^T. \]

It follows that
\[ (AB)^\dagger = B^\dagger A^\dagger. \]
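The rule that the transpose of a product reverses the order can be checked on a concrete pair of matrices. Here is a short Python sketch using the component formula above (helper names are ours):

```python
def mat_mul(A, B):
    """Matrix product in component form: c_ij = sum over k of a_ik * b_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(M):
    """Interchange rows and columns."""
    return [list(row) for row in zip(*M)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Check the rule (AB)^T = B^T A^T on a concrete pair.
lhs = transpose(mat_mul(A, B))
rhs = mat_mul(transpose(B), transpose(A))
print(lhs)         # [[19, 43], [22, 50]]
print(lhs == rhs)  # True
```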

The Trace

The trace of a matrix is the sum of its diagonal elements and is denoted \(\mathrm{Tr}\). So, for example, the trace of an \(n \times n\) matrix \(M\) is
\[ \mathrm{Tr}(M) = \sum_{i=1}^{n} m_{ii}. \]

Some useful properties of the trace are the following:

  1. \(\mathrm{Tr}(AB) = \mathrm{Tr}(BA)\).
  2. \(\mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B)\).

Using the first of these results,
\[ \mathrm{Tr}(ABC) = \mathrm{Tr}(CAB) = \mathrm{Tr}(BCA). \]

This cyclic property is used so often that we state it here explicitly.
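A quick numerical check of these trace properties in Python (illustrative only; helper names are ours):

```python
def trace(M):
    """Sum of the diagonal elements."""
    return sum(M[i][i] for i in range(len(M)))

def mat_mul(A, B):
    """c_ij = sum over k of a_ik * b_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 3]]

# Tr(AB) = Tr(BA) ...
print(trace(mat_mul(A, B)), trace(mat_mul(B, A)))  # 5 5

# ... and hence the cyclic property Tr(ABC) = Tr(CAB).
ABC = mat_mul(mat_mul(A, B), C)
CAB = mat_mul(C, mat_mul(A, B))
print(trace(ABC) == trace(CAB))  # True
```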

The Determinant

For a square matrix, the determinant is quite a useful thing. For example, a square matrix is invertible if and only if its determinant is not zero. So let us define the determinant and give some properties and examples.


The determinant of a \(2 \times 2\) matrix,
\[ M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \tag{C.5} \]
is given by
\[ \det(M) = ad - bc. \tag{C.6} \]

Higher-order determinants can be written in terms of smaller ones in a recursive way. For example, let
\[ N = \begin{pmatrix} n_{11} & n_{12} & n_{13} \\ n_{21} & n_{22} & n_{23} \\ n_{31} & n_{32} & n_{33} \end{pmatrix}. \]
Then
\[ \det(N) = n_{11} \det\begin{pmatrix} n_{22} & n_{23} \\ n_{32} & n_{33} \end{pmatrix} - n_{12} \det\begin{pmatrix} n_{21} & n_{23} \\ n_{31} & n_{33} \end{pmatrix} + n_{13} \det\begin{pmatrix} n_{21} & n_{22} \\ n_{31} & n_{32} \end{pmatrix}. \]
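This recursive reduction generalizes to any size by cofactor expansion along the first row, with alternating signs. A Python sketch (the function name det is ours):

```python
def det(M):
    """Determinant by cofactor expansion along the first row.

    The 2x2 base case is ad - bc; larger determinants are reduced
    recursively to smaller ones, with alternating signs.
    """
    n = len(M)
    if n == 1:
        return M[0][0]
    if n == 2:
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    total = 0
    for j in range(n):
        # Minor: delete row 1 and column j+1.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

N = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(N))  # -3
```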


The determinant of a \(3 \times 3\) matrix can also be written in terms of its components as
\[ \det(N) = \sum_{i,j,k=1}^{3} \epsilon_{ijk}\, n_{1i}\, n_{2j}\, n_{3k}, \tag{C.7} \]
where the symbol \(\epsilon_{ijk}\) (the Levi-Civita symbol) is defined by
\[ \epsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3), \\ -1 & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3), \\ 0 & \text{otherwise}. \end{cases} \tag{C.8} \]

Let us consider the example of the \(2 \times 2\) matrix \(M\) given above. The determinant can be calculated by
\[ \det(M) = \sum_{i,j=1}^{2} \epsilon_{ij}\, M_{1i}\, M_{2j}, \]
where, explicitly,
\[ \epsilon_{12} = +1, \qquad \epsilon_{21} = -1, \qquad \epsilon_{11} = \epsilon_{22} = 0, \tag{C.9} \]
so that
\[ \det(M) = \epsilon_{12}\, M_{11} M_{22} + \epsilon_{21}\, M_{12} M_{21}. \tag{C.10} \]

Now, given the values of \(\epsilon_{ij}\) in Eq. (C.9), this is
\[ \det(M) = M_{11} M_{22} - M_{12} M_{21} = ad - bc, \]
in agreement with Eq. (C.6).

The determinant has several properties that are useful to know. A few are listed here:

  1. The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: \(\det(M^T) = \det(M)\).
  2. The determinant of a product is the product of determinants: \(\det(MN) = \det(M)\det(N)\).

From this last property, another specific property can be derived. If we take the determinant of the product of a matrix and its inverse, we find
\[ \det(M M^{-1}) = \det(M)\det(M^{-1}) = \det(I) = 1, \]
since the determinant of the identity is one. This implies that
\[ \det(M^{-1}) = \frac{1}{\det(M)}. \]
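The relation between the determinant of a matrix and that of its inverse can be checked directly for a 2x2 matrix, using the familiar closed-form inverse (swap the diagonal, negate the off-diagonal, divide by the determinant). The helper names below are ours:

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix; requires det(M) != 0."""
    d = det2(M)
    if d == 0:
        raise ValueError("matrix is not invertible (determinant is zero)")
    a, b = M[0]
    c, e = M[1]
    # Swap the diagonal, negate the off-diagonal, divide by the determinant.
    return [[e / d, -b / d], [-c / d, a / d]]

M = [[2.0, 1.0],
     [0.0, 3.0]]
print(det2(M))        # 6.0
print(det2(inv2(M)))  # 0.1666..., i.e. 1/det(M)
```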

The Inverse of a Matrix

The inverse of a square matrix \(M\) is another matrix, denoted \(M^{-1}\), such that
\[ M M^{-1} = M^{-1} M = I, \]
where \(I\) is the identity matrix, consisting of zeroes everywhere except the diagonal, which has ones. For example, the \(3 \times 3\) identity matrix is
\[ I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]

It is important to note that a matrix is invertible if and only if its determinant is nonzero. Thus one only needs to calculate the determinant to see whether a matrix has an inverse.

Hermitian Matrices

Hermitian matrices are important for a variety of reasons, primarily because their eigenvalues are real. Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians. The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one. In any case, it is often desirable to represent a Hermitian matrix as a real linear combination of a complete set of Hermitian matrices. A set of Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set. Let \(\{\lambda_i\}\) be a complete set. Then any Hermitian matrix \(H\) can be represented as \(H = \sum_i h_i \lambda_i\) with real coefficients \(h_i\). The set can always be taken to be the identity matrix together with a set of traceless Hermitian matrices. This is convenient for the density matrix (its trace is one) because, if we take all the other matrices in the set to be traceless, the identity part of an \(n \times n\) Hermitian matrix \(H\) is \((\mathrm{Tr}(H)/n)\, I\). For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase that can often be neglected.
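As a concrete sketch (using the Pauli matrices discussed below as the traceless part of the set), the expansion coefficients of a 2x2 Hermitian matrix can be extracted with the formula \(h_i = \tfrac{1}{2}\mathrm{Tr}(\lambda_i H)\), which relies on \(\mathrm{Tr}(\lambda_i \lambda_j) = 2\delta_{ij}\) for this particular set; the helper names and the example matrix are ours:

```python
# Identity plus the three Pauli matrices: together a complete set for
# 2x2 Hermitian matrices (the Pauli matrices are traceless and Hermitian).
I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def coeff(H, S):
    """Expansion coefficient (1/2) Tr(S H); real when H is Hermitian."""
    t = sum(sum(S[i][k] * H[k][i] for k in range(2)) for i in range(2))
    return (t / 2).real

H = [[2, 1 - 1j],
     [1 + 1j, 3]]  # a Hermitian example: H equals its own dagger
coeffs = [coeff(H, S) for S in (I2, sx, sy, sz)]
print(coeffs)  # [2.5, 1.0, 1.0, -0.5]
```

The first coefficient is \(\mathrm{Tr}(H)/2\), the identity part; the remaining three are real, as promised.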

One example of such a set which is extremely useful is the set of Pauli matrices. These are discussed in detail in Chapter 2 and in particular in Section 2.4.

Unitary Matrices

A unitary matrix is one whose inverse is its Hermitian conjugate, \(U^{-1} = U^\dagger\), so that
\[ U U^\dagger = U^\dagger U = I. \]

If the unitary matrix also has determinant one, it is said to be a special unitary matrix. The set of \(n \times n\) unitary matrices is denoted \(U(n)\), and the set of special unitary matrices is denoted \(SU(n)\).

Unitary matrices are particularly important in quantum mechanics because they describe the evolution of quantum states. They have this ability due to the fact that the rows and columns of a unitary matrix (viewed as vectors) are orthonormal. (This is made clear in an example below.) This means that when they act on a basis vector of the form

\[ e_i = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \tag{C.11} \]

with a single 1, in, say, the \(i\)th spot, and zeroes everywhere else, the result is a normalized complex vector. Acting on a set of orthonormal vectors of the form given in Eq. (C.11) will produce another orthonormal set.

Let us consider the example of a \(2 \times 2\) unitary matrix,
\[ U = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \tag{C.12} \]

The inverse of this matrix is the Hermitian conjugate,
\[ U^{-1} = U^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix}, \tag{C.13} \]

provided that the matrix satisfies the constraints
\[ |a|^2 + |b|^2 = 1, \qquad |c|^2 + |d|^2 = 1, \qquad a c^* + b d^* = 0, \tag{C.14} \]

and
\[ |a|^2 + |c|^2 = 1, \qquad |b|^2 + |d|^2 = 1, \qquad a^* b + c^* d = 0. \tag{C.15} \]

Looking at each row as a vector, the constraints in Eq. (C.14) are the orthonormality conditions for the vectors forming the rows. Similarly, the constraints in Eq. (C.15) are the orthonormality conditions for the vectors forming the columns.
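These constraints can be verified numerically for a standard example, the Hadamard matrix, whose rows and columns are manifestly orthonormal. The helper names in this Python sketch are ours:

```python
import math

def mat_mul(A, B):
    """c_ij = sum over k of a_ik * b_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def dagger(M):
    """Hermitian conjugate: conjugate every entry and transpose."""
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

# The Hadamard matrix: a standard example of a (real) unitary.
h = 1 / math.sqrt(2)
H = [[h, h],
     [h, -h]]

# U U^dagger should equal the identity, up to floating-point rounding.
P = mat_mul(H, dagger(H))
print(all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))  # True
```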

Inner and Outer Products

Now that we have a definition for the Hermitian conjugate, we consider the case of an \(n \times 1\) matrix, i.e. a vector. In Dirac notation, for \(n = 2\), this is
\[ |\psi\rangle = \begin{pmatrix} a \\ b \end{pmatrix}. \]

The Hermitian conjugate comes up so often that we use the following notation for vectors:
\[ |\psi\rangle^\dagger = \begin{pmatrix} a^* & b^* \end{pmatrix} \equiv \langle\psi|. \]

This is a row vector, and in Dirac notation it is denoted by the symbol \(\langle\psi|\), which is called a bra. Let us consider a second complex vector,
\[ |\phi\rangle = \begin{pmatrix} c \\ d \end{pmatrix}. \]

The inner product between \(|\psi\rangle\) and \(|\phi\rangle\) is computed as follows:
\[ \langle\psi|\phi\rangle = \begin{pmatrix} a^* & b^* \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = a^* c + b^* d. \tag{C.16} \]

The outer product between these same two vectors is
\[ |\psi\rangle\langle\phi| = \begin{pmatrix} a \\ b \end{pmatrix} \begin{pmatrix} c^* & d^* \end{pmatrix} = \begin{pmatrix} a c^* & a d^* \\ b c^* & b d^* \end{pmatrix}. \]
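Both products are simple to compute programmatically. This Python sketch (the helper names inner and outer are ours) mirrors Eq. (C.16): the bra's entries are conjugated before multiplying:

```python
def inner(psi, phi):
    """<psi|phi>: conjugate the bra's entries, multiply, and sum."""
    return sum(p.conjugate() * q for p, q in zip(psi, phi))

def outer(psi, phi):
    """|psi><phi|: the matrix with entries psi_i * conj(phi_j)."""
    return [[p * q.conjugate() for q in phi] for p in psi]

psi = [1 + 1j, 2 + 0j]
phi = [0 + 1j, 1 - 1j]

print(inner(psi, phi))  # (3-1j)
print(outer(psi, phi))
```

Note that the inner product of two column vectors yields a single number, while the outer product yields a matrix.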