Matrices: A beginner's guide

From Qunet
Revision as of 11:40, 12 May 2022 by Mbyrd (talk | contribs) (Examples)


Matrices as Operations on Quantum States

Recall that we represent the states of our quantum system as

\[ |\psi\rangle = a|0\rangle + b|1\rangle \]
(m.1)

where $a$ and $b$ are complex numbers. Our objective is to use these two states to store and manipulate information. Because $|a|^2$ and $|b|^2$ are probabilities and must add up to one,

\[ |a|^2 + |b|^2 = 1. \]
(m.2)

This means that this vector is normalized, i.e. its magnitude (or length) is one. (Appendix B contains a basic introduction to complex numbers.) The basis vectors for such a space are the two vectors $|0\rangle$ and $|1\rangle$, which are called computational basis states. These two basis states are represented by

\[ |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
(m.3)

Thus, the qubit state can be rewritten as

\[ |\psi\rangle = a\begin{pmatrix} 1 \\ 0 \end{pmatrix} + b\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}. \]
(m.4)


A very common operation in computing is to change a 0 to a 1 and a 1 to a 0. The operator that does this is denoted $X$. It changes $|0\rangle$ to $|1\rangle$ and $|1\rangle$ to $|0\rangle$. So we write,

\[ X|0\rangle = |1\rangle, \qquad X|1\rangle = |0\rangle. \]
(m.5)

Notice that this means that acting with $X$ again gives back the original state: $X(X|0\rangle) = X|1\rangle = |0\rangle$. To represent these operations, we use matrices. It turns out that matrices are the way to represent all operations in quantum mechanics, and this will be shown in this section.


In matrix form, the $X$ operation is represented by

\[ X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \]
(m.6)

and its action on the basis states follows the rules for matrix operations developed below.
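The action of $X$ can also be checked numerically. The following short Python sketch (illustrative, not part of the original page) stores $X$ as the matrix with zeroes on the diagonal and ones off it, and applies it to the computational basis states:

```python
# Illustration: the NOT operation X as a 2x2 matrix acting on the
# column vectors that represent |0> and |1>.

X = [[0, 1],
     [1, 0]]

def apply(matrix, vec):
    """Matrix-vector multiplication for small lists of lists."""
    return [sum(matrix[i][k] * vec[k] for k in range(len(vec)))
            for i in range(len(matrix))]

ket0 = [1, 0]  # |0>
ket1 = [0, 1]  # |1>

print(apply(X, ket0))            # X|0> = |1>  ->  [0, 1]
print(apply(X, apply(X, ket0)))  # X acting twice returns |0>  ->  [1, 0]
```

Applying `X` twice returns the original basis vector, matching the observation above.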

Matrices

Basic Definition and Representations

A matrix is an array of numbers of the following form, with columns (col. 1, col. 2, etc.) and rows (row 1, row 2, etc.). The entries of the matrix are labeled by the row and column. So the entry of a matrix $A$ will be $a_{ij}$, where $i$ is the row and $j$ is the column where the number is found. This is how it looks:

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \]

Notice that we represent the whole matrix with a capital letter $A$. We could also represent it using all of its entries, the array of numbers seen in the equation above. Another way to represent it is to write it as $A = (a_{ij})$. By this we mean the array of numbers in the parentheses.


Examples

For example,

\[ A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \]

is a $2\times 3$ matrix (two rows and three columns), with entries $a_{11} = 1$, $a_{12} = 2$, $a_{23} = 6$, and so on.

Matrix Addition

Matrix addition is performed by adding each element of one matrix to the corresponding element of another matrix. Let our two matrices be as above, $A = (a_{ij})$ and $B = (b_{ij})$. To represent these as arrays (taking the $2\times 2$ case for concreteness),

\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}. \]

Then the sum, which we could call $C$, is given by

\[ C = A + B = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{pmatrix}. \]

In other words, the sum gives $c_{11} = a_{11} + b_{11}$, etc. We add them component by component, as we do for vectors.
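As a minimal sketch (not from the original text), component-by-component addition for matrices stored as lists of lists looks like this in Python:

```python
# Matrix addition: c_ij = a_ij + b_ij, for matrices of the same shape.

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

print(mat_add(A, B))  # [[6, 8], [10, 12]]
```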

Multiplying a Matrix by a Number

When multiplying a matrix by a number, each element of the matrix gets multiplied by that number. Seem familiar? This is what was done for vectors.

Let $c$ be some number. Then

\[ cA = \begin{pmatrix} c\,a_{11} & c\,a_{12} \\ c\,a_{21} & c\,a_{22} \end{pmatrix}, \]

that is, $(cA)_{ij} = c\,a_{ij}$.
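A small illustrative sketch (not from the text): multiplying a matrix by a number multiplies every entry by that number.

```python
# Scalar multiplication: every entry of A is multiplied by the number c.

def scalar_mult(c, A):
    return [[c * entry for entry in row] for row in A]

A = [[1, 2],
     [3, 4]]

print(scalar_mult(3, A))  # [[3, 6], [9, 12]]
```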

Multiplying two Matrices

Two matrices can be multiplied when the number of columns of the first equals the number of rows of the second. The product, which we could call $C = AB$, is given by

\[ c_{ij} = \sum_k a_{ik} b_{kj}, \]

i.e., the entry in the $i$th row and $j$th column of $C$ comes from multiplying the $i$th row of $A$ into the $j$th column of $B$, term by term, and summing. For two $2\times 2$ matrices this reads

\[ AB = \begin{pmatrix} a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21} & a_{21}b_{12}+a_{22}b_{22} \end{pmatrix}. \]
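The row-into-column rule can be sketched in Python as follows (an illustration, not from the original page):

```python
# Matrix multiplication: c_ij = sum_k a_ik * b_kj.

def mat_mul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B."""
    n = len(B)       # rows of B = columns of A
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
I = [[1, 0],
     [0, 1]]

print(mat_mul(A, I))  # the identity leaves A unchanged: [[1, 2], [3, 4]]
print(mat_mul(A, A))  # [[7, 10], [15, 22]]
```

Note that multiplying by the identity matrix returns the matrix unchanged, as discussed in the next section.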

Notation

There are many aspects of linear algebra that are quite useful in quantum mechanics. We will briefly discuss several of these aspects here. First, some definitions and properties are provided that will be useful. Some familiarity with matrices will be assumed, although many basic definitions are also included.


Let us denote some matrix by $A$. The set of all $m\times n$ matrices with real entries is $M_{m\times n}(\mathbb{R})$. Such matrices are said to be real since they have all real entries. Similarly, the set of $m\times n$ complex matrices is $M_{m\times n}(\mathbb{C})$. For the set of $n\times n$ square complex matrices, we simply write $M_n(\mathbb{C})$.


We will also refer to the set of matrix elements, $a_{ij}$, where the first index ($i$ in this case) labels the row and the second labels the column. Thus the element $a_{23}$ is the element in the second row and third column. A comma is inserted if there is some ambiguity. For example, in a large matrix the element in the 2nd row and 12th column is written as $a_{2,12}$ to distinguish it from $a_{21,2}$, the element in the 21st row and 2nd column.


The Identity Matrix

An identity matrix has the property that when it is multiplied by any matrix, that matrix is unchanged. That is, for any matrix $A$,

\[ IA = AI = A. \]

Such an identity matrix always has ones along the diagonal and zeroes everywhere else. For example, the $2\times 2$ identity matrix is

\[ I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]

It is straightforward to verify that any matrix is unchanged when multiplied by the identity matrix.

Complex Conjugate

The complex conjugate of a matrix is the matrix with each element replaced by its complex conjugate. In other words, to take the complex conjugate of a matrix, one takes the complex conjugate of each entry in the matrix. We denote the complex conjugate with a star, like this: $A^*$. For example,

\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad \Rightarrow \quad A^* = \begin{pmatrix} a_{11}^* & a_{12}^* \\ a_{21}^* & a_{22}^* \end{pmatrix}. \]
(C.2)

(Notice that the notation for a matrix is a capital letter, whereas the entries are represented by lower-case letters.)
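In Python, where `j` denotes the imaginary unit, conjugating every entry can be sketched like this (an illustration, not from the text):

```python
# Complex conjugate of a matrix: conjugate each entry.
# Python ints, floats, and complex numbers all have a .conjugate() method.

def conj(A):
    return [[z.conjugate() for z in row] for row in A]

A = [[1 + 2j, 3],
     [2 - 1j, 4 - 5j]]

print(conj(A))  # each entry conjugated: 1+2j -> 1-2j, 2-1j -> 2+1j, etc.
```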

Transpose

The transpose of a matrix is the same set of elements, but now the first row becomes the first column, the second row becomes the second column, and so on. Thus the rows and columns are interchanged. For example, for a $2\times 2$ square matrix, the transpose is given by

\[ A^T = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix}. \]
(C.3)

Hermitian Conjugate

The complex conjugate and transpose of a matrix is called the Hermitian conjugate, or simply the dagger, of a matrix. It is called the dagger because of the symbol used to denote it, a dagger ($\dagger$):

\[ A^\dagger \equiv (A^*)^T. \]
(C.4)

For our example,

\[ A^\dagger = \begin{pmatrix} a_{11}^* & a_{21}^* \\ a_{12}^* & a_{22}^* \end{pmatrix}. \]

If a matrix is its own Hermitian conjugate, i.e. $A^\dagger = A$, then we call it a Hermitian matrix. (Clearly this is only possible for square matrices.) Hermitian matrices are very important in quantum mechanics since their eigenvalues are real. (See Sec.(Eigenvalues and Eigenvectors).)
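A short sketch (illustrative, using the lists-of-lists convention from the earlier examples) of the dagger operation, and a check that a sample matrix is Hermitian:

```python
# Hermitian conjugate: transpose, then conjugate each entry.

def dagger(A):
    rows, cols = len(A), len(A[0])
    return [[A[i][j].conjugate() for i in range(rows)] for j in range(cols)]

H = [[2, 1 - 1j],
     [1 + 1j, 3]]

print(dagger(H) == H)  # True: H equals its own dagger, so H is Hermitian
```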


Index Notation

Very often we write the product of two matrices $A$ and $B$ simply as $AB$ and let $AB = C$. However, it is also quite useful to write this in component form. In this case, if these are $n\times n$ matrices, the component form will be

\[ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. \]

This says that the element in the $i$th row and $j$th column of the matrix $C$ is the sum $a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$. The transpose of $C$ has elements

\[ (C^T)_{ij} = c_{ji} = \sum_{k=1}^{n} a_{jk} b_{ki}. \]

Now if we were to write this in terms of the transposes of $A$ and $B$ as well, this would read

\[ (C^T)_{ij} = \sum_{k=1}^{n} (A^T)_{kj} (B^T)_{ik} = \sum_{k=1}^{n} (B^T)_{ik} (A^T)_{kj}. \]

This gives us a way of seeing the general rule that

\[ (AB)^T = B^T A^T. \]

It follows that

\[ (AB)^\dagger = B^\dagger A^\dagger. \]
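The rule that the transpose of a product reverses the order can be spot-checked numerically; this Python sketch (illustrative, not from the original) compares both sides on a concrete example:

```python
# Numerical check of (AB)^T = B^T A^T on a pair of 2x2 matrices.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2],
     [3, 4]]
B = [[0, 5],
     [6, 7]]

print(transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A)))  # True
```

One example is of course not a proof; the index manipulation above is what establishes the rule in general.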

The Trace

The trace of a matrix is the sum of the diagonal elements and is denoted $\mathrm{Tr}$. So for example, the trace of an $n\times n$ matrix $A$ is

\[ \mathrm{Tr}(A) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}. \]

Some useful properties of the trace are the following:

  1. $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$.
  2. $\mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B)$.
  3. $\mathrm{Tr}(cA) = c\,\mathrm{Tr}(A)$ for any number $c$.

Using the first of these results,

\[ \mathrm{Tr}(ABC) = \mathrm{Tr}(C(AB)) = \mathrm{Tr}(CAB) = \mathrm{Tr}(BCA), \]

so the trace is invariant under cyclic permutations of a product. This relation is used so often that we state it here explicitly.
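A quick numerical sketch (illustrative, not from the text) of the trace and of the property $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$:

```python
# The trace is the sum of the diagonal entries.

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

print(trace(A))                                      # 1 + 4 = 5
print(trace(mat_mul(A, B)) == trace(mat_mul(B, A)))  # True, even though AB != BA
```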

The Determinant

For a square matrix, the determinant is quite a useful thing. For example, an $n\times n$ matrix is invertible if and only if its determinant is not zero. So let us define the determinant and give some properties and examples.


The determinant of a $2\times 2$ matrix,

\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \]
(C.5)

is given by

\[ \det(A) = a_{11}a_{22} - a_{12}a_{21}. \]
(C.6)

Higher-order determinants can be written in terms of smaller ones in a recursive way. For example, let

\[ A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

Then

\[ \det(A) = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}. \]

The determinant of a $3\times 3$ matrix can also be written in terms of its components as

\[ \det(A) = \sum_{i,j,k} \epsilon_{ijk}\, a_{1i}\, a_{2j}\, a_{3k}, \]
(C.7)

where the symbol $\epsilon_{ijk}$ is defined by

\[ \epsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3), \\ -1 & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3), \\ 0 & \text{if any index is repeated.} \end{cases} \]
(C.8)

Let us consider the example of the $3\times 3$ matrix given above. The determinant can be calculated by

\[ \det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}, \]

where, explicitly, the minors $M_{1j}$ are

\[ M_{11} = a_{22}a_{33} - a_{23}a_{32}, \qquad M_{12} = a_{21}a_{33} - a_{23}a_{31}, \qquad M_{13} = a_{21}a_{32} - a_{22}a_{31}, \]
(C.9)

so that

\[ \det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}). \]
(C.10)

Given the values of the minors in Eq. (C.9), this agrees with the recursive expansion above.
The determinant has several properties that are useful to know. A few are listed here:

  1. The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: $\det(A^T) = \det(A)$.
  2. The determinant of a product is the product of determinants: $\det(AB) = \det(A)\det(B)$.

From this last property, another specific property can be derived. If we take the determinant of the product of a matrix and its inverse, we find

\[ \det(A A^{-1}) = \det(A)\det(A^{-1}) = \det(I) = 1, \]

since the determinant of the identity is one. This implies that

\[ \det(A^{-1}) = \frac{1}{\det(A)}. \]

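The recursive definition and the product rule can be checked numerically. This Python sketch (illustrative, not from the original page) expands along the first row and verifies $\det(AB) = \det(A)\det(B)$ on a concrete pair of integer matrices:

```python
# Recursive determinant via expansion along the first row,
# plus a numerical check of det(AB) = det(A) det(B).

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating signs.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 1]]
B = [[1, 1, 0],
     [0, 2, 1],
     [1, 0, 1]]

print(det(A))                                 # 7
print(det(B))                                 # 3
print(det(mat_mul(A, B)) == det(A) * det(B))  # True
```

This expansion has factorial cost and is only meant to mirror the recursive definition; practical software uses faster methods.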
The Inverse of a Matrix

The inverse of an $n\times n$ square matrix $A$ is another $n\times n$ square matrix, denoted $A^{-1}$, such that

\[ A A^{-1} = A^{-1} A = I, \]

where $I$ is the identity matrix consisting of zeroes everywhere except the diagonal, which has ones. For example, the $3\times 3$ identity matrix is

\[ I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]

It is important to note that a matrix is invertible if and only if its determinant is nonzero. Thus one only needs to calculate the determinant to see whether a matrix has an inverse or not.
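For the $2\times 2$ case there is a well-known closed form for the inverse (a standard formula, assumed here rather than derived in the text above): swap the diagonal entries, negate the off-diagonal ones, and divide by the determinant.

```python
# 2x2 inverse: A^{-1} = (1/det) * [[d, -b], [-c, a]] for A = [[a, b], [c, d]].

def inverse_2x2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible (determinant is zero)")
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[1, 2],
     [3, 4]]

print(inverse_2x2(A))  # [[-2.0, 1.0], [1.5, -0.5]]
```

Note how the determinant check comes first, reflecting the invertibility criterion above.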

Hermitian Matrices

Hermitian matrices are important for a variety of reasons; primarily, it is because their eigenvalues are real. Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians. The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one. In any case, it is often desirable to represent Hermitian matrices using a real linear combination of a complete set of Hermitian matrices. A set of Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set. Let $\{\lambda_i\}$ be a complete set. Then any Hermitian matrix $H$ can be represented by $H = \sum_i c_i \lambda_i$, where the $c_i$ are real numbers. The set can always be taken to be a set of traceless Hermitian matrices together with the identity matrix. This is convenient for the density matrix (its trace is one) because the identity part of an $n\times n$ Hermitian matrix is $(\mathrm{Tr}(H)/n)\,I$ if we take all the other matrices in the set to be traceless. For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase which can often be neglected.

One example of such a set which is extremely useful is the set of Pauli matrices. These are discussed in detail in Chapter 2 and in particular in Section 2.4.
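As an illustrative sketch of such a decomposition (the Pauli matrices are covered in Chapter 2; the coefficient formula $c_i = \mathrm{Tr}(\sigma_i H)/2$ is a standard fact assumed here, not stated in the text above), the following Python snippet expands a $2\times 2$ Hermitian matrix in the basis $\{I, \sigma_x, \sigma_y, \sigma_z\}$ and reconstructs it:

```python
# Expand a 2x2 Hermitian matrix H in the basis {I, sigma_x, sigma_y, sigma_z}
# using the (assumed) standard coefficient formula c_i = Tr(sigma_i H) / 2.

I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

H = [[2, 1 - 1j],
     [1 + 1j, 0]]

coeffs = [trace(mat_mul(S, H)) / 2 for S in (I2, SX, SY, SZ)]
print(coeffs)  # all coefficients are real, as claimed for Hermitian H

# Rebuild H from the coefficients: H = sum_i c_i * basis_i.
recon = [[sum(c * S[i][j] for c, S in zip(coeffs, (I2, SX, SY, SZ)))
          for j in range(2)] for i in range(2)]
print(recon == H)  # True
```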

Unitary Matrices

A unitary matrix is one whose inverse is also its Hermitian conjugate, $U^{-1} = U^\dagger$, so that

\[ U U^\dagger = U^\dagger U = I. \]

If the unitary matrix also has determinant one, it is said to be a special unitary matrix. The set of $n\times n$ unitary matrices is denoted $U(n)$, and the set of $n\times n$ special unitary matrices is denoted $SU(n)$.

Unitary matrices are particularly important in quantum mechanics because they describe the evolution of quantum states. They have this ability due to the fact that the rows and columns of unitary matrices (viewed as vectors) are orthonormal. (This is made clear in an example below.) This means that when they act on a basis vector of the form

\[ e_j = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \]
(C.11)

with a single 1, in say the $j$th spot, and zeroes everywhere else, the result is a normalized complex vector. Acting on a set of orthonormal vectors of the form given in Eq. (C.11) will produce another orthonormal set.

Let us consider the example of a $2\times 2$ unitary matrix,

\[ U = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \]
(C.12)

The inverse of this matrix is the Hermitian conjugate,

\[ U^{-1} = U^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix}, \]
(C.13)

provided that the matrix satisfies the constraints

\[ |a|^2 + |b|^2 = 1, \qquad |c|^2 + |d|^2 = 1, \qquad a c^* + b d^* = 0, \]
(C.14)

and

\[ |a|^2 + |c|^2 = 1, \qquad |b|^2 + |d|^2 = 1, \qquad a^* b + c^* d = 0. \]
(C.15)

Looking at each row as a vector, the constraints in Eq. (C.14) are the orthonormality conditions for the vectors forming the rows. Similarly, the constraints in Eq. (C.15) are the orthonormality conditions for the vectors forming the columns.
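These orthonormality conditions can be verified numerically for a familiar example. The sketch below (illustrative, not from the original page) uses the real matrix with entries $\pm 1/\sqrt{2}$ and checks that $U U^\dagger = I$ up to floating-point error:

```python
# Check that a Hadamard-like matrix (entries +-1/sqrt(2)) is unitary.
import math

s = 1 / math.sqrt(2)
U = [[s, s],
     [s, -s]]

def dagger(A):
    return [[A[i][j].conjugate() for i in range(len(A))]
            for j in range(len(A[0]))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

product = mat_mul(U, dagger(U))
identity = [[1, 0], [0, 1]]

ok = all(abs(product[i][j] - identity[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True: the rows (and columns) of U are orthonormal
```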

Inner and Outer Products

Now that we have a definition for the Hermitian conjugate, we consider the case of an $n\times 1$ matrix, i.e. a vector. In Dirac notation, this is

\[ |\psi\rangle = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}. \]

The Hermitian conjugate comes up so often that we use the following notation for vectors:

\[ |\psi\rangle^\dagger = \begin{pmatrix} a_1^* & a_2^* & \cdots & a_n^* \end{pmatrix} \equiv \langle\psi|. \]

This is a row vector, and in Dirac notation it is denoted by the symbol $\langle\psi|$, which is called a bra. Let us consider a second complex vector,

\[ |\phi\rangle = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}. \]

The inner product between $|\psi\rangle$ and $|\phi\rangle$ is computed as follows:

\[ \langle\psi|\phi\rangle = \begin{pmatrix} a_1^* & a_2^* & \cdots & a_n^* \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = \sum_{i=1}^{n} a_i^* b_i. \]
(C.16)

The outer product between these same two vectors is

\[ |\psi\rangle\langle\phi| = \begin{pmatrix} a_1 b_1^* & a_1 b_2^* & \cdots & a_1 b_n^* \\ a_2 b_1^* & a_2 b_2^* & \cdots & a_2 b_n^* \\ \vdots & & \ddots & \vdots \\ a_n b_1^* & a_n b_2^* & \cdots & a_n b_n^* \end{pmatrix}, \]

an $n\times n$ matrix.
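Both products can be sketched in a few lines of Python (an illustration, not part of the original page), following $\langle\psi|\phi\rangle = \sum_i a_i^* b_i$ and the rule that the outer product has entries $a_i b_j^*$:

```python
# Inner and outer products of complex vectors stored as flat lists.

def inner(psi, phi):
    """<psi|phi>: conjugate the first vector, multiply, and sum."""
    return sum(a.conjugate() * b for a, b in zip(psi, phi))

def outer(psi, phi):
    """|psi><phi|: the matrix with entries psi_i * conj(phi_j)."""
    return [[a * b.conjugate() for b in phi] for a in psi]

psi = [1, 1j]  # |psi>
phi = [1, 0]   # |phi> = |0>

print(inner(psi, psi))  # <psi|psi> = |1|^2 + |1j|^2 = 2, the squared length
print(inner(psi, phi))  # overlap of |psi> with |0>
print(outer(phi, phi))  # |0><0| = [[1, 0], [0, 0]], a projector
```

The inner product of a vector with itself recovers the normalization condition from Eq. (m.2) at the start of this page.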