Matrices as Operations on Quantum States
The states of a quantum system can be written as
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\]
where $\alpha$ and $\beta$ are complex numbers. These states are used to represent quantum systems that can be used to store information. Because $|\alpha|^2$ and $|\beta|^2$ are probabilities and must add up to one,
\[
|\alpha|^2 + |\beta|^2 = 1.
\]
This means that this vector is normalized, i.e. its magnitude (or length) is one. (Appendix B contains a basic introduction to complex numbers.) The basis vectors for such a space are the two vectors $|0\rangle$ and $|1\rangle$, which are called computational basis states. These two basis states are represented by
\[
|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad
|1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\]
Thus, the qubit state $|\psi\rangle$ can be rewritten as
\[
|\psi\rangle = \alpha \begin{pmatrix} 1 \\ 0 \end{pmatrix}
             + \beta \begin{pmatrix} 0 \\ 1 \end{pmatrix}
             = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.
\]
A very common operation in computing is to change a 0 to a 1 and a 1 to a 0. The operation that does this is called a NOT operation and is denoted $X$. This operator does both: it changes $|0\rangle$ to $|1\rangle$ and $|1\rangle$ to $|0\rangle$. So we write,
\[
X|0\rangle = |1\rangle \qquad \text{and} \qquad X|1\rangle = |0\rangle.
\]
Notice that this means that acting with $X$ again gives back the original state. Matrices, which are arrays of numbers, are the mathematical incarnation of these operations. It turns out that matrices are the way to represent almost all of the operations in quantum computing, as will be shown in this section.
Let us list some important matrices that will be used as examples below:
\[
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},
\]
\[
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
\]
These all have the general form
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix},
\]
where the numbers $a$, $b$, $c$, and $d$ can be complex numbers.
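To see how such a matrix acts as an operation on a quantum state, here is a minimal sketch in Python with NumPy (the library choice and the variable names ket0, ket1 are illustrative, not part of the text):

    import numpy as np

    # The NOT operation X and the computational basis states as column vectors.
    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)
    ket0 = np.array([1, 0], dtype=complex)   # |0>
    ket1 = np.array([0, 1], dtype=complex)   # |1>

    print(X @ ket0)                          # [0, 1], i.e. X|0> = |1>
    print(X @ ket1)                          # [1, 0], i.e. X|1> = |0>

    # Acting twice with X returns the original state, since X X = I.
    print(np.allclose(X @ X, np.eye(2)))     # True

    # On a general qubit (alpha, beta), X swaps the two amplitudes.
    alpha, beta = 0.6, 0.8
    print(X @ np.array([alpha, beta]))       # [0.8+0j, 0.6+0j]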
Matrices
Basic Definition and Representations
A matrix is an array of numbers of the following form with columns, col. 1, col. 2, etc., and rows, row 1, row 2, etc. The entries for the matrix are labeled by the row and column. So the entry of a matrix will be $a_{ij}$, where $i$ is the row and $j$ is the column where the number is found. This is how it looks:
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}.
\]
Notice that we represent the whole matrix with a capital letter $A$. The matrix has $m$ rows and $n$ columns, so we say that $A$ is an $m \times n$ matrix. We could also represent it using all of the entries, the array of numbers seen in the equation above. Another way to represent it is to write it as $A = (a_{ij})$. By this we mean that it is the array of numbers in the parentheses.
Examples
The matrix above,
\[
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\]
is a $2 \times 2$ matrix.
The matrix
\[
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}
\]
is $2 \times 3$, and
\[
\begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix}
\]
is a $3 \times 1$ matrix.
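These dimension conventions can be checked quickly with NumPy (a sketch with illustrative entries; note that NumPy indices start at 0 rather than 1):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])        # 2 rows, 3 columns
    print(A.shape)                   # (2, 3): a 2 x 3 matrix
    print(A[1, 2])                   # 6: the entry in row 2, column 3

    col = np.array([[1], [2], [3]])
    print(col.shape)                 # (3, 1): a column vector is a 3 x 1 matrix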
Matrix Addition
Matrix addition is performed by adding each element of one matrix to the corresponding element in the other matrix. Let our two matrices be $A$ and $B$, with entries labeled as above. To represent these as arrays,
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad
B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}.
\]
Then the sum, which we could call $C = A + B$, is given by
\[
C = A + B = \begin{pmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{pmatrix}.
\]
In other words, the sum gives $c_{11} = a_{11} + b_{11}$, $c_{12} = a_{12} + b_{12}$, etc. We add them component by component, like we do with vectors.
Multiplying a Matrix by a Number
When multiplying a matrix by a number, each element of the matrix gets multiplied by that number. Seem familiar? This is what was done for vectors.
Let $c$ be some number. Then
\[
cA = \begin{pmatrix} c\,a_{11} & c\,a_{12} \\ c\,a_{21} & c\,a_{22} \end{pmatrix}.
\]
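Both of these operations are elementwise, as a short NumPy sketch shows (the entries are illustrative):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[5, 6],
                  [7, 8]])

    print(A + B)      # [[ 6  8] [10 12]]: each entry is a_ij + b_ij
    print(3 * A)      # [[ 3  6] [ 9 12]]: every entry is multiplied by 3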
Multiplying two Matrices
The product of two matrices $A$ and $B$, which we could call $C = AB$, is given by multiplying the rows of $A$ into the columns of $B$:
\[
C = AB = \begin{pmatrix}
a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\
a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22}
\end{pmatrix},
\]
so that the entries of the product are $c_{ij} = \sum_k a_{ik} b_{kj}$.
Examples
Consider the matrix $X$ and the state $|0\rangle$ written as a column vector,
\[
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]
Then
\[
X|0\rangle = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix}
 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = |1\rangle.
\]
Let us multiply $X$ and $Z$ from above,
\[
XZ = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.
\]
It is helpful to notice that this is not the same as $ZX$; that is, $XZ \neq ZX$, so the order in which matrices are multiplied matters.
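The row-into-column rule and the fact that the order matters can be checked directly with NumPy (a sketch; @ is NumPy's matrix-product operator):

    import numpy as np

    X = np.array([[0, 1],
                  [1, 0]])
    Z = np.array([[1, 0],
                  [0, -1]])

    print(X @ Z)                              # [[ 0 -1] [ 1  0]]
    print(Z @ X)                              # [[ 0  1] [-1  0]]
    print(np.array_equal(X @ Z, Z @ X))       # False: XZ != ZX

    # The same product computed from the entry formula c_ij = sum_k a_ik b_kj.
    C = np.array([[sum(X[i, k] * Z[k, j] for k in range(2))
                   for j in range(2)] for i in range(2)])
    print(np.array_equal(C, X @ Z))           # True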
Notation
There are many aspects of linear algebra that are quite useful in
quantum mechanics. We will briefly discuss several of these aspects here.
First, some definitions and properties are provided that will
be useful. Some familiarity with matrices
will be assumed, although many basic definitions are also included.
Let us denote some matrix by $A$. The set of all $m \times n$ matrices with real entries is $M_{m \times n}(\mathbb{R})$. Such matrices
are said to be real since they have all real entries. Similarly, the
set of $m \times n$ complex matrices is $M_{m \times n}(\mathbb{C})$. For the
set of $n \times n$ square complex matrices, we simply write
$M_n$.
We will also refer to the set of matrix elements, $\{a_{ij}\}$, where the
first index ($i$ in this case) labels the row and the second
labels the column. Thus the element $a_{23}$ is the element in the
second row and third column. A comma is inserted if there is some
ambiguity. For example, in a large matrix the element in the
2nd row and 12th
column is written as $a_{2,12}$ to distinguish it from
$a_{21,2}$, the element in the 21st row and 2nd column.
The Identity Matrix
An identity matrix has the property that when it is multiplied by any matrix, that matrix is unchanged. That is, for any matrix $A$,
\[
IA = AI = A.
\]
Such an identity matrix always has ones along the diagonal and zeroes everywhere else. For example, the $2 \times 2$ identity matrix is
\[
I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
It is straightforward to verify that any matrix is not changed when multiplied by the identity matrix.
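A quick NumPy check of this property (the example matrix is illustrative):

    import numpy as np

    I = np.eye(2)                    # the 2 x 2 identity matrix
    A = np.array([[1, 2],
                  [3, 4]])
    print(np.allclose(I @ A, A) and np.allclose(A @ I, A))   # True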
Complex Conjugate
The complex conjugate of a matrix
is the matrix with each element replaced by its complex conjugate. In
other words, to take the complex conjugate of a matrix, one takes the
complex conjugate of each entry in the matrix. We denote the complex
conjugate with a star, like this: $A^*$. For example,
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\quad\Longrightarrow\quad
A^* = \begin{pmatrix} a_{11}^* & a_{12}^* \\ a_{21}^* & a_{22}^* \end{pmatrix}.
\tag{C.2}
\]
(Notice that the notation for a matrix is a capital letter, whereas
the entries are represented by lower case
letters.)
Transpose
The transpose of a matrix is the same set of
elements, but now the first row becomes the first column, the second row
becomes the second column, and so on. Thus the rows and columns are
interchanged. For example, for a square matrix, the
transpose is given by
\[
A^T = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}^T
    = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix}.
\tag{C.3}
\]
Hermitian Conjugate
The complex conjugate and transpose of a matrix is called the Hermitian conjugate, or simply the dagger, of a matrix. It is called the dagger because of the symbol used to denote it ($\dagger$):
\[
A^\dagger \equiv (A^*)^T = (A^T)^*.
\tag{C.4}
\]
For our example,
\[
A^\dagger = \begin{pmatrix} a_{11}^* & a_{21}^* \\ a_{12}^* & a_{22}^* \end{pmatrix}.
\]
If a matrix is its own Hermitian conjugate, i.e. $A^\dagger = A$, then
we call it a Hermitian matrix.
(Clearly this is only possible for square matrices.) Hermitian
matrices are very important in quantum mechanics since their
eigenvalues are real. (See Sec.(Eigenvalues and Eigenvectors).)
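These three operations, and the Hermitian test $A^\dagger = A$, can be checked with NumPy as follows (the example matrices are illustrative choices, not from the text):

    import numpy as np

    A = np.array([[1 + 1j, 2],
                  [3j, 4 - 2j]])

    A_conj = A.conj()            # complex conjugate: conjugate every entry
    A_T = A.T                    # transpose: interchange rows and columns
    A_dag = A.conj().T           # Hermitian conjugate (dagger): conjugate AND transpose

    print(np.allclose(A_dag, A_T.conj()))    # True: (A*)^T = (A^T)*

    # A Hermitian matrix equals its own dagger.
    Hm = np.array([[2, 1 - 1j],
                   [1 + 1j, -3]])
    print(np.allclose(Hm, Hm.conj().T))      # True: Hm is Hermitian
    print(np.allclose(A, A.conj().T))        # False: A is not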
Trace
The trace of a matrix is the sum of the diagonal
elements and is denoted $\mathrm{Tr}(A)$. So for example, the trace of an
$n \times n$ matrix $A$ is
\[
\mathrm{Tr}(A) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}.
\]
Some useful properties of the trace are the following:
- $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$.
- $\mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B)$.
- $\mathrm{Tr}(cA) = c\,\mathrm{Tr}(A)$ for any number $c$.
Using the first of these results, the trace of a product is invariant under cyclic permutations,
\[
\mathrm{Tr}(ABC) = \mathrm{Tr}(BCA) = \mathrm{Tr}(CAB).
\]
This relation is used so often that we state it here explicitly.
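A short NumPy check of the trace and of the $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$ property (the matrices are illustrative):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[0, 5],
                  [6, 7]])

    print(np.trace(A))                                    # 5 = 1 + 4
    print(np.trace(A @ B), np.trace(B @ A))               # 55 and 55
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True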
Determinant
The determinant of a $2 \times 2$ matrix,
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},
\tag{C.5}
\]
is given by
\[
\det(A) = a_{11} a_{22} - a_{12} a_{21}.
\tag{C.6}
\]
Higher-order determinants can be written in terms of smaller ones in a recursive way. For example, let
\[
B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix}.
\]
Then
\[
\det(B) = b_{11}\begin{vmatrix} b_{22} & b_{23} \\ b_{32} & b_{33} \end{vmatrix}
        - b_{12}\begin{vmatrix} b_{21} & b_{23} \\ b_{31} & b_{33} \end{vmatrix}
        + b_{13}\begin{vmatrix} b_{21} & b_{22} \\ b_{31} & b_{32} \end{vmatrix}.
\]
The determinant of a $3 \times 3$ matrix can
also be written in terms of its components as
\[
\det(B) = \sum_{i,j,k} \epsilon_{ijk}\, b_{1i}\, b_{2j}\, b_{3k},
\tag{C.7}
\]
where the symbol $\epsilon_{ijk}$ is defined by
\[
\epsilon_{ijk} =
\begin{cases}
+1 & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3), \\
-1 & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3), \\
\ 0 & \text{otherwise.}
\end{cases}
\tag{C.8}
\]
Let us consider the example of the $3 \times 3$ matrix $B$ given
above. The determinant can be calculated by
\[
\det(B) = \sum_{i,j,k} \epsilon_{ijk}\, b_{1i}\, b_{2j}\, b_{3k},
\]
where, explicitly,
\[
\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = +1, \qquad
\epsilon_{132} = \epsilon_{213} = \epsilon_{321} = -1,
\tag{C.9}
\]
and all other $\epsilon_{ijk}$ are zero, so that
\[
\det(B) = \epsilon_{123}\, b_{11}b_{22}b_{33} + \epsilon_{231}\, b_{12}b_{23}b_{31} + \epsilon_{312}\, b_{13}b_{21}b_{32}
        + \epsilon_{132}\, b_{11}b_{23}b_{32} + \epsilon_{213}\, b_{12}b_{21}b_{33} + \epsilon_{321}\, b_{13}b_{22}b_{31}.
\tag{C.10}
\]
Now given the values of $\epsilon_{ijk}$ in Eq. C.9,
this is
\[
\det(B) = b_{11}b_{22}b_{33} + b_{12}b_{23}b_{31} + b_{13}b_{21}b_{32}
        - b_{11}b_{23}b_{32} - b_{12}b_{21}b_{33} - b_{13}b_{22}b_{31}.
\]
The determinant has several properties that are useful to know. A few are listed here:
- The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: $\det(A^T) = \det(A)$.
- The determinant of a product is the product of determinants: $\det(AB) = \det(A)\det(B)$.
From this last property, another specific property can be derived.
If we take the determinant of the product of a matrix and its
inverse, we find
\[
\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det(I) = 1,
\]
since the determinant of the identity is one. This implies that
\[
\det(A^{-1}) = \frac{1}{\det(A)}.
\]
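These formulas and properties can be checked numerically with NumPy (a sketch with illustrative entries):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    B = np.array([[2., 0., 1.],
                  [1., 3., 2.],
                  [0., 1., 4.]])

    # 2 x 2 formula: a11*a22 - a12*a21.
    print(np.linalg.det(A), 1 * 4 - 2 * 3)                    # both approximately -2

    # det(A^T) = det(A) and det(AB) = det(A) det(B).
    print(np.isclose(np.linalg.det(B.T), np.linalg.det(B)))   # True
    A2 = np.array([[0., 1.], [1., 1.]])
    print(np.isclose(np.linalg.det(A @ A2),
                     np.linalg.det(A) * np.linalg.det(A2)))   # True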
The Inverse of a Matrix
The inverse of an $n \times n$ square matrix $A$ is another square matrix,
denoted $A^{-1}$, such that
\[
A A^{-1} = A^{-1} A = I,
\]
where $I$ is the identity matrix consisting of zeroes everywhere
except the diagonal, which has ones. For example, the $3 \times 3$
identity matrix is
\[
I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
It is important to note that a matrix is invertible if and only if its determinant is nonzero. Thus one only needs to calculate the
determinant to see if a matrix has an inverse or not.
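A NumPy sketch of computing an inverse and of the determinant test for invertibility (the matrices are illustrative):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    print(np.linalg.det(A))                    # approximately -2: nonzero, so A is invertible
    A_inv = np.linalg.inv(A)
    print(np.allclose(A @ A_inv, np.eye(2)))   # True: A A^{-1} = I

    S = np.array([[1., 2.],
                  [2., 4.]])                   # second row is twice the first
    print(np.linalg.det(S))                    # approximately 0: S has no inverse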
Hermitian Matrices
Hermitian matrices are important for a variety of reasons, primarily because their eigenvalues are real. Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians. The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one. In any case, it is often desirable to represent Hermitian matrices using a real linear combination of a complete set of Hermitian matrices. A set of Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set. Let $\{B_i\}$ be a complete set. Then any Hermitian matrix $A$ can be represented by $A = \sum_i c_i B_i$, where the coefficients $c_i$ are real. The set can always be taken to be a set of traceless Hermitian matrices together with the identity matrix. This is convenient for the density matrix (its trace is one) because the identity part of an $n \times n$ Hermitian matrix is fixed by its trace, $(\mathrm{Tr}(A)/n)\,I$, if we take all the others in the set to be traceless. For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase which can often be neglected.
One example of such a set which is extremely useful is the set of Pauli matrices. These are discussed in detail in Chapter 2 and in particular in Section 2.4.
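As an illustration of such a decomposition, here is a NumPy sketch that expands a $2 \times 2$ Hermitian matrix in the set $\{I, X, Y, Z\}$; the coefficient formula $c_i = \mathrm{Tr}(\sigma_i A)/2$ and the example matrix are illustrative choices, not taken from the text:

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    basis = [I2, X, Y, Z]            # the identity plus three traceless Hermitian matrices

    A = np.array([[2.0, 1 - 1j],
                  [1 + 1j, -0.5]])   # an arbitrary Hermitian matrix

    # Real expansion coefficients: c_i = Tr(sigma_i A) / 2.
    coeffs = [np.trace(s @ A).real / 2 for s in basis]
    rebuilt = sum(c * s for c, s in zip(coeffs, basis))

    print(coeffs)                    # [0.75, 1.0, 1.0, 1.25], all real
    print(np.allclose(rebuilt, A))   # True: A = sum_i c_i sigma_i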
Unitary Matrices
A unitary matrix is one whose
inverse is also its Hermitian conjugate, $U^{-1} = U^\dagger$, so that
\[
U^\dagger U = U U^\dagger = I.
\]
If the unitary matrix also has determinant one, it is said to be a special unitary matrix. The set of
$n \times n$ unitary matrices is denoted $U(n)$
and the set of special unitary matrices is denoted $SU(n)$.
Unitary matrices are particularly important in quantum mechanics
because they describe the evolution of quantum states.
They have this ability due to the fact that the rows and columns of unitary matrices (viewed as vectors) are orthonormal. (This is made clear in an example below.) This means that when
they act on a basis vector of the form
\[
\begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix},
\tag{C.11}
\]
with a single 1 in, say, the $i$th spot and zeroes everywhere else, the result is a normalized complex vector. Acting on a set of
orthonormal vectors of the form given in Eq.(C.11)
will produce another orthonormal set.
Let us consider the example of a $2 \times 2$ unitary matrix,
\[
U = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.
\tag{C.12}
\]
The inverse of this matrix is the Hermitian conjugate,
\[
U^{-1} = U^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix},
\tag{C.13}
\]
provided that the matrix satisfies the constraints
\[
|a|^2 + |b|^2 = 1, \qquad |c|^2 + |d|^2 = 1, \qquad a c^* + b d^* = 0,
\tag{C.14}
\]
and
\[
|a|^2 + |c|^2 = 1, \qquad |b|^2 + |d|^2 = 1, \qquad a^* b + c^* d = 0.
\tag{C.15}
\]
Looking at each row as a vector, the constraints in
Eq.(C.14) are the orthonormality conditions for the
vectors forming the rows. Similarly, the constraints in
Eq.(C.15) are the orthonormality conditions for the
vectors forming the columns.
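As a numerical illustration, here is a NumPy sketch that checks these conditions for a familiar unitary matrix (the Hadamard matrix, an illustrative choice):

    import numpy as np

    U = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    # U^dagger is the inverse: U^dagger U = U U^dagger = I.
    print(np.allclose(U.conj().T @ U, np.eye(2)))     # True
    print(np.allclose(U @ U.conj().T, np.eye(2)))     # True

    # Rows are orthonormal (Eq. C.14); the same holds for the columns (Eq. C.15).
    print(np.isclose(np.vdot(U[0], U[0]), 1))         # True
    print(np.isclose(np.vdot(U[0], U[1]), 0))         # True

    # Unitaries preserve the magnitude of a normalized vector.
    v = np.array([0.6, 0.8j])
    print(np.linalg.norm(v), np.linalg.norm(U @ v))   # both 1.0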
Inner and Outer Products
It is very helpful to note that a column vector with $n$ entries is an $n \times 1$ matrix. A row vector is a $1 \times n$ matrix.
Now that we have a definition for the Hermitian conjugate, consider the
case of a $2 \times 1$ matrix, i.e. a vector. In Dirac notation, this is
\[
|v\rangle = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
\]
The Hermitian conjugate comes up so often that we use the following
notation for vectors:
\[
|v\rangle^\dagger = \begin{pmatrix} v_1^* & v_2^* \end{pmatrix} \equiv \langle v|.
\]
This is a row vector and in Dirac notation is denoted by the symbol $\langle v|$, which is called a bra. Let us consider a second complex vector,
\[
|w\rangle = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}.
\]
The inner product between $\langle v|$ and $|w\rangle$
is computed as follows:
\[
\langle v|w\rangle = \begin{pmatrix} v_1^* & v_2^* \end{pmatrix}
                     \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
                   = v_1^* w_1 + v_2^* w_2.
\tag{C.16}
\]
The vector $|w\rangle$ is called a ket. When you put a bra together with a ket, you get a bracket. This is the origin of the terms.
The outer product between these same two vectors is
\[
|v\rangle\langle w| = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
                      \begin{pmatrix} w_1^* & w_2^* \end{pmatrix}
                    = \begin{pmatrix} v_1 w_1^* & v_1 w_2^* \\ v_2 w_1^* & v_2 w_2^* \end{pmatrix}.
\]
This type of product is also called a Kronecker product or a tensor product. Vectors and matrices can be considered special cases of the more general class of tensors. A tensor can have any number of indices indicating rows, columns, and, in the case of a three-index tensor, depth.
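A NumPy sketch of both products (the vectors are illustrative; note that np.vdot conjugates its first argument, matching the bra):

    import numpy as np

    v = np.array([1 + 1j, 2 - 1j])
    w = np.array([3j, 1])

    inner = np.vdot(v, w)            # <v|w> = v1* w1 + v2* w2
    print(inner)                     # (5+4j)
    print(v.conj() @ w)              # the same number, written as a row times a column

    outer = np.outer(v, w.conj())    # |v><w|, a 2 x 2 matrix with entries v_i w_j^*
    print(outer)
    print(outer.shape)               # (2, 2)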
As noted above, unitary matrices are very important because they preserve the magnitude of a complex vector. In other words, if the magnitude of a vector is one, for example $\langle v|v\rangle = 1$, then $\langle v|U^\dagger U|v\rangle = \langle v|v\rangle = 1$, so $U|v\rangle$ also has magnitude one.