Chapter 9 - Dynamical Decoupling Controls


Introduction

In the last chapter, it was shown that a symmetry in the system-bath Hamiltonian, if present, could be used to construct states immune to noise. In this chapter we will see that under certain conditions it is possible to reduce errors, create a symmetry, or even remove errors in the evolution of a quantum system. This is done through repeated use of external controls which act on the system. These controls are often called "dynamical decoupling controls" due to their original objective of decoupling the system from the bath. They are quite generally useful controls to consider for the elimination and/or reduction of errors. In this chapter, a simple introduction to dynamical decoupling controls is given and some important concepts are discussed.

General Conditions

As stated in Chapter 8, the Hamiltonian describing the evolution of a system and bath which are coupled together can always be written as


<math>H = H_S \otimes \mathbb{1}_B + \mathbb{1}_S \otimes H_B + H_{SB}\,\!</math>    (9.1)

where <math>H_S\,\!</math> acts only on the system, <math>H_B\,\!</math> acts only on the bath, and

<math>H_{SB} = \sum_\alpha S_\alpha \otimes B_\alpha\,\!</math>

is the interaction Hamiltonian with the <math>S_\alpha\,\!</math> acting only on the system and the <math>B_\alpha\,\!</math> acting only on the bath.

The idea is to modify the evolution of the system and bath such that the errors are reduced or eliminated using external control Hamiltonians. These controls are called dynamical decoupling controls since they are used to decouple (at least approximately) the system from the bath. Since it can be difficult to change the states of a bath, and indeed one often does not know the details of the bath, the controls which are to be used for reducing errors should act on the system. As discussed previously, the errors arise from the system-bath interaction Hamiltonian and, in particular, the system operators <math>S_\alpha\,\!</math> are the operators which describe the effect of the coupling on the system. In general the interaction Hamiltonian is time-dependent since the bath operators <math>B_\alpha\,\!</math> will change in time. However, for short times we may assume the interaction Hamiltonian is unchanged, or at least approximately constant. This is sometimes called the short-time assumption in dynamical decoupling.

The Magnus Expansion

A fairly good starting point to see how this is done is the so-called Magnus expansion. (See Blanes, et al. and references therein.) The general problem is that a time-dependent operation applied to the Hamiltonian makes the Hamiltonian itself time-dependent, and one would like to solve the time-dependent Schrödinger equation:


<math>i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H(t)|\psi(t)\rangle\,\!</math>    (9.2)

which is sometimes written as


<math>i\hbar\frac{dU(t)}{dt} = H(t)U(t)\,\!</math>    (9.3)

The question is, what <math>U(t)\,\!</math> will solve this equation? If <math>U\,\!</math> and <math>H\,\!</math> were just numbers, the solution would be


<math>U(t) = \exp\left(-\frac{i}{\hbar}\int_0^t H(t^\prime)\,dt^\prime\right)\,\!</math>    (9.4)

However, when the Schrödinger equation is the equation to be solved, <math>U\,\!</math> and <math>H\,\!</math> are matrices. To be specific, <math>U\,\!</math> is a unitary matrix and <math>H\,\!</math> is a Hermitian matrix. The solution is often written in the form


<math>U(t) = \mathcal{T}\exp\left(-\frac{i}{\hbar}\int_0^t H(t^\prime)\,dt^\prime\right)\,\!</math>    (9.5)

where <math>\mathcal{T}\exp\,\!</math> denotes the time-ordered exponential. In this case, matrices do not commute so that the exponential must be handled with care. Operators must be ordered according to the time at which they appear in the evolution, and the expression in Eq.(9.4) is not the solution to the problem unless <math>H\,\!</math> is a constant matrix.

The solution to this problem is the following,


<math>U(\tau) = \exp\left(\Omega_1(\tau) + \Omega_2(\tau) + \cdots\right)\,\!</math>    (9.6)

where


<math>\Omega_1(\tau) = -\frac{i}{\hbar}\int_0^\tau dt_1\, H(t_1)\,\!</math>    (9.7)

and


<math>\Omega_2(\tau) = -\frac{1}{2\hbar^2}\int_0^\tau dt_1 \int_0^{t_1} dt_2\, [H(t_1),H(t_2)]\,\!</math>    (9.8)

where <math>\tau\,\!</math> is some characteristic time scale.

This expansion can be used to find approximations to the time-dependent evolution of a quantum system to any desired order and is thus worth noting. However, due to the introductory nature of this material, we will primarily discuss first-order theory and the reader should assume the calculations are for the first-order theory unless otherwise noted.
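To make the first-order (first Magnus term) approximation concrete, the following short numerical sketch compares the exact time-ordered evolution of Eq.(9.5) with the evolution generated by <math>\Omega_1\,\!</math> alone. The particular time-dependent Hamiltonian is only an illustrative choice (with <math>\hbar = 1\,\!</math>), not one taken from the text.

<pre>
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # illustrative time-dependent Hamiltonian (hbar = 1)
    return sz + np.cos(t) * sx

def exact_U(T, steps=2000):
    # time-ordered product of many short evolutions, cf. Eq.(9.5)
    dt = T / steps
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        U = expm(-1j * H((n + 0.5) * dt) * dt) @ U
    return U

def magnus1_U(T, steps=2000):
    # keep only the first Magnus term, Omega_1 = -i * integral of H(t) dt, cf. Eq.(9.7)
    dt = T / steps
    Omega1 = sum(-1j * H((n + 0.5) * dt) * dt for n in range(steps))
    return expm(Omega1)

for T in (0.1, 0.5, 1.0):
    err = np.linalg.norm(exact_U(T) - magnus1_U(T))
    print(f"T = {T:.1f}   ||U_exact - U_Magnus1|| = {err:.2e}")
</pre>

The discrepancy grows with the interval because the neglected higher-order terms involve commutators of <math>H(t)\,\!</math> at different times; for short times the first-order term dominates, which is the regime assumed in what follows.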

A First-Order Theory

To show how this theory of dynamical decoupling controls could work in an ideal case, let us consider a simple example. Suppose that the external controls (decoupling controls) are so strong that the Hamiltonian evolution can be neglected during the time the external controls are turned on. Due to their strength, we will also assume that they can be implemented in a very short time and that there are <math>N\,\!</math> different controls to be used. We will first use a given control <math>U_k\,\!</math> and then its inverse <math>U_k^\dagger\,\!</math>. Between the controls the system evolves for a short time <math>\Delta t\,\!</math>. After all control pulses have been implemented, the effective evolution of the system will be


<math>U_{\mathrm{eff}} = \prod_{k=1}^{N} U_k^\dagger\, U_0(\Delta t)\, U_k\,\!</math>    (9.9)

where


<math>U_0(\Delta t) = e^{-iH\Delta t/\hbar}\,\!</math>    (9.10)

and <math>H\,\!</math> is the free evolution Hamiltonian given by Eq.(9.1) above. Furthermore, suppose that the time <math>\Delta t\,\!</math> is small so that


<math>U_0(\Delta t) \approx \mathbb{1} - \frac{i}{\hbar}H\Delta t\,\!</math>    (9.11)

Now suppose that we let <math>\Delta t = t/N\,\!</math>. Inserting Eq.(9.11) into Eq.(9.9) and keeping only first order terms in the product gives


<math>U_{\mathrm{eff}} \approx \exp\left(-\frac{i}{\hbar}H_{\mathrm{eff}}\,t\right), \qquad H_{\mathrm{eff}} = \frac{1}{N}\sum_{k=1}^{N} U_k^\dagger H U_k\,\!</math>    (9.12)

This is a simple expression for the effective Hamiltonian evolution of a system undergoing a series of dynamical decoupling controls. Note that the assumptions are that the operations are strong (since the free evolution is neglected during the control pulses) and fast (since we assume that the Hamiltonian of Eq.(9.1) is constant during the entire time of this cycle of control pulses). Due to these strong and fast assumptions, these are often referred to as "bang-bang" controls.
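A minimal numerical check of Eq.(9.12), written in Python/NumPy; the Hamiltonian, pulse set, and time step are arbitrary illustrative choices (with <math>\hbar = 1\,\!</math>):

<pre>
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = 0.7 * sz + 0.3 * sx          # illustrative free Hamiltonian (hbar = 1)
controls = [I2, sx]              # bang-bang pulse set: identity, then sigma_x
dt = 1e-3                        # short free-evolution time between pulses

# cycle of controlled short evolutions, Eq.(9.9)
U_cycle = np.eye(2, dtype=complex)
for Uk in controls:
    U_cycle = Uk.conj().T @ expm(-1j * H * dt) @ Uk @ U_cycle

# first-order effective Hamiltonian, Eq.(9.12)
H_eff = sum(Uk.conj().T @ H @ Uk for Uk in controls) / len(controls)
U_eff = expm(-1j * H_eff * len(controls) * dt)

print(np.linalg.norm(U_cycle - U_eff))   # small: the error is second order in dt
</pre>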

It is important to note that these controls are rather unrealistic. That is, these criteria are never met completely. However, they are met approximately in some systems, most notably in nuclear magnetic resonance experiments where the so-called average Hamiltonian theory originated. More realistic pulses can be, and have been, explored for use in actual physical systems where they have been shown to reduce noise very effectively. This has been done by generalizing the theory beyond the first-order limit and without the assumption that the pulses are extremely strong.

The Single Qubit Case

The simplest case involves the elimination of an error on a single qubit. There are several types of errors that can degrade a qubit state, as discussed in Section 6.4: a bit flip, a phase flip, or both. In this section the first-order approximation is used to show how to eliminate first phase errors and then arbitrary errors on an arbitrary qubit state.


Phase Errors

Let us suppose that the Hamiltonian for the free evolution contains only an interaction part which induces a phase error,


<math>H = H_{SB} = \sigma_z \otimes B\,\!</math>    (9.13)

where <math>B\,\!</math> is the bath operator. This interaction Hamiltonian will couple the system to the bath and thus cause errors. The factor <math>\sigma_z\,\!</math> indicates that it is a phase error. Using the first-order theory, the objective is to find a series of pulses which will effectively decouple the system from the bath. In this case it can be done with only one decoupling pulse, <math>\sigma_x\,\!</math>. This will be denoted <math>X\,\!</math> and the identity (doing nothing) will be denoted <math>\mathbb{1}\,\!</math>. The effective Hamiltonian is


<math>H_{\mathrm{eff}} = \frac{1}{2}\left(\mathbb{1}H_{SB}\mathbb{1} + X H_{SB} X\right)\,\!</math>    (9.14)

A rotation about the x-axis by an angle <math>\pi\,\!</math> will rotate the Pauli matrix <math>\sigma_z\,\!</math> to <math>-\sigma_z\,\!</math>. (See Section C.5, in particular Section C.5.1.) This is because <math>\sigma_x\sigma_z\sigma_x = -\sigma_z\,\!</math>. After the pulse sequence <math>\mathbb{1}, X\,\!</math>, the system is decoupled from the bath because the effective Hamiltonian is zero! There is no more interaction between the system and bath! That is,


<math>H_{\mathrm{eff}} = \frac{1}{2}\left(\sigma_z \otimes B - \sigma_z \otimes B\right) = 0\,\!</math>    (9.15)

Thus the noise has been removed from the system.

This may be considered an averaging method. (As mentioned before, it is sometimes called average Hamiltonian theory in the NMR literature.) In this case, it is also sometimes called a parity kick since the sign of the interaction Hamiltonian is reversed giving just two terms which cancel.
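The cancellation in Eqs.(9.14)-(9.15) can be checked directly. In this sketch the bath operator <math>B\,\!</math> is just a randomly generated Hermitian matrix, purely for illustration:

<pre>
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# toy bath operator: any Hermitian matrix will do for the illustration
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = (A + A.conj().T) / 2

H_SB = np.kron(sz, B)                 # phase-error interaction, Eq.(9.13)
X = np.kron(sx, np.eye(4))            # the sigma_x pulse acts on the system only

H_eff = 0.5 * (H_SB + X @ H_SB @ X)   # parity kick average, Eq.(9.14)
print(np.allclose(H_eff, 0))          # True: the system is decoupled from the bath
</pre>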

Arbitrary Single Qubit Errors

Now let us consider arbitrary single qubit errors in an interaction Hamiltonian. The interaction will have the form


<math>H_{SB} = \sigma_x \otimes B_x + \sigma_y \otimes B_y + \sigma_z \otimes B_z\,\!</math>    (9.16)

The objective here is to eliminate all terms in the interaction Hamiltonian. It turns out that this may be accomplished in several different ways. Let us first consider the obvious choice of bang-bang pulses, <math>\{\mathbb{1},\sigma_x,\sigma_y,\sigma_z\}\,\!</math>. First, recall that the Pauli matrices have the property that <math>\sigma_i\sigma_j\sigma_i = \sigma_j\,\!</math> if <math>i=j\,\!</math> and <math>\sigma_i\sigma_j\sigma_i = -\sigma_j\,\!</math> if <math>i\neq j\,\!</math>. Then the effective Hamiltonian is


<math>H_{\mathrm{eff}} = \frac{1}{4}\left(H_{SB} + \sigma_x H_{SB}\sigma_x + \sigma_y H_{SB}\sigma_y + \sigma_z H_{SB}\sigma_z\right) = 0\,\!</math>    (9.17)

So again we see that the interaction Hamiltonian has been eliminated, so the errors will be removed.
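This elimination can also be verified numerically. In the sketch below the bath operators <math>B_x, B_y, B_z\,\!</math> are random Hermitian matrices chosen only for illustration:

<pre>
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

rng = np.random.default_rng(1)
def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

dB = 4  # toy bath dimension
H_SB = sum(np.kron(s, rand_herm(dB)) for s in (sx, sy, sz))   # Eq.(9.16)

pulses = [np.kron(p, np.eye(dB)) for p in (I2, sx, sy, sz)]
H_eff = sum(P.conj().T @ H_SB @ P for P in pulses) / len(pulses)   # Eq.(9.17)
print(np.allclose(H_eff, 0))   # True: every single-qubit error term averages away
</pre>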

Notice that if Eq.(9.9) is used, then the number of different pulses required is actually two. To see this, consider the sequence of pulses above, which includes <math>\sigma_y\,\!</math>. Eq.(9.9) indicates that the sequence is <math>\sigma_z U_0 \sigma_z\,\sigma_y U_0 \sigma_y\,\sigma_x U_0 \sigma_x\, U_0\,\!</math>. However, <math>\sigma_z\sigma_y = -i\sigma_x\,\!</math> and <math>\sigma_y\sigma_x = -i\sigma_z\,\!</math>. So this sequence is equivalent, up to an overall phase, to <math>\sigma_z U_0 \sigma_x U_0 \sigma_z U_0 \sigma_x U_0\,\!</math>, which involves only the two different pulses <math>\sigma_x\,\!</math> and <math>\sigma_z\,\!</math>, whereas the sum would seem to indicate three are required.
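The equivalence of the four-pulse product and the two-pulse sequence (up to an overall phase) can also be checked numerically; the stand-in free Hamiltonian below is an arbitrary illustrative choice:

<pre>
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.4 * sx + 0.9 * sy - 0.2 * sz    # stand-in for the free Hamiltonian (hbar = 1)
U0 = expm(-1j * H * 1e-2)             # a short period of free evolution

seq_four = sz @ U0 @ sz @ sy @ U0 @ sy @ sx @ U0 @ sx @ U0   # Eq.(9.9) with all four pulses
seq_two  = sz @ U0 @ sx @ U0 @ sz @ U0 @ sx @ U0             # only sigma_x and sigma_z pulses

print(np.allclose(seq_four, -seq_two))   # True: the sequences agree up to an overall phase
</pre>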

Extensions

As mentioned above, the theory of dynamical decoupling controls has been extended beyond the first-order limit and without the assumption that the pulses are extremely strong. However, even within the first-order theory there are several ways to extend these results in order to find a complete set of pulses that eliminates a particular error. Here we describe two of these that are also useful beyond first order. However, beginning with the first-order theory aids in our understanding of the extensions.

Groups of Transformations

One method for finding a set of pulses to achieve a particular decoupling goal is to choose the set of pulses to belong to a discrete subgroup of the unitary group. (See Appendix D for definitions and examples of groups.) Let us suppose that the set of <math>N\,\!</math> pulses <math>\{G_k\}\,\!</math> forms a group. Then our effective Hamiltonian after implementing a complete set of pulses is


<math>H_{\mathrm{eff}} = \frac{1}{N}\sum_{k=0}^{N-1} G_k^\dagger H G_k\,\!</math>    (9.18)

where, by convention, <math>G_0\,\!</math> is the identity and the sum is over every element of the group. This effective Hamiltonian commutes with any element of the group. To see this, let some particular element of the group be denoted <math>G_j\,\!</math>, and let


<math>[H_{\mathrm{eff}},G_j] = C\,\!</math>    (9.19)

where <math>C\,\!</math> is some constant (as of yet unknown). Then


<math>\frac{1}{N}\sum_{k=0}^{N-1} G_k^\dagger H G_k G_j - \frac{1}{N}\sum_{k=0}^{N-1} G_j G_k^\dagger H G_k = C\,\!</math>    (9.20)

We may rewrite this as


<math>G_j\left[\frac{1}{N}\sum_{k=0}^{N-1} (G_k G_j)^\dagger H (G_k G_j) - \frac{1}{N}\sum_{k=0}^{N-1} G_k^\dagger H G_k\right] = C\,\!</math>    (9.21)

or equivalently,


<math>\frac{1}{N}\sum_{k=0}^{N-1} (G_k G_j)^\dagger H (G_k G_j) - \frac{1}{N}\sum_{k=0}^{N-1} G_k^\dagger H G_k = G_j^\dagger C\,\!</math>    (9.22)

Now since both <math>G_k\,\!</math> and <math>G_j\,\!</math> are group elements, <math>G_k G_j\,\!</math> is also a group element. This is the closure property (Section D.2.1). So, since the sum is over all group elements, the two sums on the left-hand side are both equal to the same thing. Therefore the left-hand side is zero, so the constant <math>C\,\!</math> must be zero and therefore the commutator must be zero.

This is called the group symmetrization of the Hamiltonian and is very useful since we can choose our pulses from a set of group elements and the Hamiltonian resulting from the symmetrization procedure will be invariant under, or immune to, noises which are elements of the group. This provides a criterion for choosing a set of pulses.

An example of this was already given, since the group of matrices formed from <math>\{\mathbb{1},\sigma_x,\sigma_y,\sigma_z\}\,\!</math> gives us a set of noises to which our effective Hamiltonian of Section 9.5.2 will be immune.
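A brief numerical illustration of the symmetrization property, assuming a generic single-qubit Hamiltonian; the overall <math>\pm 1, \pm i\,\!</math> phases of the full Pauli group drop out of the conjugation in Eq.(9.18), so the four representatives below suffice for this check:

<pre>
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                     # a generic Hermitian Hamiltonian

group = [I2, sx, sy, sz]                     # representatives of the single-qubit Pauli group
H_eff = sum(G.conj().T @ H @ G for G in group) / len(group)   # Eq.(9.18)

# the symmetrized Hamiltonian commutes with every group element
print(all(np.allclose(H_eff @ G - G @ H_eff, 0) for G in group))   # True
</pre>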

Geometric Conditions

The geometric conditions are quite appealing for two reasons. The first is that they provide a picture for the effect of dynamical decoupling operations which gives a sufficient criterion for the removal of noise. The second is that they are more general than the group-theoretical criterion from the last subsection. These will both be valuable in the next chapter where combinations of error prevention methods are given.

The set of geometric conditions becomes clear after two observations. The first is that the Hamiltonian can be described by a complete set of Hermitian matrices <math>\{\lambda_i\}\,\!</math>,


<math>H = \sum_i a_i \lambda_i\,\!</math>    (9.23)

The second is that a unitary transformation acting on the Hamiltonian (as in Eq.(9.14)) can be viewed as a rotation


<math>U^\dagger \lambda_i U = \sum_j R_{ij}\lambda_j\,\!</math>    (9.24)

Both of these will be discussed briefly before the geometric conditions are given. Any Hermitian matrix can be expanded in terms of a complete set of Hermitian matrices. (See Section C.3.9.) The (adjoint) action of the unitary transformation on the matrix <math>H\,\!</math> then acts as a rotation. The way to see this is to compare to the case of the two-state system which is described in Section 3.5.4 and Section 3.5.5. So just like the Bloch vector, one can take the set of coefficients <math>a_i\,\!</math> to be components of a vector <math>\vec{a}\,\!</math>. Then the unitary transformation acts as a rotation of this vector,


<math>U^\dagger H U = U^\dagger\left(\sum_i a_i\lambda_i\right)U = \sum_i a_i\, U^\dagger\lambda_i U = \sum_i a_i\sum_j R_{ij}\lambda_j = \sum_j\left(\sum_i a_i R_{ij}\right)\lambda_j\,\!</math>    (9.25)

In the third equality the rotation is called passive since the rotation is acting on the basis, and in the fourth equality the rotation is called active since it is acting on the vector. These are two different views of the same rotation, dependent upon the frame of reference. Note that the unitary matrices <math>U\,\!</math> can also be decomposed using a complete set of Hermitian matrices. (In fact, any matrix can be decomposed using a complex combination of a complete set of Hermitian matrices.)

Now, returning to Eq.(9.12), the geometric picture becomes clear. Each transformation <math>U_k\,\!</math> acts as a rotation <math>R^{(k)}\,\!</math> on the vector <math>\vec{a}\,\!</math>, and when the results of each rotation (each being another vector) are added up, the result should be zero if the errors are to be completely removed. So if we associate to each vector rotated by <math>R^{(k)}\,\!</math> a new vector <math>\vec{a}^{(k)}\,\!</math>, then the condition for the Hamiltonian to vanish is that the sum of these vectors must be zero,


<math>\sum_k \vec{a}^{(k)} = 0\,\!</math>    (9.26)

This is the geometric condition for the elimination of errors given a set of dynamical decoupling pulses. This will be quite useful in the next chapter.
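As a sketch of this condition (again in Python/NumPy, with illustrative numbers), one can build the rotation matrices of Eq.(9.24) for the pulse set of Section 9.5.2 and check that the rotated coefficient vectors sum to zero:

<pre>
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

def adjoint_rotation(U):
    """3x3 rotation R with U^dagger sigma_i U = sum_j R[i, j] sigma_j, cf. Eq.(9.24)."""
    R = np.zeros((3, 3))
    for i, si in enumerate(paulis):
        for j, sj in enumerate(paulis):
            R[i, j] = 0.5 * np.real(np.trace(sj @ U.conj().T @ si @ U))
    return R

pulses = [I2, sx, sy, sz]            # the bang-bang pulses of Section 9.5.2
a = np.array([0.3, -1.2, 0.8])       # illustrative coefficient vector of an error Hamiltonian

rotated = [adjoint_rotation(U).T @ a for U in pulses]   # a'_j = sum_i a_i R_ij, cf. Eq.(9.25)
print(np.round(sum(rotated), 12))    # [0. 0. 0.]: the rotated vectors cancel
</pre>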