<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www2.physics.siu.edu/qunet/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tjones</id>
	<title>Qunet - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www2.physics.siu.edu/qunet/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tjones"/>
	<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php/Special:Contributions/Tjones"/>
	<updated>2026-04-09T22:49:41Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.7</generator>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1778</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1778"/>
		<updated>2011-12-16T05:19:20Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[ [n,k,d] ] code: A code using n physical qubits to encode k logical qubits, able to correct up to floor((d-1)/2) errors, where d is the distance of the code.&lt;br /&gt;
&lt;br /&gt;
Abelian group&lt;br /&gt;
&lt;br /&gt;
Abelian subgroup (see abelian group and subgroup)&lt;br /&gt;
&lt;br /&gt;
Adjoint: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Ancilla&lt;br /&gt;
&lt;br /&gt;
Angular momentum&lt;br /&gt;
&lt;br /&gt;
Anti-commutation:  Two operators anti-commute when AB+BA=0.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Block diagonal matrix:&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Centralizer&lt;br /&gt;
&lt;br /&gt;
Checksum (see dot product)&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is stored in two distinguishable states of a classical system, labeled 0 and 1. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Code&lt;br /&gt;
&lt;br /&gt;
Codewords&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators A and B in both orders to a test function.  Two operators whose commutator is zero are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
Coset of a group&lt;br /&gt;
&lt;br /&gt;
CSS codes&lt;br /&gt;
&lt;br /&gt;
Cyclic group&lt;br /&gt;
&lt;br /&gt;
Dagger (see hermitian conjugate)&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Degenerate: Having more than one of the same eigenvalue.&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: A scalar defined only for square matrices.  When the rows (or columns) of the matrix are taken as vectors, the absolute value of the determinant is the volume of the parallelepiped spanned by those vectors.&lt;br /&gt;
&lt;br /&gt;
Diagonalizable: A matrix M is diagonalizable when there exists an invertible matrix S such that D=S^(-1)MS is diagonal.&lt;br /&gt;
&lt;br /&gt;
Differentiable manifold&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
Dirac notation (see bra-ket notation)&lt;br /&gt;
&lt;br /&gt;
Disjointness condition&lt;br /&gt;
&lt;br /&gt;
Distance of a quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar that results when two vectors have their corresponding components multiplied, and each of these products summed.&lt;br /&gt;
&lt;br /&gt;
Dual matrix&lt;br /&gt;
&lt;br /&gt;
Dual of a code&lt;br /&gt;
&lt;br /&gt;
Eigenfunction, eigenvalue, eigenvector: If HY=EY where H is a matrix, E is a scalar and Y is a vector, then Y is the eigenvector and E is the eigenvalue. If Y is a function, it is called an eigenfunction.&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Epsilon tensor&lt;br /&gt;
&lt;br /&gt;
Equivalent representation&lt;br /&gt;
&lt;br /&gt;
Error syndrome&lt;br /&gt;
&lt;br /&gt;
Euler angle parametrization&lt;br /&gt;
&lt;br /&gt;
Euler's formula: cos(x)+i*sin(x)=e^(ix)&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Faithful representation&lt;br /&gt;
&lt;br /&gt;
Field&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
General linear group&lt;br /&gt;
&lt;br /&gt;
Generators of a group&lt;br /&gt;
&lt;br /&gt;
Generator matrix&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Group&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamming bound&lt;br /&gt;
&lt;br /&gt;
Hamming code&lt;br /&gt;
&lt;br /&gt;
Hamming distance&lt;br /&gt;
&lt;br /&gt;
Hamming weight&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator corresponding to the total energy of the system.&lt;br /&gt;
&lt;br /&gt;
Heisenberg exchange interaction (8.5.2):&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator that equals its own transpose complex conjugate.&lt;br /&gt;
&lt;br /&gt;
Hermitian conjugate: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
Hilbert space&lt;br /&gt;
&lt;br /&gt;
Homomorphism&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix: A matrix of zeros except for the diagonal, where each element is 1.  Multiplying any matrix of the same dimension by it leaves the original matrix unchanged.&lt;br /&gt;
&lt;br /&gt;
Inner product (see dot product)&lt;br /&gt;
&lt;br /&gt;
Inverse of a matrix: A matrix's inverse is the matrix which, when multiplied with it, yields the identity matrix.&lt;br /&gt;
&lt;br /&gt;
Invertible matrix: A matrix for which an inverse exists.&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Isomorphism&lt;br /&gt;
&lt;br /&gt;
Isotropy group or Isotropy subgroup (see stabilizer)&lt;br /&gt;
&lt;br /&gt;
Jacobi identity&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Kronecker delta&lt;br /&gt;
&lt;br /&gt;
Levi-Civita symbol (see epsilon tensor)&lt;br /&gt;
&lt;br /&gt;
Lie algebra&lt;br /&gt;
&lt;br /&gt;
Lie group&lt;br /&gt;
&lt;br /&gt;
Linear code&lt;br /&gt;
&lt;br /&gt;
Linear combination: A set of vectors each multiplied by a scalar and summed.  Every vector of the space can be written as a linear combination of the vectors of a complete basis.&lt;br /&gt;
&lt;br /&gt;
Linear map: A map L between vector spaces satisfying L(au+bv)=aL(u)+bL(v) for all vectors u, v and scalars a, b.&lt;br /&gt;
&lt;br /&gt;
Little group (see stabilizer)&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Logical bit&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
Minimum distance&lt;br /&gt;
&lt;br /&gt;
Modular arithmetic: When one number is divided by another and does not go evenly, a remainder is left.  Modular arithmetic keeps track of the divisor and the remainder.  For example, 5 goes into 13 twice with remainder 3, so 13 is written 3 mod 5 (pronounced &amp;quot;three modulo 5&amp;quot;), which it has in common with every number of the form 3+5x where x is an integer.  This usage of modulo has nothing to do with the physics usage of modulus.&lt;br /&gt;
&lt;br /&gt;
Modulus&lt;br /&gt;
&lt;br /&gt;
n,k,d code (see [n,k,d] code)&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No unitary operation can duplicate an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Non-degenerate code&lt;br /&gt;
&lt;br /&gt;
Normalizer:&lt;br /&gt;
&lt;br /&gt;
Normalization: Scaling a set of numbers or functions so that an operation involving them returns a desired value.  For instance, the set of all possible probabilities is normalized so they sum to one.&lt;br /&gt;
&lt;br /&gt;
One-to-one: A mapping in which distinct domain elements are sent to distinct range elements, so each range element is the image of at most one domain element.&lt;br /&gt;
&lt;br /&gt;
Onto: A mapping in which every range element is the image of at least one domain element.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Order of a group&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal: Two vectors are orthogonal when their dot product is zero.&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate (not the phase gate):&lt;br /&gt;
&lt;br /&gt;
Parity&lt;br /&gt;
&lt;br /&gt;
Parity check (see inner product)&lt;br /&gt;
&lt;br /&gt;
Parity check matrix&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Partition of a group&lt;br /&gt;
&lt;br /&gt;
Pauli group&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Permutation:  &lt;br /&gt;
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum hamming bound&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Rate of a code&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
Representation space&lt;br /&gt;
&lt;br /&gt;
Reversibility of a quantum operation:  For every unitary operation on a qubit there exists an inverse operation which restores the state to its original form.&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
Set: A collection of distinct objects.&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Shor's nine-bit quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
Similarity transformation&lt;br /&gt;
&lt;br /&gt;
Singular values&lt;br /&gt;
&lt;br /&gt;
Singular value decomposition&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Special unitary matrix&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Stabilizers of a group&lt;br /&gt;
&lt;br /&gt;
Stabilizer code&lt;br /&gt;
&lt;br /&gt;
Standard deviation&lt;br /&gt;
&lt;br /&gt;
Stationary subgroup (see stabilizer)&lt;br /&gt;
&lt;br /&gt;
Stirling's formula&lt;br /&gt;
&lt;br /&gt;
Subgroup&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, |\phi&amp;gt;, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt; where \alpha and \beta are complex numbers satisfying |\alpha|^2+|\beta|^2=1.&lt;br /&gt;
&lt;br /&gt;
Syndrome measurement&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace: The sum of the diagonal elements of a matrix.&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Trivial representation&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary matrix&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation which preserves the magnitude of any vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Variance&lt;br /&gt;
&lt;br /&gt;
Vector: A directed quantity.&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
Weight of a vector (see Hamming weight)&lt;br /&gt;
&lt;br /&gt;
Weight of an operator: The number of non-identity elements in the tensor product.&lt;br /&gt;
&lt;br /&gt;
Wigner-Clebsch-Gordan Coefficients&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1777</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1777"/>
		<updated>2011-12-16T05:16:04Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[ [n,k,d] ] code: A code using n physical qubits to encode k logical qubits, able to correct up to floor((d-1)/2) errors, where d is the distance of the code.&lt;br /&gt;
&lt;br /&gt;
Abelian group&lt;br /&gt;
&lt;br /&gt;
Abelian subgroup (see abelian group and subgroup)&lt;br /&gt;
&lt;br /&gt;
Adjoint: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Ancilla&lt;br /&gt;
&lt;br /&gt;
Angular momentum&lt;br /&gt;
&lt;br /&gt;
Anti-commutation:  Two operators anti-commute when AB+BA=0.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Block diagonal matrix:&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Centralizer&lt;br /&gt;
&lt;br /&gt;
Checksum (see dot product)&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is stored in two distinguishable states of a classical system, labeled 0 and 1. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Code&lt;br /&gt;
&lt;br /&gt;
Codewords&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators A and B in both orders to a test function.  Two operators whose commutator is zero are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
Coset of a group&lt;br /&gt;
&lt;br /&gt;
CSS codes&lt;br /&gt;
&lt;br /&gt;
Cyclic group&lt;br /&gt;
&lt;br /&gt;
Dagger (see hermitian conjugate)&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Degenerate: Having more than one of the same eigenvalue.&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: A scalar defined only for square matrices.  When the rows (or columns) of the matrix are taken as vectors, the absolute value of the determinant is the volume of the parallelepiped spanned by those vectors.&lt;br /&gt;
&lt;br /&gt;
Diagonalizable: A matrix M is diagonalizable when there exists an invertible matrix S such that D=S^(-1)MS is diagonal.&lt;br /&gt;
&lt;br /&gt;
Differentiable manifold&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
Dirac notation (see bra-ket notation)&lt;br /&gt;
&lt;br /&gt;
Disjointness condition&lt;br /&gt;
&lt;br /&gt;
Distance of a quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar that results when two vectors have their corresponding components multiplied, and each of these products summed.&lt;br /&gt;
&lt;br /&gt;
Dual matrix&lt;br /&gt;
&lt;br /&gt;
Dual of a code&lt;br /&gt;
&lt;br /&gt;
Eigenfunction, eigenvalue, eigenvector: If HY=EY where H is a matrix, E is a scalar and Y is a vector, then Y is the eigenvector and E is the eigenvalue. If Y is a function, it is called an eigenfunction.&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Epsilon tensor&lt;br /&gt;
&lt;br /&gt;
Equivalent representation&lt;br /&gt;
&lt;br /&gt;
Error syndrome&lt;br /&gt;
&lt;br /&gt;
Euler angle parametrization&lt;br /&gt;
&lt;br /&gt;
Euler's formula: cos(x)+i*sin(x)=e^(ix)&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Faithful representation&lt;br /&gt;
&lt;br /&gt;
Field&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
General linear group&lt;br /&gt;
&lt;br /&gt;
Generators of a group&lt;br /&gt;
&lt;br /&gt;
Generator matrix&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Group&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamming bound&lt;br /&gt;
&lt;br /&gt;
Hamming code&lt;br /&gt;
&lt;br /&gt;
Hamming distance&lt;br /&gt;
&lt;br /&gt;
Hamming weight&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator corresponding to the total energy of the system.&lt;br /&gt;
&lt;br /&gt;
Heisenberg exchange interaction (8.5.2):&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator that equals its own transpose complex conjugate.&lt;br /&gt;
&lt;br /&gt;
Hermitian conjugate: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
Hilbert space&lt;br /&gt;
&lt;br /&gt;
Homomorphism&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix: A matrix of zeros except for the diagonal, where each element is 1.  Multiplying any matrix of the same dimension by it leaves the original matrix unchanged.&lt;br /&gt;
&lt;br /&gt;
Inner product (see dot product)&lt;br /&gt;
&lt;br /&gt;
Inverse of a matrix: A matrix's inverse is the matrix which, when multiplied with it, yields the identity matrix.&lt;br /&gt;
&lt;br /&gt;
Invertible matrix: A matrix for which an inverse exists.&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Isomorphism&lt;br /&gt;
&lt;br /&gt;
Isotropy group or Isotropy subgroup (see stabilizer)&lt;br /&gt;
&lt;br /&gt;
Jacobi identity&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Kronecker delta&lt;br /&gt;
&lt;br /&gt;
Levi-Civita symbol (see epsilon tensor)&lt;br /&gt;
&lt;br /&gt;
Lie algebra&lt;br /&gt;
&lt;br /&gt;
Lie group&lt;br /&gt;
&lt;br /&gt;
Linear code&lt;br /&gt;
&lt;br /&gt;
Linear combination: A set of vectors each multiplied by a scalar and summed.  Every vector of the space can be written as a linear combination of the vectors of a complete basis.&lt;br /&gt;
&lt;br /&gt;
Linear map: A map L between vector spaces satisfying L(au+bv)=aL(u)+bL(v) for all vectors u, v and scalars a, b.&lt;br /&gt;
&lt;br /&gt;
Little group (see stabilizer)&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Logical bit&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
Minimum distance&lt;br /&gt;
&lt;br /&gt;
Modular arithmetic: When one number is divided by another and does not go evenly, a remainder is left.  Modular arithmetic keeps track of the divisor and the remainder.  For example, 5 goes into 13 twice with remainder 3, so 13 is written 3 mod 5 (pronounced &amp;quot;three modulo 5&amp;quot;), which it has in common with every number of the form 3+5x where x is an integer.  This usage of modulo has nothing to do with the physics usage of modulus.&lt;br /&gt;
&lt;br /&gt;
Modulus&lt;br /&gt;
&lt;br /&gt;
n,k,d code (see [n,k,d] code)&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No unitary operation can duplicate an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Non-degenerate code&lt;br /&gt;
&lt;br /&gt;
Normalizer:&lt;br /&gt;
&lt;br /&gt;
Normalization: Scaling a set of numbers or functions so that an operation involving them returns a desired value.  For instance, the set of all possible probabilities is normalized so they sum to one.&lt;br /&gt;
&lt;br /&gt;
One-to-one: A mapping in which distinct domain elements are sent to distinct range elements, so each range element is the image of at most one domain element.&lt;br /&gt;
&lt;br /&gt;
Onto: A mapping in which every range element is the image of at least one domain element.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Order of a group&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal: Two vectors are orthogonal when their dot product is zero.&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate (not the phase gate):&lt;br /&gt;
&lt;br /&gt;
Parity&lt;br /&gt;
&lt;br /&gt;
Parity check (see inner product)&lt;br /&gt;
&lt;br /&gt;
Parity check matrix&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Partition of a group&lt;br /&gt;
&lt;br /&gt;
Pauli group&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Permutation:  &lt;br /&gt;
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum hamming bound&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Rate of a code&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
Representation space&lt;br /&gt;
&lt;br /&gt;
Reversibility of a quantum operation:  For every unitary operation on a qubit there exists an inverse operation which restores the state to its original form.&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Shor's nine-bit quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
Similarity transformation&lt;br /&gt;
&lt;br /&gt;
Singular values&lt;br /&gt;
&lt;br /&gt;
Singular value decomposition&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Special unitary matrix&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Stabilizers of a group&lt;br /&gt;
&lt;br /&gt;
Stabilizer code&lt;br /&gt;
&lt;br /&gt;
Standard deviation&lt;br /&gt;
&lt;br /&gt;
Stationary subgroup (see stabilizer)&lt;br /&gt;
&lt;br /&gt;
Stirling's formula&lt;br /&gt;
&lt;br /&gt;
Subgroup&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, |\phi&amp;gt;, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt; where \alpha and \beta are complex numbers satisfying |\alpha|^2+|\beta|^2=1.&lt;br /&gt;
&lt;br /&gt;
Syndrome measurement&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace: The sum of the diagonal elements of a matrix.&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Trivial representation&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary matrix&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation which preserves the magnitude of any vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Variance&lt;br /&gt;
&lt;br /&gt;
Vector: A directed quantity.&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
Weight of a vector (see Hamming weight)&lt;br /&gt;
&lt;br /&gt;
Weight of an operator: The number of non-identity elements in the tensor product.&lt;br /&gt;
&lt;br /&gt;
Wigner-Clebsch-Gordan Coefficients&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1776</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1776"/>
		<updated>2011-12-15T22:48:18Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[ [n,k,d] ] code: A code using n physical qubits to encode k logical qubits, able to correct up to floor((d-1)/2) errors, where d is the distance of the code.&lt;br /&gt;
&lt;br /&gt;
Abelian subgroup&lt;br /&gt;
&lt;br /&gt;
Adjoint: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Ancilla&lt;br /&gt;
&lt;br /&gt;
Angular momentum&lt;br /&gt;
&lt;br /&gt;
Anti-commutation:  Two operators anti-commute when AB+BA=0.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Block diagonal matrix:&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is stored in two distinguishable states of a classical system, labeled 0 and 1. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators A and B in both orders to a test function.  Two operators whose commutator is zero are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
CSS codes&lt;br /&gt;
&lt;br /&gt;
Dagger (see hermitian conjugate)&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Degenerate: Having more than one of the same eigenvalue.&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: A scalar defined only for square matrices.  When the rows (or columns) of the matrix are taken as vectors, the absolute value of the determinant is the volume of the parallelepiped spanned by those vectors.&lt;br /&gt;
&lt;br /&gt;
Diagonalized&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
Dirac notation (see bra-ket notation)&lt;br /&gt;
&lt;br /&gt;
Disjointness condition&lt;br /&gt;
&lt;br /&gt;
Distance of a quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar that results when two vectors have their corresponding components multiplied, and each of these products summed.&lt;br /&gt;
&lt;br /&gt;
Dual of a code&lt;br /&gt;
&lt;br /&gt;
Eigenfunction&lt;br /&gt;
&lt;br /&gt;
Eigenvalue&lt;br /&gt;
&lt;br /&gt;
Eigenvector&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Error syndrome&lt;br /&gt;
&lt;br /&gt;
Euler angle parametrization&lt;br /&gt;
&lt;br /&gt;
Euler's law&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Field&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
Generator&lt;br /&gt;
&lt;br /&gt;
Generator matrix&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Group&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator corresponding to the total energy of the system.&lt;br /&gt;
&lt;br /&gt;
Heisenberg exchange interaction (8.5.2):&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator that equals its own conjugate transpose (adjoint).&lt;br /&gt;
&lt;br /&gt;
Hermitian conjugate: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
Hilbert space&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix: A matrix of zeros except for the diagonal, where each element is 1.  Multiplying any matrix of the same dimension by it leaves the original matrix unchanged.&lt;br /&gt;
&lt;br /&gt;
Inner product (see dot product)&lt;br /&gt;
&lt;br /&gt;
Inverse of a matrix&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Kronecker delta&lt;br /&gt;
&lt;br /&gt;
Linear combination: A set of vectors each multiplied by a scalar and summed to equal a desired vector.  A complete basis has a linear combination for all vectors of that dimension.&lt;br /&gt;
&lt;br /&gt;
Linear map: A map T between vector spaces satisfying T(au + bv) = aT(u) + bT(v) for all vectors u, v and scalars a, b.&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Logical bit&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
Modulus&lt;br /&gt;
&lt;br /&gt;
n,k,d code (see [n,k,d] code)&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No quantum operation can make a perfect copy of an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Non-degenerate code&lt;br /&gt;
&lt;br /&gt;
Normalization: A process of scaling some set of numbers or functions in order that an operation including them returns a desired value.  For instance the set of all possible probabilities is usually scaled or normalized so they sum to one.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal: Two vectors are orthogonal when their dot product is zero.&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate (not the phase gate):&lt;br /&gt;
&lt;br /&gt;
Parity&lt;br /&gt;
&lt;br /&gt;
Parity check matrix&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Pauli group&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
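The anti-commutation entry earlier in the glossary (AB+BA=0) can be checked for the Pauli matrices; a sketch using plain nested lists, with X and Z written out in their standard 2x2 matrix forms (an assumption, since the glossary does not spell them out):

```python
# Minimal 2x2 matrix multiply and add, enough to check XZ + ZX = 0.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

X = [[0, 1], [1, 0]]    # bit-flip (quantum NOT) gate
Z = [[1, 0], [0, -1]]   # phase-flip gate

# X and Z anti-commute: XZ + ZX is the zero matrix.
print(matadd(matmul(X, Z), matmul(Z, X)))  # [[0, 0], [0, 0]]
```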
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum Hamming bound&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Rate of a code&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
Reversibility of a quantum operation:  For every unitary operation on a qubit there exists an inverse operation which restores the state to its original form.&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Shor's nine-bit quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
Similarity transformation&lt;br /&gt;
&lt;br /&gt;
Singular values&lt;br /&gt;
&lt;br /&gt;
Singular value decomposition&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Special unitary matrix&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Stabilizer code&lt;br /&gt;
&lt;br /&gt;
Standard deviation&lt;br /&gt;
&lt;br /&gt;
Stirling's formula&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, \phi, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt;, where \alpha and \beta are complex numbers with |\alpha|^2+|\beta|^2=1.&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace: The sum of the diagonal elements of a matrix.&lt;br /&gt;
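The trace entry above admits a one-line illustration; a minimal sketch (the matrix is an arbitrary example):

```python
# Trace: sum of the diagonal elements of a square matrix.
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

print(trace([[1, 2], [3, 4]]))  # 1 + 4 = 5
```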
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary matrix&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation that preserves inner products, and hence the magnitude of every vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Variance&lt;br /&gt;
&lt;br /&gt;
Vector: A directed quantity.&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
Weight of an operator: The number of non-identity elements in the tensor product.&lt;br /&gt;
&lt;br /&gt;
Wigner-Clebsch-Gordan coefficients&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1775</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1775"/>
		<updated>2011-12-12T17:18:48Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[ [n,k,d] ] code: A code using n physical qubits to encode k logical qubits, able to correct errors on up to floor((d-1)/2) of the qubits.&lt;br /&gt;
&lt;br /&gt;
Abelian subgroup&lt;br /&gt;
&lt;br /&gt;
Adjoint: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Ancilla&lt;br /&gt;
&lt;br /&gt;
Angular momentum&lt;br /&gt;
&lt;br /&gt;
Anti-commutation:  Two operators anti-commute when AB+BA=0.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Block diagonal matrix:&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators to a test function.  Operators whose commutator is zero are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
CSS codes&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Degenerate: Having two or more independent eigenvectors with the same eigenvalue.&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: When the rows (or columns) of a matrix are taken as vectors, the determinant is the signed volume of the parallelepiped those vectors span.  Determinants are defined only for square matrices.&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
Disjointness condition&lt;br /&gt;
&lt;br /&gt;
Distance of a quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar that results when two vectors have their corresponding components multiplied, and each of these products summed.&lt;br /&gt;
&lt;br /&gt;
Dual of a code&lt;br /&gt;
&lt;br /&gt;
Eigenfunction&lt;br /&gt;
&lt;br /&gt;
Eigenvalue&lt;br /&gt;
&lt;br /&gt;
Eigenvector&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Error syndrome&lt;br /&gt;
&lt;br /&gt;
Euler's law&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Field&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
Generator&lt;br /&gt;
&lt;br /&gt;
Generator matrix&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Group&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator corresponding to the total energy of a system; its eigenvalues are the system's allowed energies.&lt;br /&gt;
&lt;br /&gt;
Heisenberg exchange interaction (8.5.2):&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator that equals its own conjugate transpose (adjoint).&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
Hilbert space&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix:&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Linear combination: A set of vectors each multiplied by a scalar and summed to equal a desired vector.  A complete basis has a linear combination for all vectors of that dimension.&lt;br /&gt;
&lt;br /&gt;
Linear map: A map T between vector spaces satisfying T(au + bv) = aT(u) + bT(v) for all vectors u, v and scalars a, b.&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Logical bit&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
n,k,d code (see [n,k,d] code)&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No quantum operation can make a perfect copy of an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Non-degenerate code&lt;br /&gt;
&lt;br /&gt;
Normalization: A process of scaling some set of numbers or functions in order that an operation including them returns a desired value.  For instance the set of all possible probabilities is usually scaled or normalized so they sum to one.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate (not the phase gate):&lt;br /&gt;
&lt;br /&gt;
Parity&lt;br /&gt;
&lt;br /&gt;
Parity check matrix&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Pauli group&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum Hamming bound&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Rate of a code&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
Reversibility of a quantum operation:  For every unitary operation on a qubit there exists an inverse operation which restores the state to its original form.&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Shor's nine-bit quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Stabilizer code&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, \phi, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt;, where \alpha and \beta are complex numbers with |\alpha|^2+|\beta|^2=1.&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation that preserves inner products, and hence the magnitude of every vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
Weight of an operator: The number of non-identity elements in the tensor product.&lt;br /&gt;
&lt;br /&gt;
Wigner-Clebsch-Gordan coefficients&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1774</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1774"/>
		<updated>2011-12-12T17:18:29Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
\[\[n,k,d\]\] code: A code using n physical qubits to encode k logical qubits, able to correct errors on up to floor((d-1)/2) of the qubits.&lt;br /&gt;
&lt;br /&gt;
Abelian subgroup&lt;br /&gt;
&lt;br /&gt;
Adjoint: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Ancilla&lt;br /&gt;
&lt;br /&gt;
Angular momentum&lt;br /&gt;
&lt;br /&gt;
Anti-commutation:  Two operators anti-commute when AB+BA=0.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Block diagonal matrix:&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators to a test function.  Operators whose commutator is zero are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
CSS codes&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Degenerate: Having two or more independent eigenvectors with the same eigenvalue.&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: When the rows (or columns) of a matrix are taken as vectors, the determinant is the signed volume of the parallelepiped those vectors span.  Determinants are defined only for square matrices.&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
Disjointness condition&lt;br /&gt;
&lt;br /&gt;
Distance of a quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar that results when two vectors have their corresponding components multiplied, and each of these products summed.&lt;br /&gt;
&lt;br /&gt;
Dual of a code&lt;br /&gt;
&lt;br /&gt;
Eigenfunction&lt;br /&gt;
&lt;br /&gt;
Eigenvalue&lt;br /&gt;
&lt;br /&gt;
Eigenvector&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Error syndrome&lt;br /&gt;
&lt;br /&gt;
Euler's law&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Field&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
Generator&lt;br /&gt;
&lt;br /&gt;
Generator matrix&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Group&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator corresponding to the total energy of a system; its eigenvalues are the system's allowed energies.&lt;br /&gt;
&lt;br /&gt;
Heisenberg exchange interaction (8.5.2):&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator that equals its own conjugate transpose (adjoint).&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
Hilbert space&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix:&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Linear combination: A set of vectors each multiplied by a scalar and summed to equal a desired vector.  A complete basis has a linear combination for all vectors of that dimension.&lt;br /&gt;
&lt;br /&gt;
Linear map: A map T between vector spaces satisfying T(au + bv) = aT(u) + bT(v) for all vectors u, v and scalars a, b.&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Logical bit&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
n,k,d code (see [n,k,d] code)&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No quantum operation can make a perfect copy of an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Non-degenerate code&lt;br /&gt;
&lt;br /&gt;
Normalization: A process of scaling some set of numbers or functions in order that an operation including them returns a desired value.  For instance the set of all possible probabilities is usually scaled or normalized so they sum to one.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate (not the phase gate):&lt;br /&gt;
&lt;br /&gt;
Parity&lt;br /&gt;
&lt;br /&gt;
Parity check matrix&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Pauli group&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum Hamming bound&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Rate of a code&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
Reversibility of a quantum operation:  For every unitary operation on a qubit there exists an inverse operation which restores the state to its original form.&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Shor's nine-bit quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Stabilizer code&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, \phi, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt;, where \alpha and \beta are complex numbers with |\alpha|^2+|\beta|^2=1.&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation that preserves inner products, and hence the magnitude of every vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
Weight of an operator: The number of non-identity elements in the tensor product.&lt;br /&gt;
&lt;br /&gt;
Wigner-Clebsch-Gordan coefficients&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1773</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1773"/>
		<updated>2011-12-12T17:17:57Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[n,k,d]] code: A code using n physical qubits to encode k logical qubits, able to correct errors on up to floor((d-1)/2) of the qubits.&lt;br /&gt;
&lt;br /&gt;
Abelian subgroup&lt;br /&gt;
&lt;br /&gt;
Adjoint: The transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Ancilla&lt;br /&gt;
&lt;br /&gt;
Angular momentum&lt;br /&gt;
&lt;br /&gt;
Anti-commutation:  Two operators anti-commute when AB+BA=0.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Block diagonal matrix:&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators to a test function.  Operators whose commutator is zero are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
CSS codes&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Degenerate: Having two or more independent eigenvectors with the same eigenvalue.&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: When the rows (or columns) of a matrix are taken as vectors, the determinant is the signed volume of the parallelepiped those vectors span.  Determinants are defined only for square matrices.&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
Disjointness condition&lt;br /&gt;
&lt;br /&gt;
Distance of a quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar that results when two vectors have their corresponding components multiplied, and each of these products summed.&lt;br /&gt;
&lt;br /&gt;
Dual of a code&lt;br /&gt;
&lt;br /&gt;
Eigenfunction&lt;br /&gt;
&lt;br /&gt;
Eigenvalue&lt;br /&gt;
&lt;br /&gt;
Eigenvector&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Error syndrome&lt;br /&gt;
&lt;br /&gt;
Euler's law&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Field&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
Generator&lt;br /&gt;
&lt;br /&gt;
Generator matrix&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Group&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator corresponding to the total energy of a system; its eigenvalues are the system's allowed energies.&lt;br /&gt;
&lt;br /&gt;
Heisenberg exchange interaction (8.5.2):&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator that equals its own conjugate transpose (adjoint).&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
Hilbert space&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix:&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Linear combination: A set of vectors each multiplied by a scalar and summed to equal a desired vector.  A complete basis has a linear combination for all vectors of that dimension.&lt;br /&gt;
&lt;br /&gt;
Linear map: A map T between vector spaces satisfying T(au + bv) = aT(u) + bT(v) for all vectors u, v and scalars a, b.&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Logical bit&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
n,k,d code (see [n,k,d] code)&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No quantum operation can make a perfect copy of an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Non-degenerate code&lt;br /&gt;
&lt;br /&gt;
Normalization: A process of scaling some set of numbers or functions in order that an operation including them returns a desired value.  For instance the set of all possible probabilities is usually scaled or normalized so they sum to one.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate (not the phase gate):&lt;br /&gt;
&lt;br /&gt;
Parity&lt;br /&gt;
&lt;br /&gt;
Parity check matrix&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Pauli group&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum hamming bound&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Rate of a code&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
Reversibility of a quantum operation:  For every unitary operation on a qubit there exists an inverse operation which restores the state to its original form.&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Shor's nine-bit quantum error correcting code&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Stabilizer code&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, \phi, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt;, where \alpha and \beta are complex numbers.&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation which preserves the magnitude of any vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
Weight of an operator: The number of non-identity elements in the tensor product.&lt;br /&gt;
&lt;br /&gt;
Wigner-Clebsch-Gordon Coefficients&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1772</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1772"/>
		<updated>2011-12-12T16:32:19Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adjoint: Transpose complex conjugate of an operator.&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bath system:&lt;br /&gt;
&lt;br /&gt;
Bell's theorem&lt;br /&gt;
&lt;br /&gt;
Bit flip error&lt;br /&gt;
&lt;br /&gt;
Bloch sphere&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators A and B to a test function.  If two operators have a commutator of zero, they are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
Definite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Density matrix&lt;br /&gt;
&lt;br /&gt;
Density operator&lt;br /&gt;
&lt;br /&gt;
Depolarizing error&lt;br /&gt;
&lt;br /&gt;
Determinant: When the rows or columns of a matrix are taken as vectors, the determinant is the signed volume of the parallelepiped spanned by those vectors.  Determinants exist only for square matrices.&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar obtained by multiplying the corresponding components of two vectors and summing the products.&lt;br /&gt;
&lt;br /&gt;
Eigenfunction&lt;br /&gt;
&lt;br /&gt;
Eigenvalue&lt;br /&gt;
&lt;br /&gt;
Eigenvector&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Environment system (see Bath system)&lt;br /&gt;
&lt;br /&gt;
EPR paradox&lt;br /&gt;
&lt;br /&gt;
Euler's law&lt;br /&gt;
&lt;br /&gt;
Expectation value:&lt;br /&gt;
&lt;br /&gt;
Exponentiating a matrix (see matrix exponential)&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
Gram-Schmidt decomposition (see Schmidt decomposition)&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
H bar: Planck's constant divided by 2\pi&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hamiltonian: The operator representing the total conservative energy of the system.  In quantum mechanics most energy is conservative.&lt;br /&gt;
&lt;br /&gt;
Heisenberg uncertainty principle (see uncertainty principle)&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator which equals its own transpose complex conjugate (its adjoint).&lt;br /&gt;
&lt;br /&gt;
Hidden variable theory (see also local hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix:&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Kraus representation (or Kraus decomposition) (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Linear map: A transformation from one vector to another that preserves vector addition and scalar multiplication.&lt;br /&gt;
&lt;br /&gt;
Local actions&lt;br /&gt;
&lt;br /&gt;
Local hidden variable theory (see also hidden variable theory):&lt;br /&gt;
&lt;br /&gt;
Matrix exponential&lt;br /&gt;
&lt;br /&gt;
Matrix properties&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Measurement&lt;br /&gt;
&lt;br /&gt;
No cloning theorem: No unitary operation can duplicate an arbitrary unknown quantum state.&lt;br /&gt;
&lt;br /&gt;
Noise&lt;br /&gt;
&lt;br /&gt;
Normalization: The process of scaling a set of numbers or functions so that an operation involving them returns a desired value.  For instance, the set of all possible probabilities is usually scaled, or normalized, so that it sums to one.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Operator-sum representation (see SMR representation)&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal&lt;br /&gt;
&lt;br /&gt;
Outer product&lt;br /&gt;
&lt;br /&gt;
P gate:&lt;br /&gt;
&lt;br /&gt;
Partial trace&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Phase flip error&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate or P gate&lt;br /&gt;
&lt;br /&gt;
Planck's constant:&lt;br /&gt;
&lt;br /&gt;
Polarization&lt;br /&gt;
&lt;br /&gt;
Positive definite and semidefinite matrix (see matrix properties)&lt;br /&gt;
&lt;br /&gt;
Probability for existing in a state:&lt;br /&gt;
&lt;br /&gt;
Projector: A transformation such that P^2=P.&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Pure state&lt;br /&gt;
&lt;br /&gt;
QKD: See quantum key distribution&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum cryptography&lt;br /&gt;
&lt;br /&gt;
Quantum dense coding&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum key distribution:&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
Rank&lt;br /&gt;
&lt;br /&gt;
Reduced density operator&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Schmidt decomposition&lt;br /&gt;
&lt;br /&gt;
Schrodinger's Equation&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
SMR representation&lt;br /&gt;
&lt;br /&gt;
Spin&lt;br /&gt;
&lt;br /&gt;
Spooky action at a distance&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, \phi, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt;, where \alpha and \beta are complex numbers.&lt;br /&gt;
&lt;br /&gt;
Taylor expansion&lt;br /&gt;
&lt;br /&gt;
Teleportation&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Turing machine&lt;br /&gt;
&lt;br /&gt;
Uncertainty principle&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation which preserves the magnitude of any vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1771</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1771"/>
		<updated>2011-12-12T15:32:59Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
&lt;br /&gt;
Bra-ket notation:&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Commutator: The commutator of A and B, written [A,B], is AB-BA.  Its value may be found by applying the operators A and B to a test function.  If two operators have a commutator of zero, they are said to commute.&lt;br /&gt;
&lt;br /&gt;
Complex conjugate:&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real and imaginary part.  A complex number can be represented in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
Controlled not (CNOT gate)&lt;br /&gt;
&lt;br /&gt;
Controlled operation: An operation on a state or set of states that is conditioned on another state or set of states.&lt;br /&gt;
&lt;br /&gt;
Dirac delta function&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Dot product: The scalar obtained by multiplying the corresponding components of two vectors and summing the products.&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Gate (see Quantum gate)&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
Hadamard gate&lt;br /&gt;
&lt;br /&gt;
Hermitian: An operator which equals its own transpose complex conjugate (its adjoint).&lt;br /&gt;
&lt;br /&gt;
Hilbert-Schmidt inner product (2.4)&lt;br /&gt;
&lt;br /&gt;
i: square root of negative one&lt;br /&gt;
&lt;br /&gt;
Identity matrix:&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Ket: See bra-ket notation&lt;br /&gt;
&lt;br /&gt;
Matrix transformation&lt;br /&gt;
&lt;br /&gt;
Normalization: The process of scaling a set of numbers or functions so that an operation involving them returns a desired value.  For instance, the set of all possible probabilities is usually scaled, or normalized, so that it sums to one.&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Ordered basis&lt;br /&gt;
&lt;br /&gt;
Orthogonal&lt;br /&gt;
&lt;br /&gt;
P gate:&lt;br /&gt;
&lt;br /&gt;
Pauli matrices: The X,Y,Z gates.&lt;br /&gt;
&lt;br /&gt;
Phase gate: See Z gate or P gate&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Quantum bit: See Qubit&lt;br /&gt;
&lt;br /&gt;
Quantum gate: A unitary transformation applied to one or more qubits.&lt;br /&gt;
&lt;br /&gt;
Quantum NOT gate: see X gate&lt;br /&gt;
&lt;br /&gt;
Qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
set&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;br /&gt;
&lt;br /&gt;
Superposition: A qubit state in superposition, \phi, may be written as |\phi&amp;gt;=\alpha|0&amp;gt;+\beta|1&amp;gt;, where \alpha and \beta are complex numbers.&lt;br /&gt;
&lt;br /&gt;
Tensor product&lt;br /&gt;
&lt;br /&gt;
Trace&lt;br /&gt;
&lt;br /&gt;
Transpose&lt;br /&gt;
&lt;br /&gt;
Unitary transformation: A transformation which preserves the magnitude of any vector it transforms.&lt;br /&gt;
&lt;br /&gt;
Universal quantum computing&lt;br /&gt;
&lt;br /&gt;
Universal set of gates (universality) (2.6)&lt;br /&gt;
&lt;br /&gt;
Vector space&lt;br /&gt;
&lt;br /&gt;
X gate (2.3.2)&lt;br /&gt;
&lt;br /&gt;
Y gate&lt;br /&gt;
&lt;br /&gt;
Z gate, or phase-flip gate (2.3.2)&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1770</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1770"/>
		<updated>2011-12-12T14:46:19Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by the&lt;br /&gt;
term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
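&lt;br /&gt;
As a minimal numerical sketch (NumPy assumed; the amplitudes are example values, not from the text), the normalization condition of Eq. (2.2) can be checked directly on the column-vector form of Eq. (2.4):&lt;br /&gt;

```python
import numpy as np

# Assumed example amplitudes for |psi> = alpha_0|0> + alpha_1|1>,
# written as the column vector of Eq. (2.4).
alpha = np.array([3 / 5, 4j / 5])

# Eq. (2.2): |alpha_0|^2 + |alpha_1|^2 must equal 1 for a valid qubit state.
norm_sq = float(np.sum(np.abs(alpha) ** 2))
assert np.isclose(norm_sq, 1.0)
```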
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention; circuit diagrams will become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
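&lt;br /&gt;
The ordering convention can be checked with a small sketch (NumPy assumed; the choice of a bit-flip for &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; and a phase-flip for &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; is an illustrative assumption, not from the text):&lt;br /&gt;

```python
import numpy as np

# Assumed example gates: V = bit-flip, U = phase-flip.
V = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0], dtype=complex)  # the state |0>

# The circuit reads left to right (V first, then U), but the matrix
# product of Eq. (2.6) reads right to left: |psi''> = U V |psi>.
psi_out = U @ (V @ psi)

# Applying V then U is the same as acting once with the product UV.
assert np.allclose(psi_out, (U @ V) @ psi)
```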
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle,&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
for this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these, is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
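These identities are easy to verify numerically; the following sketch (NumPy assumed, not part of the text) checks them for the matrices of Eqs. (2.7), (2.9) and (2.12):&lt;br /&gt;

```python
import numpy as np

# The Pauli gates as given in Eqs. (2.7), (2.12) and (2.9).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(X @ Z, -1j * Y)          # XZ = -iY
assert np.allclose(Z @ X, 1j * Y)           # ZX =  iY
assert np.allclose(X @ Z - Z @ X, -2j * Y)  # [X, Z] = -2iY, Eq. (2.15)
```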
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so-often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
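&lt;br /&gt;
A short sketch (NumPy assumed, not part of the text) confirms the action of the Hadamard gate of Eq. (2.16) on the basis states:&lt;br /&gt;

```python
import numpy as np

# The Hadamard gate of Eq. (2.16).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Eq. (2.17): each basis state maps to an equal superposition.
assert np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2))
assert np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2))

# H is unitary and its own inverse, so applying it twice returns the input.
assert np.allclose(H @ H, np.eye(2))
```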
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
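The actions listed in Table 2.1 are easy to verify numerically. The following Python/NumPy sketch (an illustration added here, not part of the original text) checks each row of the table, including the relation &amp;lt;math&amp;gt;Y=iXZ\,\!&amp;lt;/math&amp;gt;:

```python
import numpy as np

# The three Pauli matrices and the computational basis states |0>, |1|.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ket = {0: np.array([1, 0], dtype=complex),
       1: np.array([0, 1], dtype=complex)}

# Check the "Action" column of Table 2.1 for x = 0 and x = 1.
for x in (0, 1):
    assert np.allclose(X @ ket[x], ket[x ^ 1])                 # X|x> = |x XOR 1>
    assert np.allclose(Y @ ket[x], 1j * (-1)**x * ket[x ^ 1])  # Y|x> = i(-1)^x |x XOR 1>
    assert np.allclose(Z @ ket[x], (-1)**x * ket[x])           # Z|x> = (-1)^x |x>

# The relation Y = iXZ noted in the table.
assert np.allclose(Y, 1j * X @ Z)
```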
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1-ia_2 \\ &lt;br /&gt;
                a_1+ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
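Equation (2.20) can be checked numerically: for any Hermitian &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, the real coefficients can be recovered as &amp;lt;math&amp;gt;a_0 = \mbox{Tr}(A)/2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\sigma_i A)/2\,\!&amp;lt;/math&amp;gt;, using the trace inner product introduced just below. A Python/NumPy sketch (an added illustration, not part of the original text):

```python
import numpy as np

# Pauli matrices plus the identity form a basis for 2x2 Hermitian matrices.
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1 = X
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2 = Y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_3 = Z

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (M + M.conj().T) / 2          # a random Hermitian matrix

# Coefficients via the trace: a_0 = Tr(A)/2, a_i = Tr(sigma_i A)/2.
a0 = np.trace(A).real / 2
a = [np.trace(s @ A).real / 2 for s in sigma]

# Reassemble A as in Eq. (2.20): A = a0*I + a.sigma.
A_rebuilt = a0 * I2 + sum(ai * s for ai, s in zip(a, sigma))
assert np.allclose(A, A_rebuilt)
```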
&lt;br /&gt;
An important and useful relationship between these matrices is the following (which shows why&lt;br /&gt;
the numbered notation above is so convenient)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]], which also encodes all of the commutators.&lt;br /&gt;
Subtracting the same equation with the product reversed gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
&lt;br /&gt;
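Equations (2.21), (2.23), and (2.25) can all be verified by brute force over the nine index pairs. A Python/NumPy check (an added illustration; the Levi-Civita symbol is computed from a closed-form product):

```python
import numpy as np

sigma = {1: np.array([[0, 1], [1, 0]], dtype=complex),
         2: np.array([[0, -1j], [1j, 0]], dtype=complex),
         3: np.array([[1, 0], [0, -1]], dtype=complex)}
I2 = np.eye(2, dtype=complex)

def eps(i, j, k):
    """Levi-Civita symbol for indices in {1, 2, 3}."""
    return ((i - j) * (j - k) * (k - i)) // 2

for i in range(1, 4):
    for j in range(1, 4):
        # Eq. (2.21): sigma_i sigma_j = delta_ij I + i eps_ijk sigma_k (sum over k).
        rhs = (i == j) * I2 + sum(1j * eps(i, j, k) * sigma[k] for k in range(1, 4))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
        # Eq. (2.23): Tr(sigma_i sigma_j) = 2 delta_ij.
        assert np.isclose(np.trace(sigma[i] @ sigma[j]), 2 * (i == j))
        # Eq. (2.25): [sigma_i, sigma_j] = 2i eps_ijk sigma_k.
        comm = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        assert np.allclose(comm, sum(2j * eps(i, j, k) * sigma[k] for k in range(1, 4)))
```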
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
the basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
&lt;br /&gt;
The extension to three qubits is straight-forward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven, so we&lt;br /&gt;
consider this an ''ordered basis''.  The states can therefore also be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
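The correspondence between the tensor-product basis states of Eqs. (2.26)-(2.28) and the binary labels of Eq. (2.29) can be made concrete with the Kronecker product. A Python/NumPy sketch (an added illustration, not part of the original text):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Two-qubit basis via the tensor (Kronecker) product, Eqs. (2.26)-(2.27).
basis2 = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]
for idx, v in enumerate(basis2):
    expected = np.zeros(4, dtype=complex)
    expected[idx] = 1                 # |00>,|01>,|10>,|11> map to e_0..e_3
    assert np.allclose(v, expected)

# Three qubits: |b2 b1 b0> is the binary representation of i, Eqs. (2.28)-(2.29).
kets = (ket0, ket1)
for i in range(8):
    b = [(i >> s) & 1 for s in (2, 1, 0)]        # bits, most significant first
    v = np.kron(np.kron(kets[b[0]], kets[b[1]]), kets[b[2]])
    assert np.argmax(np.abs(v)) == i             # |i> is the i-th unit vector
```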
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straight-forward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
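As a concrete illustration of why an entangling gate plus single-qubit unitaries is so powerful, the following Python/NumPy sketch (an added example, not part of the original text) applies a Hadamard and then a CNOT to &amp;lt;math&amp;gt;\left\vert{00}\right\rangle\,\!&amp;lt;/math&amp;gt;, producing a maximally entangled Bell state; the rank of the reshaped coefficient matrix distinguishes product states (rank 1) from entangled ones (rank 2).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket00 = np.array([1, 0, 0, 0], dtype=complex)

# Hadamard on qubit 1, then CNOT(1 -> 2): |00> becomes a Bell state.
bell = CNOT @ np.kron(H, I2) @ ket00
assert np.allclose(bell, np.array([1, 0, 0, 1]) / np.sqrt(2))

# Entanglement check: reshape the amplitudes into a 2x2 coefficient matrix;
# a product state has rank 1, this Bell state has rank 2.
assert np.linalg.matrix_rank(bell.reshape(2, 2)) == 2
```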
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
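The action in Eq. (2.33) can be confirmed against the matrix of Eq. (2.31) for all four basis states of Eq. (2.32). A Python/NumPy sketch (an added illustration, not part of the original text):

```python
import numpy as np

# CNOT with qubit 1 as control and qubit 2 as target, Eq. (2.31).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

# Eq. (2.33): |x>|y> -> |x>|x XOR y| for every basis state in Eq. (2.32).
for x in (0, 1):
    for y in (0, 1):
        out = CNOT @ np.kron(ket[x], ket[y])
        assert np.allclose(out, np.kron(ket[x], ket[x ^ y]))
```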
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
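The block structure of Eq. (2.34) suggests a simple construction: embed &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; in the lower-right 2x2 block of the identity. A Python/NumPy sketch (an added illustration; the helper name ''controlled'' is invented here):

```python
import numpy as np

def controlled(U):
    """Block-diagonal controlled-U of Eq. (2.34): acts as U on the target
    only when the control qubit is |1>."""
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

X = np.array([[0, 1], [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# The CNOT is the special case U = X.
assert np.allclose(controlled(X), CNOT)

# A controlled-U is unitary whenever U is, e.g. a controlled-phase gate.
theta = 0.3
P = np.diag([1, np.exp(1j * theta)])
CP = controlled(P)
assert np.allclose(CP.conj().T @ CP, np.eye(4))
```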
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straight-forward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;HZH = X\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
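The identity &amp;lt;math&amp;gt;HZH=X\,\!&amp;lt;/math&amp;gt; behind Fig. 2.7 is easy to check, and lifting it to two qubits shows explicitly how Hadamards on the target convert a controlled-Z into a CNOT. A Python/NumPy sketch (an added illustration; the controlled-Z form of the equivalence is assumed here):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# The single-qubit identity quoted in the text.
assert np.allclose(H @ Z @ H, X)

# Lifted to two qubits: Hadamards on the target turn a controlled-Z
# into a CNOT, so the two circuits implement the same unitary.
CZ = np.diag([1, 1, 1, -1]).astype(complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
assert np.allclose(np.kron(I2, H) @ CZ @ np.kron(I2, H), CNOT)
```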
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the system is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown by taking the equal superposition, &amp;lt;math&amp;gt;\alpha_0=\alpha_1=1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, and acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
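The projection rule of Eqs. (2.37)-(2.39), and the Hadamard argument around Eq. (2.36), can both be checked numerically. A Python/NumPy sketch (an added illustration, using the equal superposition for the Hadamard step):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# A normalized superposition, Eq. (2.35).
a0, a1 = 0.6, 0.8j
psi = a0 * ket0 + a1 * ket1
assert np.isclose(abs(a0)**2 + abs(a1)**2, 1.0)

# Born rule as a projection, Eqs. (2.37)-(2.39): Prob(|0>) = |<0|psi>|^2.
# (np.vdot conjugates its first argument, matching <0|psi>.)
p0 = abs(np.vdot(ket0, psi))**2
assert np.isclose(p0, abs(a0)**2)

# Mermin's argument: the equal superposition is mapped by H exactly to |0>,
# Eq. (2.36), so it cannot have "really" been |0> or |1> before the H.
plus = (ket0 + ket1) / np.sqrt(2)
assert np.allclose(H @ plus, ket0)
```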
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection: the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input to a subsequent operation.  The unitary transform&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the input, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum of the projectors onto all basis states is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
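The projector properties of Eqs. (2.42)-(2.46) can be verified directly. A Python/NumPy sketch (an added illustration, not part of the original text):

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

# Outer products |0><0| and |1><1| are projectors, Eq. (2.42).
P0 = ket0 @ ket0.conj().T
P1 = ket1 @ ket1.conj().T

# The defining property P^2 = P, Eq. (2.45).
assert np.allclose(P0 @ P0, P0)
assert np.allclose(P1 @ P1, P1)

# Acting on psi keeps only the |0> component, Eqs. (2.43)-(2.44).
psi = np.array([[0.6], [0.8]], dtype=complex)
assert np.allclose(P0 @ psi, np.array([[0.6], [0.0]]))

# Completeness: projectors onto a full basis sum to the identity, Eq. (2.46).
assert np.allclose(P0 + P1, np.eye(2))
```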
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
expression is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-i2\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
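This phase invariance is easy to check numerically. The following sketch (an illustration, not part of the original text) represents a qubit state as a two-component complex vector and verifies that multiplying by a global phase leaves the probabilities of Eq. 2.47 unchanged; the amplitudes and angle are arbitrary choices.

```python
import cmath

# |psi> = a|0> + b|1>, normalized (|a|^2 + |b|^2 = 1)
a, b = 0.6, 0.8j
psi = [a, b]

def prob(state, x):
    # Prob_|psi>(|x>) = |<x|psi>|^2; <x|psi> is just the x-th component
    return abs(state[x]) ** 2

# multiply the state by a global phase e^{-i theta}
theta = 1.234
phase = cmath.exp(-1j * theta)
psi_prime = [phase * c for c in psi]

# the measurement probabilities are unchanged by the global phase
assert abs(prob(psi, 0) - prob(psi_prime, 0)) < 1e-12
assert abs(prob(psi, 1) - prob(psi_prime, 1)) < 1e-12
```

The cancellation of the factors e^{-i\theta} and e^{i\theta} in Eq. 2.48 is what makes the assertions pass for any value of theta.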
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1769</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1769"/>
		<updated>2011-12-12T14:41:48Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Let us define things here or at least add links to definitions. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real part and an imaginary part.  It can be written in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Quantum bit or qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1768</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Glossary&amp;diff=1768"/>
		<updated>2011-12-12T14:41:15Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Let us define things here or at least add links to definitions.&lt;br /&gt;
&lt;br /&gt;
Classical bit: A classical bit is represented by two different states of a classical system, which are represented by 1 and 0. (1.3)&lt;br /&gt;
&lt;br /&gt;
Closed system&lt;br /&gt;
&lt;br /&gt;
Complex number: A complex number has a real part and an imaginary part.  It can be written in the form a+bi or Ce^(i\theta).&lt;br /&gt;
&lt;br /&gt;
DiVincenzo's requirements for quantum computing&lt;br /&gt;
&lt;br /&gt;
Entangled state&lt;br /&gt;
&lt;br /&gt;
Grover's algorithm&lt;br /&gt;
&lt;br /&gt;
Isolated system (see Closed system)&lt;br /&gt;
&lt;br /&gt;
Open system&lt;br /&gt;
&lt;br /&gt;
Projection postulate&lt;br /&gt;
&lt;br /&gt;
Quantum bit or qubit: A qubit is represented by two states of a quantum mechanical system. (1.3)&lt;br /&gt;
&lt;br /&gt;
RSA encryption&lt;br /&gt;
&lt;br /&gt;
Shor's algorithm&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1762</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1762"/>
		<updated>2011-11-28T14:54:34Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average - [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors (real) [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT)&lt;br /&gt;
:Code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], &lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]],[[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also, code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:dual code [[Appendix F - Classical Error Correcting Codes#Definition_11:_Dual_Code|'''F.8.1''']]&lt;br /&gt;
:dual matrix [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#Binary_Operations|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
:syndrome measurement [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial(see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_D_-_Group_Theory&amp;diff=1761</id>
		<title>Appendix D - Group Theory</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_D_-_Group_Theory&amp;diff=1761"/>
		<updated>2011-11-28T14:46:49Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Example 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;''&amp;lt;/nowiki&amp;gt;''Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty and perfection.''&amp;lt;nowiki&amp;gt;''&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Hermann Weyl'''&lt;br /&gt;
&lt;br /&gt;
====Symmetries and Groups====&lt;br /&gt;
&lt;br /&gt;
Symmetry arguments have been used widely in mathematics, physics,&lt;br /&gt;
chemistry, biology, computer science, engineering, and elsewhere.  &lt;br /&gt;
Group theory can be an invaluable organizational tool,&lt;br /&gt;
whether it is used explicitly or implicitly, in many areas of&lt;br /&gt;
science.  &lt;br /&gt;
&lt;br /&gt;
In physics, symmetry principles are often used to describe what&lt;br /&gt;
changes and what does not in a physical system undergoing some&lt;br /&gt;
particular transformation.  For example, if a knob is turned in an&lt;br /&gt;
experiment and nothing changes, then that is an invariant of the&lt;br /&gt;
system and thus indicates a symmetry.  (Of course, the trivial case&lt;br /&gt;
where the knob has nothing to do with the experiment, as when the&lt;br /&gt;
machine with the knob is unplugged, should be excluded.) The objective&lt;br /&gt;
here is to explain group theory with this practical viewpoint in&lt;br /&gt;
mind; the idea is for this motivation to be kept in mind&lt;br /&gt;
throughout these notes.  Some formalism is necessary however.  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that very general ideas tend to be &lt;br /&gt;
abstract.  And so it is with group theory.  However, to reiterate, the &lt;br /&gt;
objective here is to be as concrete as possible with the emphasis on &lt;br /&gt;
physical applications.  In this regard, it is worth mentioning that, &lt;br /&gt;
directly or indirectly, [[Bibliography#Tinkham:gpthbook|Michael Tinkham's book]] &lt;br /&gt;
on group theory very much influenced these notes.  Other useful references include the Encyclopedia of Mathematics and Hamermesh's book on group theory.&lt;br /&gt;
&lt;br /&gt;
====Group Theory in Physics====&lt;br /&gt;
&lt;br /&gt;
The applications to physics are too numerous to mention here.  However, several comments&lt;br /&gt;
are in order.  First, if a system has a symmetry (often able to be determined by inspection), then it has a&lt;br /&gt;
constraint placed on it. This limits the acceptability of solutions to a problem -&lt;br /&gt;
they must satisfy the symmetry requirement.  Thus identifying&lt;br /&gt;
symmetries is an excellent problem-solving technique.  Choosing &lt;br /&gt;
coordinates is an example of such symmetry identification.  &lt;br /&gt;
&lt;br /&gt;
A group is a set of symmetries.  To see this, suppose that, for example, elements &amp;lt;math&amp;gt;A,B,C,D,...\,\!&amp;lt;/math&amp;gt; operate on an object in such a way&lt;br /&gt;
that they do not change the object.  Most often in physics the&lt;br /&gt;
elements are matrices and the objects on which they act are vectors.&lt;br /&gt;
If a vector or set of vectors is unchanged by these operations, then&lt;br /&gt;
the vectors have a symmetry described by the action of these&lt;br /&gt;
operators.  In Example 2 the vectors are the vertices of the triangle&lt;br /&gt;
and the triangle is unchanged by the action of the group elements given in the example.  (Notice, as an example&lt;br /&gt;
of how a set of symmetries forms a group, that if the vector is &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and assuming &amp;lt;math&amp;gt;Av=v\,\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is a symmetry operation, and also assuming&lt;br /&gt;
&amp;lt;math&amp;gt;Bv=v\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;ABv = v\,\!&amp;lt;/math&amp;gt;. Thus the set is closed under multiplication, which means that the product of elements in the set is always in the set.)&lt;br /&gt;
One way to think of this is quite literal.  If a symmetry operation is&lt;br /&gt;
applied to the equilateral triangle and the triangle is still&lt;br /&gt;
equilateral with the vertices indistinguishable (assuming no labels), then the&lt;br /&gt;
operation did not change anything discernible.  &lt;br /&gt;
&lt;br /&gt;
It turns out that group theory has been applied with great success to&lt;br /&gt;
many areas of quantum physics: solid-state physics including&lt;br /&gt;
crystallography, nuclear physics, atomic physics, molecular physics,&lt;br /&gt;
and particle physics.  It has also been applied in classical physics&lt;br /&gt;
and relativity.  It has been especially indispensable in quantum field theory and particle physics where symmetries correspond to conserved quantities observed in experiment.  &lt;br /&gt;
&lt;br /&gt;
Some groups of infinite order, such as Lie groups, were originally&lt;br /&gt;
studied largely in order to understand the symmetries of&lt;br /&gt;
differential equations.  This is the set of groups that is discussed&lt;br /&gt;
next.&lt;br /&gt;
&lt;br /&gt;
===Definitions and Examples===&lt;br /&gt;
&lt;br /&gt;
====Definition 1: Group====&lt;br /&gt;
A '''group''' &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is a set of objects &amp;lt;math&amp;gt;\{A,B,C,&lt;br /&gt;
...\}\,\!&amp;lt;/math&amp;gt; together with a composition rule between them (denoted &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; here and &lt;br /&gt;
called a product or multiplication) such that the following are satisfied:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\circ B)\circ C = A\circ (B \circ C)\,\!&amp;lt;/math&amp;gt;. (&amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; is associative.)&lt;br /&gt;
#If &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\in\mathcal{G}\,\!&amp;lt;/math&amp;gt;, then their product is &amp;lt;math&amp;gt;A\circ B \in \mathcal{G}\,\!&amp;lt;/math&amp;gt;.  (The set is closed under multiplication.)&lt;br /&gt;
#There is an element &amp;lt;math&amp;gt;\mathbb{I}\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;  such that, for all &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbb{I}A = A = A\mathbb{I}\,\!&amp;lt;/math&amp;gt;. (&amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; contains the identity element.)&lt;br /&gt;
#For all &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt; there exists an element &amp;lt;math&amp;gt;A^{-1}\in\mathcal{G}\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;AA^{-1} =  \mathbb{I} =A^{-1}A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the provided examples, the objective is to make a direct connection between a group and a set of symmetries of an object.  The reason is that ''a set of symmetries forms a group'' since it satisfies all the conditions in the definition.  The symmetries are things you can do to (i.e., operations you can perform on) a set that leave the set unchanged.   &lt;br /&gt;
&lt;br /&gt;
To see this, suppose that we operate on a set of vectors whose endpoints have a certain symmetry associated with them.  (For example, the vertices of a triangle.)  Assume the vectors emanate from the origin of a coordinate system.  If the set of matrices is chosen properly, operating on these vectors with the matrices leaves the set of vectors unchanged, i.e. the arrows associated with the vectors still point to the same set of points.  Assuming all possible such matrices are included in the set, the set of matrices, or set of symmetries, forms a group.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a line segment of length 2 cm with midpoint at zero.  Suppose the end points are located on the x-axis at &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt; cm.  If the line segment were rotated &amp;lt;math&amp;gt;180^o\,\!&amp;lt;/math&amp;gt; about any line perpendicular to the segment, it would look like the same line segment.  (Let us be definite and, after choosing x and y axes, take the rotation axis perpendicular to the x-y plane.)  What this rotation does is exchange the two ends.  The set of points &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt; could be acted upon by an operator that&lt;br /&gt;
exchanges the two.  This rotation operation can be represented through multiplication by &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  Then there are two elements in the set of operations to consider.  The first is ''do nothing'' represented by &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;.  (This, of course, is the identity operation &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; for this ''group''.)  The other element is &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  Thus, representing&lt;br /&gt;
multiplication by &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt;, we have a group with the set &amp;lt;math&amp;gt;\{+1,-1\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and operation &amp;lt;math&amp;gt;\circ \equiv \times \,\!&amp;lt;/math&amp;gt;.  Clearly the product is associative (it is ordinary multiplication), the set contains the identity, products are either &amp;lt;math&amp;gt;+1\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;, which are both in the set (indicating closure), and the inverse of &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;; all of the requirements defined above are satisfied.  In fact this is the simplest nontrivial group.&lt;br /&gt;
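The checks in this paragraph can be carried out mechanically.  A minimal Python sketch (the variable names are illustrative, not from the text):

```python
# The group ({+1, -1}, *): check closure, identity, and inverses directly.
# Associativity is inherited from ordinary multiplication of numbers.
G = [1, -1]

closed = all(a * b in G for a in G for b in G)
has_identity = all(1 * a == a == a * 1 for a in G)
has_inverses = all(any(a * b == 1 for b in G) for a in G)
print(closed, has_identity, has_inverses)  # prints: True True True
```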
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
The set of symmetries of an equilateral triangle can be represented in several ways.  Two that are presented here are the set of operations on vectors from the origin to the vertices and the set of permutations on three objects.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Figure D.1&amp;quot;&amp;gt;'''Figure D.1'''&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:triangle2.jpg]]&lt;br /&gt;
|}&lt;br /&gt;
Figure D.1:  An equilateral triangle with vertices in the x-y plane, &amp;lt;math&amp;gt; v_1\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(0,1/\sqrt{3})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(-1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;, and  &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Consider an equilateral triangle with its &lt;br /&gt;
center at the origin of the x-y plane and  vertices&lt;br /&gt;
placed at the following points: &amp;lt;math&amp;gt;(0,1/\sqrt{3})\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;(1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;(-1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;.  (See [[#Figure D.1|Figure D.1]].)  Now consider the&lt;br /&gt;
following operations on the triangle: a rotation of &amp;lt;math&amp;gt;0^o\,\!&amp;lt;/math&amp;gt; (do&lt;br /&gt;
nothing), a rotation of &amp;lt;math&amp;gt;120^o\,\!&amp;lt;/math&amp;gt;, a rotation of &amp;lt;math&amp;gt;240^o\,\!&amp;lt;/math&amp;gt;, and a reflection&lt;br /&gt;
about the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; axis (the line through vertex 1 and the origin), &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;.  There are two other reflections we could perform, labelled &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;, which are reflections through lines bisecting the angles at vertices 2 and 3 respectively, as shown in [[#Figure D.1|Figure D.1]].  These make up the set of six symmetry operations on the equilateral triangle.  &lt;br /&gt;
&lt;br /&gt;
If we take the first of these, &amp;lt;math&amp;gt;P_0\,\!&amp;lt;/math&amp;gt;, to be the original configuration (shown in [[#Figure D.1|Figure D.1]]), then each of the first three configurations is obtained from the original by a rotation, and each of the last three is obtained from a reflection combined with a rotation.  To be explicit, let us consider the following operations:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_2 = \left(\begin{array}{cc} 1 &amp;amp; 0  \\ 0 &amp;amp; 1  &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
R_1 = \left(\begin{array}{cc} -1/2 &amp;amp; -\sqrt{3}/2 \\ \sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
R_2 = \left(\begin{array}{cc}  -1/2 &amp;amp; \sqrt{3}/2  \\ -\sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.1}}&lt;br /&gt;
where &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; is a rotation of the x-y plane by &amp;lt;math&amp;gt;120^o\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;  is a rotation&lt;br /&gt;
of the x-y plane by &amp;lt;math&amp;gt;240^o\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\mathbb{I}_2\,\!&amp;lt;/math&amp;gt; is a rotation of &amp;lt;math&amp;gt;0^o\,\!&amp;lt;/math&amp;gt; (the identity subscript refers to the 2-dimensional nature of the transformation, unlike the other subscripts here).  In addition to these operations, two others must be included&lt;br /&gt;
to complete the set: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma_1   = \left(\begin{array}{cc} -1 &amp;amp;0 \\ 0 &amp;amp;1 \end{array}\right), \;\; &lt;br /&gt;
\sigma_2 =\sigma_1 R_1 =  \left(\begin{array}{cc} 1/2 &amp;amp; \sqrt{3}/2 \\ \sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
\sigma_3 = \sigma_1R_2 = \left(\begin{array}{cc}  1/2 &amp;amp; -\sqrt{3}/2  \\ -\sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\sigma_1 R_1\,\!&amp;lt;/math&amp;gt; is the same as &amp;lt;math&amp;gt;\sigma_1\circ R_1\,\!&amp;lt;/math&amp;gt;, but the &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; has been dropped&lt;br /&gt;
since this is ordinary matrix multiplication.  This group will be used&lt;br /&gt;
as an example for several group properties and is called &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The&lt;br /&gt;
products of these &lt;br /&gt;
elements are summarized in [[#Table D.1|Table D.1]], which is called the&lt;br /&gt;
multiplication table for the group.  The multiplication table will be&lt;br /&gt;
discussed repeatedly throughout this appendix due to its importance in&lt;br /&gt;
group theory.  It would be advisable to stare at it for some time to&lt;br /&gt;
see what patterns can be identified.  The meaning of these patterns&lt;br /&gt;
will be discussed later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;div id=&amp;quot;Table D.1&amp;quot;&amp;gt;&lt;br /&gt;
'''Table D.1: Group Multiplication Table for''' &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;20&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \downarrow\rightarrow\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Table D.1: ''Group multiplication table for the group &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The notation in the upper left corner (&amp;lt;math&amp;gt;\downarrow\rightarrow\,\!&amp;lt;/math&amp;gt;) indicates that the element in the first column (labelling the row) multiplies, on the left, the element in the first row (labelling the column) to obtain the result.  Since the group is not abelian, i.e. the elements do not commute, the order matters.''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
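The entries of Table D.1 can be reproduced numerically from the matrices in Eqs. (D.1) and (D.2).  A short Python sketch, using plain nested lists and floating-point arithmetic (the string labels are illustrative):

```python
import math

s = math.sqrt(3) / 2  # sin of 120 degrees

# The six 2x2 matrices of Eqs. (D.1) and (D.2), as nested lists.
E = {
    "I":  [[1, 0], [0, 1]],
    "R1": [[-0.5, -s], [s, -0.5]],
    "R2": [[-0.5, s], [-s, -0.5]],
    "s1": [[-1, 0], [0, 1]],
    "s2": [[0.5, s], [s, -0.5]],
    "s3": [[0.5, -s], [-s, -0.5]],
}

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def name_of(M):
    # Identify a product matrix by comparing entries within a tolerance.
    for name, C in E.items():
        if all(abs(M[i][j] - C[i][j]) < 1e-9 for i in range(2) for j in range(2)):
            return name
    raise ValueError("product not in the set: closure would fail")

# Build the multiplication table: row element times column element.
table = {(a, b): name_of(matmul(E[a], E[b])) for a in E for b in E}

print(table[("s2", "R2")], table[("R2", "s2")])  # prints: s1 s3
```

The last line exhibits the non-commutativity discussed later: multiplying the same two elements in opposite orders gives different results.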
&lt;br /&gt;
A second way to identify all possible configurations of&lt;br /&gt;
the triangle that leave the triangle looking the same is to use the positions of the vertices.  Label the vertices 1, 2, 3; there are then six possible arrangements of the labels.  Reading counter-clockwise&lt;br /&gt;
from the top, we can have &amp;lt;math&amp;gt;P_0=(1,2,3)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_2=(3,1,2)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_4=(2,3,1)\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;P_1=(1,3,2)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_3=(3,2,1)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_5=(2,1,3)\,\!&amp;lt;/math&amp;gt;.  These are all of the permutations of three objects.  (In this case the three objects are the numbers 1,2,3.)  This is another way to represent the various configurations of the equilateral triangle.&lt;br /&gt;
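The six configurations listed above are exactly the 3! = 6 permutations of the three labels, which can be enumerated directly; a minimal sketch:

```python
from itertools import permutations

# The six triangle configurations are exactly the 3! = 6 permutations
# of the vertex labels (1, 2, 3).
configs = list(permutations((1, 2, 3)))
print(len(configs))  # prints: 6
```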
&lt;br /&gt;
====Definition 2: Order of a Group====&lt;br /&gt;
&lt;br /&gt;
The number of elements in a group is called the '''order''' of the group.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example 1 has two elements and so has order two. Example 2 has six&lt;br /&gt;
elements, so the order of this group is six.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Abelian and Nonabelian Group====&lt;br /&gt;
&lt;br /&gt;
A group for which every element of the group commutes with every other element of the group (&amp;lt;math&amp;gt;g_1g_2 = g_2g_1,\;\;\forall g_1,g_2\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;) is called '''abelian'''.  If any two elements do not commute, the group is called '''nonabelian'''.  &lt;br /&gt;
&lt;br /&gt;
It is clear that Example 1 is an abelian group consisting of only two&lt;br /&gt;
elements &amp;lt;math&amp;gt;+1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  However, Example 2 is clearly a nonabelian&lt;br /&gt;
group as can be seen from the multiplication table.  For example&lt;br /&gt;
&amp;lt;math&amp;gt;\sigma_2R_2 = \sigma_1 \,\!&amp;lt;/math&amp;gt;, but &amp;lt;math&amp;gt;R_2\sigma_2 =\sigma_3 \neq \sigma_1 \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 4: Cyclic Group====&lt;br /&gt;
A '''cyclic group''' is a group in which every element of the group can be obtained from one element and all its distinct powers.  The particular element is called the '''generating element'''.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example 4 provides examples of cyclic groups.  &lt;br /&gt;
&lt;br /&gt;
====Definition 5: Subgroup====&lt;br /&gt;
&lt;br /&gt;
A '''subgroup''' &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is a subset of the group elements that satisfies all&lt;br /&gt;
the properties in the definition of a group under the inherited multiplication rule.&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Consider the set &amp;lt;math&amp;gt;\{0,1,2,3, \cdots, N-1\}\,\!&amp;lt;/math&amp;gt; and identify &amp;lt;math&amp;gt; N\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;.  This is written as &amp;lt;math&amp;gt;0 \equiv N\,\!&amp;lt;/math&amp;gt;.  The operation on this set will be addition modulo &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;.  This is the group of integers modulo &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; and is&lt;br /&gt;
denoted &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt;.  To be concrete, let us consider the group &amp;lt;math&amp;gt;\mathbb{Z}_3\,\!&amp;lt;/math&amp;gt;, consisting of &lt;br /&gt;
&amp;lt;math&amp;gt;\{0,1,2;+\}\,\!&amp;lt;/math&amp;gt;.  (When the operation could be ambiguous, it is often useful to specify it explicitly along with the members of the set.)  Let us check that this is a group.  First, addition is certainly associative.  Second, the identity is zero since &lt;br /&gt;
&amp;lt;math&amp;gt;a+0 =a\,\!&amp;lt;/math&amp;gt; for any integer &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt;.  Third, the set is closed: for example, &amp;lt;math&amp;gt;1+2=3 = 0\,\!&amp;lt;/math&amp;gt; mod &amp;lt;math&amp;gt;3\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
In other words, since &amp;lt;math&amp;gt;3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; are equivalent, the sum of one and&lt;br /&gt;
two is zero, which is in the set.  The same computation shows that &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2\,\!&amp;lt;/math&amp;gt; are inverses of each other (and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; is its own inverse), so every element has an inverse.  The order of the group is 3 (hence the subscript).  &lt;br /&gt;
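The checks just carried out for &amp;lt;math&amp;gt;\mathbb{Z}_3\,\!&amp;lt;/math&amp;gt; can be automated for any &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt;; a brute-force Python sketch:

```python
# The group axioms for Z_N, the integers {0, ..., N-1} under addition mod N,
# checked by brute force; N = 3 reproduces the checks done above.
N = 3
G = list(range(N))

def add(a, b):
    return (a + b) % N

closed = all(add(a, b) in G for a in G for b in G)
identity = all(add(a, 0) == a == add(0, a) for a in G)
inverses = all(any(add(a, b) == 0 for b in G) for a in G)  # inverse of a is N - a
associative = all(add(add(a, b), c) == add(a, add(b, c))
                  for a in G for b in G for c in G)
print(closed, identity, inverses, associative)  # prints: True True True True
```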
&lt;br /&gt;
====Example 1 Revisited====&lt;br /&gt;
&lt;br /&gt;
Recall  [[#Example 1|Example 1]] is a group with &amp;lt;math&amp;gt;\{+1,-1\}\,\!&amp;lt;/math&amp;gt; using multiplication.  &lt;br /&gt;
This is the simplest nontrivial &lt;br /&gt;
''cyclic group'', since it is a cyclic group of order two.  &lt;br /&gt;
All elements of this group are obtained from powers&lt;br /&gt;
of &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(-1)^2 =1\,\!&amp;lt;/math&amp;gt;.  Notice that the generating&lt;br /&gt;
element is special; one cannot just take any element of the group to&lt;br /&gt;
be a generating element.&lt;br /&gt;
&lt;br /&gt;
====Example 4====&lt;br /&gt;
&lt;br /&gt;
We can represent the cyclic group of order &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
in several ways.  One we have seen is &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt; with the operation of addition.  Another is the set of elements &lt;br /&gt;
&amp;lt;math&amp;gt;\{e^{2\pi i n/N}\}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n = 0, 1, 2, 3, ..., N-1\,\!&amp;lt;/math&amp;gt;, with the operation of multiplication.  Since this group can be&lt;br /&gt;
seen as consisting of the element &amp;lt;math&amp;gt;e^{2\pi i/N}\,\!&amp;lt;/math&amp;gt; and all its&lt;br /&gt;
powers, it is a cyclic group with generating element &lt;br /&gt;
&amp;lt;math&amp;gt;e^{2\pi i/N}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
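This multiplicative realization of the cyclic group of order &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; is the set of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;-th roots of unity, and its cyclic structure is easy to check numerically (the choice N = 5 below is illustrative):

```python
import cmath

# The cyclic group of order N realized as the N-th roots of unity,
# generated by g = exp(2*pi*i/N) under ordinary complex multiplication.
N = 5
g = cmath.exp(2j * cmath.pi / N)
elements = [g ** n for n in range(N)]  # n = 0 gives the identity, 1

def member(z):
    # Membership test with a tolerance, since the arithmetic is floating point.
    return any(abs(z - w) < 1e-9 for w in elements)

cycles_back = abs(g ** N - 1) < 1e-9                            # g^N = identity
closed = all(member(a * b) for a in elements for b in elements)  # closure
print(cycles_back, closed)  # prints: True True
```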
&lt;br /&gt;
====Example 5====&lt;br /&gt;
&lt;br /&gt;
Modular arithmetic under multiplication also yields a group.  For a prime &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, the set of nonzero integers modulo &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{1,2,\ldots,p-1\}\,\!&amp;lt;/math&amp;gt;, forms a group under multiplication mod &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;: the product of two nonzero residues is nonzero mod &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, the identity is &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, and every element has a multiplicative inverse mod &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
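As a concrete instance of modular arithmetic under multiplication, here is a minimal Python check that the nonzero residues modulo a prime form a group (the choice p = 7 is illustrative):

```python
# Modular arithmetic under multiplication: for a prime p, the nonzero
# residues {1, ..., p-1} form a group under multiplication mod p.
p = 7
G = list(range(1, p))

def mul(a, b):
    return (a * b) % p

closed = all(mul(a, b) in G for a in G for b in G)
inverses = all(any(mul(a, b) == 1 for b in G) for a in G)
print(closed, inverses)  # prints: True True
```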
&lt;br /&gt;
===Comparing Groups: Homomorphisms and Isomorphisms===&lt;br /&gt;
&lt;br /&gt;
Let us consider two groups &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; with product rules symbolized by &amp;lt;math&amp;gt;\cdot\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; respectively.  Let the elements of &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; be denoted &amp;lt;math&amp;gt;a_1,a_2, ...\,\!&amp;lt;/math&amp;gt; and the elements of &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; be denoted &amp;lt;math&amp;gt;b_1,b_2, ...\,\!&amp;lt;/math&amp;gt;  When comparing two groups to see how similar they are, the relationship among the&lt;br /&gt;
elements under the product rule is all-important.  Therefore, if a map from one set of elements to another is given by &amp;lt;math&amp;gt;f:\mathcal{G}_1\rightarrow\mathcal{G}_2\,\!&amp;lt;/math&amp;gt;, meaning &amp;lt;math&amp;gt;f(a_1) \in\mathcal{G}_2\,\!&amp;lt;/math&amp;gt;, then the map preserves the (algebraic) structure of the group if, for all&lt;br /&gt;
&amp;lt;math&amp;gt;a_i,a_j,a_k \in \mathcal{G}_1\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
a_i\cdot a_j = a_k \;\; \Rightarrow \;\; f(a_i)\circ f(a_j) = f(a_k).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.3}}  &lt;br /&gt;
(Notice that this can be true even if the map takes all of the elements &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; to the identity.)  &lt;br /&gt;
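Condition (D.3) can be tested directly for a concrete map.  The following illustrative example (an assumption made here, not taken from the text) uses the map f(n) = i^n from &amp;lt;math&amp;gt;\mathbb{Z}_4\,\!&amp;lt;/math&amp;gt; under addition mod 4 to the set {1, i, -1, -i} under multiplication, and also checks the trivial map mentioned in the parenthetical remark:

```python
# Condition (D.3) checked for the map f(n) = i**n from Z_4 (addition mod 4)
# to {1, i, -1, -i} (multiplication).
def f(n):
    return 1j ** n

homomorphic = all(abs(f((a + b) % 4) - f(a) * f(b)) < 1e-9
                  for a in range(4) for b in range(4))

# The trivial map sending every element to the identity also satisfies (D.3).
def trivial(n):
    return 1

trivially_homomorphic = all(trivial((a + b) % 4) == trivial(a) * trivial(b)
                            for a in range(4) for b in range(4))
print(homomorphic, trivially_homomorphic)  # prints: True True
```

This particular f happens to be one-to-one and onto as well, so it is in fact an isomorphism in the sense defined next.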
&lt;br /&gt;
====Definition 6: Homomorphism====&lt;br /&gt;
&lt;br /&gt;
If the condition [[#eqD.3|Eq.(D.3)]] is satisfied, the map is called a '''homomorphic map''' or a '''homomorphism'''.  A homomorphism &amp;lt;math&amp;gt;f\,\!&amp;lt;/math&amp;gt; satisfies the important property that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f(A\cdot B) = f(A) \circ f(B).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.4}}&lt;br /&gt;
The composition &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; can, in general, be different from &amp;lt;math&amp;gt;\cdot\,\!&amp;lt;/math&amp;gt;, but here both will be matrix multiplication unless otherwise stated.&lt;br /&gt;
&lt;br /&gt;
====Definition 7: Isomorphism====&lt;br /&gt;
&lt;br /&gt;
If a homomorphism is one-to-one (each &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; is mapped to one and only one &amp;lt;math&amp;gt;b_j\,\!&amp;lt;/math&amp;gt;) and onto (each element in &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; has an element of &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; mapped to it), then the map is called an '''isomorphic map''' or an&lt;br /&gt;
'''isomorphism'''.  &lt;br /&gt;
&lt;br /&gt;
These definitions are used repeatedly in the representation theory of groups discussed below.&lt;br /&gt;
&lt;br /&gt;
===Discussion===&lt;br /&gt;
&lt;br /&gt;
With only these few definitions it is possible to discuss many important properties of groups and some of the reasons why they are so&lt;br /&gt;
important to physics.  Let us first discuss some of the important properties of the group multiplication table.  &lt;br /&gt;
&lt;br /&gt;
====Group Multiplication Table====&lt;br /&gt;
&lt;br /&gt;
The group multiplication table specifies the structure of the group and thus identifies a group.  One example of this is when the&lt;br /&gt;
group is abelian.  For all abelian groups the table is symmetric about the diagonal.  (This follows from the fact that &amp;lt;math&amp;gt;ab=ba\,\!&amp;lt;/math&amp;gt; for abelian&lt;br /&gt;
groups.)  Another example is the presence of subgroups.  This will be illustrated in this section.   &lt;br /&gt;
&lt;br /&gt;
====Subgroups: Return to Example 2====&lt;br /&gt;
&lt;br /&gt;
In [[#Example 2|Example 2]], [[#Table D.1|Table D.1]] immediately shows that the elements&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbb{I}, R_1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt; form a subgroup since they are closed&lt;br /&gt;
under multiplication.  Another somewhat less obvious subgroup&lt;br /&gt;
consists of the elements &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_1 \,\!&amp;lt;/math&amp;gt;.  This is a convenient&lt;br /&gt;
method for identifying subgroups, but is clearly limited to groups&lt;br /&gt;
with a relatively small order.&lt;br /&gt;
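For a group of small order, the subgroup search can also be done by brute force.  A sketch, representing the elements of &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt; as permutations rather than matrices (an equivalent representation, per Example 2):

```python
from itertools import combinations

# Brute-force search for subgroups of S_3.  Elements are permutations of
# (0, 1, 2); composition (p o q)(i) = p(q(i)) is the group product.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)
S3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 1), (2, 1, 0), (1, 0, 2)]

def is_subgroup(subset):
    # For a finite subset, containing the identity and being closed under
    # the product already forces the existence of inverses.
    s = set(subset)
    return identity in s and all(compose(a, b) in s for a in s for b in s)

subgroups = [set(c) for r in range(1, 7)
             for c in combinations(S3, r) if is_subgroup(c)]
print(sorted(len(s) for s in subgroups))  # prints: [1, 2, 2, 2, 3, 6]
```

The sizes found (one trivial subgroup, three of order two from the reflections, one of order three from the rotations, and the whole group) match the subgroups read off from Table D.1.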
&lt;br /&gt;
====The Rearrangement Theorem====&lt;br /&gt;
&lt;br /&gt;
Notice that each group element appears in each row and each column of [[#Table D.1|Table D.1]] once and only once.  This is no coincidence, but&lt;br /&gt;
is a general property of the multiplication table for groups.  This implies that each row and column contains each and every group element&lt;br /&gt;
(due to the presence of the identity) so that each row and column is a simple rearrangement of the set of elements.  For this reason, this is&lt;br /&gt;
sometimes called the rearrangement theorem and follows directly from the uniqueness of the elements in the set.  (If there were two&lt;br /&gt;
elements in a row that were the same, then &amp;lt;math&amp;gt;ac=ab\,\!&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;a,b,c\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
But then &amp;lt;math&amp;gt;a^{-1}ac = a^{-1}ab \Rightarrow c=b\,\!&amp;lt;/math&amp;gt;, which cannot happen if&lt;br /&gt;
all elements are distinct.)&lt;br /&gt;
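The rearrangement property is easy to confirm for any small group; a sketch using the addition table of &amp;lt;math&amp;gt;\mathbb{Z}_5\,\!&amp;lt;/math&amp;gt; (any finite group would do):

```python
# Rearrangement theorem: each row and each column of a group multiplication
# table is a permutation of the full element list.  Checked for Z_5 under
# addition mod 5.
N = 5
G = list(range(N))
table = [[(a + b) % N for b in G] for a in G]

rows_ok = all(sorted(row) == G for row in table)
cols_ok = all(sorted(table[a][b] for a in G) == G for b in G)
print(rows_ok, cols_ok)  # prints: True True
```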
&lt;br /&gt;
===A Little Representation Theory===&lt;br /&gt;
&lt;br /&gt;
A group is specified by a set of elements, its product rule, and the&lt;br /&gt;
relations among the elements of the group under the product rule.  &lt;br /&gt;
For finite order groups the group multiplication table is how one &lt;br /&gt;
identifies a group or shows that two groups are homomorphic&lt;br /&gt;
(explicitly or not).  &lt;br /&gt;
&lt;br /&gt;
====Definition 8: Representation====&lt;br /&gt;
&lt;br /&gt;
A '''matrix representation''' of an abstract group is&lt;br /&gt;
any set of matrices, with matrix multiplication as the product rule, onto which the abstract group can be homomorphically mapped.  &lt;br /&gt;
&lt;br /&gt;
More generally, if there is a homomorphic map from the set of abstract group elements onto a set of operators which, with their own combination rule (multiplication rule), satisfies the group axioms, then the operators form a representation of the group.  (This includes preserving products as described in [[#Definition 6: Homomorphism|Section 3.1]].)&lt;br /&gt;
&lt;br /&gt;
For our purposes, it is very important to note that a&lt;br /&gt;
set of group elements can always be represented by a set of matrices&lt;br /&gt;
so that we may restrict our attention to matrix representations.  &lt;br /&gt;
This, along with ordinary &lt;br /&gt;
matrix multiplication for the product rule, provides a way to represent&lt;br /&gt;
any group.  This is true for groups that have a finite order as well&lt;br /&gt;
as infinite order (discussed later).  &lt;br /&gt;
&lt;br /&gt;
Note that a representation is a ''homomorphism'' that can be a many-to-one map.  If it is an isomorphism, the representation is said to be '''faithful'''.  If, however, all matrices are the identity matrix, then all group elements are mapped to the identity and the multiplication relations (in the group multiplication table) are preserved; this representation is sometimes called the ''trivial representation''.  This is always a valid, but not very informative and certainly not faithful, representation of any group.  &lt;br /&gt;
&lt;br /&gt;
As will be shown in this first example, there are different sets of matrices that can represent the same group.  This example will provide motivation for what follows.&lt;br /&gt;
&lt;br /&gt;
====Example 6====&lt;br /&gt;
&lt;br /&gt;
Let us consider an example of the representation of the group from&lt;br /&gt;
[[#Example 2|Example 2]].  This is a group of operations that will&lt;br /&gt;
take any permutation of the vertices to any other permutation.  This&lt;br /&gt;
is also the set of permutations of three objects.  This group is often&lt;br /&gt;
denoted &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The set of matrices representing the&lt;br /&gt;
rotations, reflection, and rotations combined with reflection provides&lt;br /&gt;
one way of representing this group.  Another way to represent this&lt;br /&gt;
group is to use &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrices rather than the&lt;br /&gt;
&amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices given in the example.  Let us&lt;br /&gt;
consider the following set of matrices:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;1&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_2 = \left(\begin{array}{ccc} 0&amp;amp;0&amp;amp;1 \\ 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;1&amp;amp;0 \end{array}\right), \\&lt;br /&gt;
P_4 = \left(\begin{array}{ccc} 0&amp;amp;1&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \\ 1&amp;amp;0&amp;amp;0 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_1 = \left(\begin{array}{ccc} 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \\ 0&amp;amp;1&amp;amp;0 \end{array}\right),\\&lt;br /&gt;
P_3 = \left(\begin{array}{ccc} 0&amp;amp;0&amp;amp;1 \\ 0&amp;amp;1&amp;amp;0 \\ 1&amp;amp;0&amp;amp;0 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_5 = \left(\begin{array}{ccc} 0&amp;amp;1&amp;amp;0 \\ 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \end{array}\right). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|D.5}}&lt;br /&gt;
Clearly, when these matrices act on a column vector, labelling the&lt;br /&gt;
vertices,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{c} 1 \\ 2 \\ 3 \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.6}}&lt;br /&gt;
the result is one of the permutations of three objects.  These&lt;br /&gt;
permutations correspond to the same actions as the &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices&lt;br /&gt;
given in [[#Example 2|Example 2]] above.  Therefore, these two sets of matrices&lt;br /&gt;
represent the ''same'' group, &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  These representations are clearly&lt;br /&gt;
different; in fact, the dimensions of the matrices representing the&lt;br /&gt;
group are different for the two representations.  There are&lt;br /&gt;
other representations that can be constructed immediately.  Consider&lt;br /&gt;
a set of matrices like the following:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbb{I}_5 = \left(\begin{array}{cc} \mathbb{I}_3&amp;amp;0 \\ 0&amp;amp;\mathbb{I}_2 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_2 = \left(\begin{array}{cc} P_2&amp;amp;0 \\ 0&amp;amp;R_1  \end{array}\right), \\&lt;br /&gt;
g_4 = \left(\begin{array}{cc} P_4&amp;amp;0 \\ 0&amp;amp;R_2  \end{array}\right), \; \;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_1 = \left(\begin{array}{cc} P_1&amp;amp;0 \\ 0&amp;amp; \sigma_1 \end{array}\right), \\&lt;br /&gt;
g_3 = \left(\begin{array}{cc} P_3&amp;amp;0 \\ 0&amp;amp;\sigma_2 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_5 = \left(\begin{array}{cc} P_5&amp;amp;0 \\ 0&amp;amp; \sigma_3  \end{array}\right). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|D.7}}&lt;br /&gt;
This set of matrices is said to be block-diagonal since it only has&lt;br /&gt;
non-zero elements in blocks along the diagonal.  The &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; represents a&lt;br /&gt;
block of zeroes which is either &amp;lt;math&amp;gt;3\times 2\,\!&amp;lt;/math&amp;gt; (upper right) or &amp;lt;math&amp;gt;2\times&lt;br /&gt;
3\,\!&amp;lt;/math&amp;gt; (lower left).  This set of matrices clearly satisfies the same multiplication relations as the sets given above,&lt;br /&gt;
(&amp;lt;math&amp;gt;\{\mathbb{I}_3,P_1,P_2,P_3,P_4,P_5\}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\{\mathbb{I}_2, R_1, R_2, \sigma_1, \sigma_2, \sigma_3\}\,\!&amp;lt;/math&amp;gt;), since the matrices multiply in&lt;br /&gt;
blocks.  The two sets therefore have the same multiplication table and&lt;br /&gt;
represent isomorphic groups.  Thus this is another representation of&lt;br /&gt;
the group &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt; that is different from either of the&lt;br /&gt;
two representations in the subblocks along the diagonal since it is a&lt;br /&gt;
combination of the two.&lt;br /&gt;
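The claim that the six permutation matrices of Equation (D.5) close under multiplication, and hence furnish a representation of S_3, can be checked numerically. A minimal NumPy sketch (not part of the original notes; the matrix names follow Equation (D.5)):

```python
import numpy as np

I3 = np.eye(3)
P1 = np.array([[1,0,0],[0,0,1],[0,1,0]])
P2 = np.array([[0,0,1],[1,0,0],[0,1,0]])
P3 = np.array([[0,0,1],[0,1,0],[1,0,0]])
P4 = np.array([[0,1,0],[0,0,1],[1,0,0]])
P5 = np.array([[0,1,0],[1,0,0],[0,0,1]])
perms = [I3, P1, P2, P3, P4, P5]

# Closure: every product of two P's is again one of the six P's,
# so the multiplication table of S3 can be read off numerically.
def index_of(M, elems):
    for k, E in enumerate(elems):
        if np.array_equal(M, E):
            return k
    raise ValueError("not closed")

table = [[index_of(A @ B, perms) for B in perms] for A in perms]
print(table[2][4])   # 0: P2 @ P4 = identity (they are inverse rotations)
```

Each row of the resulting table is a permutation of the six indices (a Latin square), as the group axioms require.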
&lt;br /&gt;
====Definition 9: Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; be an invertible matrix and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be any matrix.  In these notes, by '''similarity transformation'''  we mean a transformation of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;M^\prime\,\!&amp;lt;/math&amp;gt; that looks like &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;M^\prime = SMS^{-1}.\,\!&amp;lt;/math&amp;gt;|D.8}}&lt;br /&gt;
We say the matrices &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M^\prime\,\!&amp;lt;/math&amp;gt; are similar matrices.  &lt;br /&gt;
&lt;br /&gt;
The importance of similarity transformations for representation theory is that they leave matrix equations unchanged.  Suppose &amp;lt;math&amp;gt;A=BC \,\!&amp;lt;/math&amp;gt;.  Then defining &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;A=BC \; \Rightarrow A^\prime=B^\prime C^\prime\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
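This invariance of matrix equations is easy to verify numerically; a small NumPy sketch with randomly chosen matrices (illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Any invertible S and any product A = B C: the transformed matrices
# satisfy the same equation, A' = B' C'.
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
A = B @ C

S = rng.standard_normal((3, 3))          # generically invertible
Sinv = np.linalg.inv(S)

Ap, Bp, Cp = S @ A @ Sinv, S @ B @ Sinv, S @ C @ Sinv
print(np.allclose(Ap, Bp @ Cp))          # True
```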
&lt;br /&gt;
For more discussion on similarity transformations, see [[Appendix C - Vectors and Linear Algebra|Appendix C]], especially [[Appendix C - Vectors and Linear Algebra#The Trace|Section 3.5]], [[Appendix C - Vectors and Linear Algebra#The Trace|Section 3.6]], and  [[Appendix C - Vectors and Linear Algebra#The Trace|Section 5.1]].&lt;br /&gt;
&lt;br /&gt;
====Example 6 Continued====&lt;br /&gt;
&lt;br /&gt;
Example 6 is a non-trivial problem even though it appears&lt;br /&gt;
otherwise.  The way to show this is to&lt;br /&gt;
perform a similarity transformation, &amp;lt;math&amp;gt;g&lt;br /&gt;
\rightarrow S g S^{-1}\,\!&amp;lt;/math&amp;gt;, on all elements &amp;lt;math&amp;gt;g\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the group.  Since &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; is any invertible matrix,&lt;br /&gt;
it could mix all rows and columns.  This would make it very difficult to identify the block-diagonal form or even know that it exists unless some other tools are used.&lt;br /&gt;
&lt;br /&gt;
Furthermore, given a set of matrices that are known to form a representation of the group, it is non-trivial to find the similarity transformation that will simultaneously block-diagonalize all of these matrices to enable the identification of irreducible blocks.&lt;br /&gt;
&lt;br /&gt;
====Equivalent Representations====&lt;br /&gt;
&lt;br /&gt;
Two representations &amp;lt;math&amp;gt;D^{(1)}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D^{(2)} \,\!&amp;lt;/math&amp;gt; are '''equivalent''' if and only if there is an invertible matrix &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;D^{(1)} = SD^{(2)}S^{-1} \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We will only consider matrix representations.  In this case, the matrices will act on a vector space &amp;lt;math&amp;gt;\mathcal{V}\,\!&amp;lt;/math&amp;gt; called the '''representation space.'''&lt;br /&gt;
&lt;br /&gt;
===Miscellaneous Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 10: Stabilizer====&lt;br /&gt;
&lt;br /&gt;
The '''stabilizer''' of an element &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; of a set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; is the subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that leaves the element &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; fixed: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{S} = \{S\in \mathcal{G}\,|\,Sm=m\}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.9}}&lt;br /&gt;
The stabilizer of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; is also called the '''isotropy group''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, the '''isotropy subgroup''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, the '''stationary subgroup''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, or, sometimes in physics, the '''little group''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
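As a concrete sketch (not from the notes), take the S_3 permutation matrices of Equation (D.5) acting on the column vector of vertex labels; the stabilizer of the label 3 can be found by direct search:

```python
import numpy as np

I3 = np.eye(3, dtype=int)
P1 = np.array([[1,0,0],[0,0,1],[0,1,0]])
P2 = np.array([[0,0,1],[1,0,0],[0,1,0]])
P3 = np.array([[0,0,1],[0,1,0],[1,0,0]])
P4 = np.array([[0,1,0],[0,0,1],[1,0,0]])
P5 = np.array([[0,1,0],[1,0,0],[0,0,1]])
group = {"I": I3, "P1": P1, "P2": P2, "P3": P3, "P4": P4, "P5": P5}

v = np.array([1, 2, 3])
# Stabilizer of the label 3: all g with (g v)[2] == 3.
stab = [name for name, g in group.items() if (g @ v)[2] == 3]
print(stab)   # ['I', 'P5']
```

The result, {I, P5}, is indeed a subgroup: P5 is the swap of the first two labels and squares to the identity.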
&lt;br /&gt;
====Definition 11: Centralizer====&lt;br /&gt;
&lt;br /&gt;
The '''centralizer''' of a set of elements in a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is the subgroup consisting of those elements of &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that commute with every element of the set.&lt;br /&gt;
&lt;br /&gt;
====Definition 12: Pauli Group====&lt;br /&gt;
The '''Pauli Group''' on &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits, denoted &amp;lt;math&amp;gt;\mathcal{P}_n\,\!&amp;lt;/math&amp;gt;, is the set of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-fold tensor products of the Pauli matrices &amp;lt;math&amp;gt;\mathbb{I}, X, Y, Z\,\!&amp;lt;/math&amp;gt;, together with the overall coefficients &amp;lt;math&amp;gt;\pm 1,\pm i\,\!&amp;lt;/math&amp;gt;.  This is an example of a group.  It is defined here due to its importance for quantum error correcting codes; the factors &amp;lt;math&amp;gt;\pm 1,\pm i\,\!&amp;lt;/math&amp;gt; are required for the closure property in the definition of a group.&lt;br /&gt;
&lt;br /&gt;
====Properties of the Pauli Group====&lt;br /&gt;
&lt;br /&gt;
Let us consider the Pauli group for 2 qubits with the tensor product symbols omitted.  The following are elements:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\mathbb{I},\mathbb{I}X,\mathbb{I}Y,\mathbb{I}Z,X\mathbb{I},XX,XY,XZ,Y\mathbb{I},YX,YY,YZ,Z\mathbb{I},ZX,ZY,ZZ,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.10}}&lt;br /&gt;
as are all of these elements multiplied by &amp;lt;math&amp;gt;-1,\,\!&amp;lt;/math&amp;gt; and all of these elements multiplied by &amp;lt;math&amp;gt;i,\,\!&amp;lt;/math&amp;gt; as well as all of these elements multiplied by &amp;lt;math&amp;gt;-i.\,\!&amp;lt;/math&amp;gt;  Thus there are &amp;lt;math&amp;gt;4^3\,\!&amp;lt;/math&amp;gt; total elements of the group for two qubits.  In general there are &amp;lt;math&amp;gt;4\cdot 4^n\,\!&amp;lt;/math&amp;gt; elements for the Pauli group for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits.&lt;br /&gt;
&lt;br /&gt;
One of the nice and interesting properties of the Pauli group is that every pair of its elements, say &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;, either commutes, &amp;lt;math&amp;gt;[A,B]= AB-BA =0\,\!&amp;lt;/math&amp;gt;, or anti-commutes, &amp;lt;math&amp;gt;\{A,B\} = AB+BA =0\,\!&amp;lt;/math&amp;gt;.  This turns out to be very useful.  &lt;br /&gt;
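This pairwise commute-or-anticommute property can be verified by brute force for two qubits; a NumPy sketch (the phases ±1, ±i are omitted, since an overall phase does not affect whether a pair commutes):

```python
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# All 16 two-qubit Pauli products.
paulis = [np.kron(a, b) for a, b in product([I, X, Y, Z], repeat=2)]

def commutes(A, B):
    return np.allclose(A @ B - B @ A, 0)

def anticommutes(A, B):
    return np.allclose(A @ B + B @ A, 0)

# Every pair either commutes or anticommutes -- never neither.
ok = all(commutes(A, B) or anticommutes(A, B)
         for A in paulis for B in paulis)
print(ok)   # True
```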
&lt;br /&gt;
Another notation for [[#eqD.10|Equation (D.10)]] is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I},X_2,Y_2,Z_2,X_1,X_1X_2,X_1Y_2,X_1Z_2,Y_1,Y_1X_2,Y_1Y_2,Y_1Z_2,Z_1,Z_1X_2,Z_1Y_2,Z_1Z_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.11}}&lt;br /&gt;
Clearly this index notation has an advantage for large products.  It also enables us to immediately see the weight of an operator.&lt;br /&gt;
&lt;br /&gt;
====Definition 13: Weight of an Operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This definition is most often used in the context of the Pauli Group.  Its importance is seen in quantum error correcting codes.&lt;br /&gt;
&lt;br /&gt;
====Definition 14: Generators of a Group====&lt;br /&gt;
&lt;br /&gt;
Let us consider a discrete group (or subgroup of a larger group).  There exists a subset of the group elements that will give all of the (sub)group elements through multiplication.  The elements in this subset are called '''generators''' of the group.  &lt;br /&gt;
&lt;br /&gt;
Note that the set of generators is not unique.  &lt;br /&gt;
&lt;br /&gt;
The generators are a very convenient set to use because it is a much smaller set than the whole group and many properties of the group can discovered using only the generators.  For example, if every generator of a subgroup acts on an object and leaves it invariant, then every element of the group will also leave it invariant since they are all given by products of the generators.  Thus one only needs to check whether or not the generators will leave an object invariant.  &lt;br /&gt;
&lt;br /&gt;
One example is the stabilizer subgroup where a set of generators stabilizes, or leaves invariant, the code words of the stabilizer code.&lt;br /&gt;
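As a sketch of the idea (not from the notes), the full group S_3 can be rebuilt from just two generators, a cyclic rotation and a transposition, by repeatedly multiplying until the set closes; the closure routine below is illustrative:

```python
import numpy as np

# Two generators suffice for S3: a 3-cycle and a transposition.
P2 = np.array([[0,0,1],[1,0,0],[0,1,0]])   # cyclic rotation of the labels
P5 = np.array([[0,1,0],[1,0,0],[0,0,1]])   # swap of labels 1 and 2

def generate(gens):
    """Repeatedly multiply until the set closes (finite groups only)."""
    elems = [np.eye(3, dtype=int)]
    frontier = list(gens)
    while frontier:
        g = frontier.pop()
        if not any(np.array_equal(g, e) for e in elems):
            elems.append(g)
            frontier.extend([g @ h for h in elems] + [h @ g for h in elems])
    return elems

print(len(generate([P2, P5])))   # 6 -- all of S3
```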
&lt;br /&gt;
====Definition 15: Normalizer====&lt;br /&gt;
&lt;br /&gt;
The '''normalizer''' of a set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; is the subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that leaves the set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; fixed: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{S} = \{S\in \mathcal{G}\,|\,S\mathcal{M}S^{-1}=\mathcal{M}\}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.12}}&lt;br /&gt;
Note the difference from the centralizer, with which the normalizer should not be confused.  The centralizer leaves ''every element'' of the set fixed.  The normalizer contains the centralizer as a special case, but its elements can move elements around within the set.&lt;br /&gt;
&lt;br /&gt;
====Definition 16: Coset====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; be a subgroup and &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; be an element of the group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt;.  The left '''coset''' is a subset of the group &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G\mathcal{S} = \{GS|S\in\mathcal{S} \}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.13}}&lt;br /&gt;
&lt;br /&gt;
One can similarly define the right coset.  &lt;br /&gt;
&lt;br /&gt;
The importance of cosets is that they partition the group in a particular way.  If there is another coset, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
K\mathcal{S} = \{KS|S\in\mathcal{S} \},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.14}}&lt;br /&gt;
then either &amp;lt;math&amp;gt;G\mathcal{S}=K\mathcal{S}\,\!&amp;lt;/math&amp;gt; or they are disjoint sets, having no element in common.  (This is because &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is a subgroup: if the two cosets share a single element, multiplying it by the elements of &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; shows that the cosets coincide.)&lt;br /&gt;
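The partition property is easy to see numerically for the order-two subgroup {I, P5} of S_3 (a sketch, not from the notes): the six group elements split into three disjoint left cosets of size two.

```python
import numpy as np

I3 = np.eye(3, dtype=int)
P1 = np.array([[1,0,0],[0,0,1],[0,1,0]])
P2 = np.array([[0,0,1],[1,0,0],[0,1,0]])
P3 = np.array([[0,0,1],[0,1,0],[1,0,0]])
P4 = np.array([[0,1,0],[0,0,1],[1,0,0]])
P5 = np.array([[0,1,0],[1,0,0],[0,0,1]])
S = [I3, P5]                  # the subgroup {I, P5} (P5 squares to I)

def coset(G, S):
    # A coset as a hashable set of flattened matrices.
    return frozenset(tuple((G @ s).flatten()) for s in S)

cosets = {coset(G, S) for G in [I3, P1, P2, P3, P4, P5]}
print(len(cosets))            # 3 distinct cosets of size 2: 6 = 3 * 2
```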
&lt;br /&gt;
===Infinite Order Groups: Lie Groups===&lt;br /&gt;
&lt;br /&gt;
All of the examples presented so far have been groups of finite order.  Groups of infinite order can often be described by one or more continuous parameters.  Groups that depend differentiably on those parameters are called ''Lie groups''.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Definition 17: Lie Group====&lt;br /&gt;
&lt;br /&gt;
A '''Lie group''' is a group that is also a differentiable manifold.  (See for example [[Bibliography#Cecile:book|Analysis, Manifolds, and Physics]]).  &lt;br /&gt;
&lt;br /&gt;
In this section, several examples of Lie groups are given.  In physics these groups correspond to a continuous set of symmetries, whereas the groups of finite order correspond to a discrete set of symmetries.&lt;br /&gt;
&lt;br /&gt;
====Example 7====&lt;br /&gt;
&lt;br /&gt;
The Lie group most often used as the introductory example is the group consisting of the set &amp;lt;math&amp;gt;e^{i\theta}\,\!&amp;lt;/math&amp;gt; for all &lt;br /&gt;
&amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  This group has an infinite number of elements (i.e. an infinite order) and one parameter, &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  The group is also a differentiable manifold---a circle.  Notice that this group is also isomorphic to the set of matrices &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
       \cos \theta &amp;amp; -\sin \theta \\&lt;br /&gt;
       \sin \theta &amp;amp; \cos \theta &lt;br /&gt;
\end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.15}}&lt;br /&gt;
If this matrix were to act on a unit vector in the x-y plane, it would rotate that vector around in a circle; after &amp;lt;math&amp;gt;2\pi\,\!&amp;lt;/math&amp;gt;, the tip of the vector would sweep out a circle of unit radius.&lt;br /&gt;
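The group law of this one-parameter Lie group, e^{iθ₁}e^{iθ₂} = e^{i(θ₁+θ₂)}, is mirrored by the rotation matrices of Equation (D.15); a quick numerical check (not from the notes):

```python
import numpy as np

def R(theta):
    """The 2x2 rotation matrix of Equation (D.15)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.7, 1.9
# The group law mirrors e^{ia} e^{ib} = e^{i(a+b)}: one parameter, a circle.
print(np.allclose(R(a) @ R(b), R(a + b)))   # True
```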
&lt;br /&gt;
====Example 8====&lt;br /&gt;
&lt;br /&gt;
Another example of a Lie group, and one of the most important for quantum information, is the set of complex &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices&lt;br /&gt;
that satisfy&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = \mathbb{I} = U U^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.16}}&lt;br /&gt;
This group is called &amp;lt;math&amp;gt;U(2)\,\!&amp;lt;/math&amp;gt;  and is the set of ''unitary'' &lt;br /&gt;
&amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices (hence the &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;).  Notice that the determinant&lt;br /&gt;
of this set is &amp;lt;math&amp;gt;e^{i\alpha}\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; is a real number, since  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
1 = \det(\mathbb{I}) = \det(U U^\dagger) = \det(U)\det(U^\dagger) &lt;br /&gt;
  = \det(U)(\det(U))^*. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.17}}&lt;br /&gt;
&lt;br /&gt;
There is a subgroup of this group that is often considered---the subgroup with determinant one.  This group is denoted &amp;lt;math&amp;gt;SU(2)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and is known as the ''special unitary group''.  The term unitary refers to the fact that &amp;lt;math&amp;gt;U^\dagger U = I = UU^\dagger\,\!&amp;lt;/math&amp;gt;, and the &amp;quot;S&amp;quot; for special indicates that it has determinant one.&lt;br /&gt;
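A general SU(2) element can be written with rows (a, -b*) and (b, a*) where |a|² + |b|² = 1; the sketch below (the helper `su2` is illustrative, not from the notes) builds one such matrix and verifies unitarity and unit determinant:

```python
import numpy as np

def su2(a, b):
    """A generic SU(2) element: rows (a, -b*), (b, a*) with |a|^2+|b|^2=1."""
    n = np.sqrt(abs(a)**2 + abs(b)**2)
    a, b = a / n, b / n
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

U = su2(0.6 + 0.3j, 0.2 - 0.7j)
print(np.allclose(U.conj().T @ U, np.eye(2)),       # unitary: True
      np.isclose(np.linalg.det(U), 1.0))            # det = 1: True
```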
&lt;br /&gt;
====Example 9====&lt;br /&gt;
&lt;br /&gt;
One can immediately generalize the unitary and special unitary groups&lt;br /&gt;
to &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; matrices.  These are denoted &amp;lt;math&amp;gt;U(N)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;SU(N)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
respectively.  In quantum computing, an important family of unitary groups is &amp;lt;math&amp;gt;U(2^n)\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of qubits.  This is the set of all possible unitary transformations on a set of qubits.&lt;br /&gt;
&lt;br /&gt;
====Example 10====&lt;br /&gt;
&lt;br /&gt;
The complex General Linear group is the set of invertible &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; matrices with complex numbers as entries.  It is denoted &amp;lt;math&amp;gt;GL(N,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===More Representation Theory===&lt;br /&gt;
&lt;br /&gt;
In physics we are most often concerned with linear representations of groups that use linear operators to represent group elements; we represent these operators with matrices.  In this Appendix, the focus is entirely on these types of representations, although this is not always stated explicitly.  Although these comments have been made above for finite groups, they are worth reiterating due to their importance and because they also apply to infinite order groups, such as Lie groups.  Furthermore, definitions introduced for finite order groups are also applicable to Lie groups.&lt;br /&gt;
&lt;br /&gt;
Thus the previous discussion of representation theory applies to the representation of Lie groups.  A representation of a group can be &amp;quot;reduced&amp;quot; to block-diagonal form.  When these blocks cannot be further reduced, the blocks are called &amp;quot;irreducible&amp;quot;.  These irreducible blocks make up &amp;quot;irreducible representations.&amp;quot; Our study of representation theory is chiefly concerned with these irreducible blocks and how to find them.  &lt;br /&gt;
&lt;br /&gt;
Clearly, a set of matrices that is block-diagonalizable but has been acted upon by a highly non-trivial &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; may not look block-diagonal at all, and a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; may be represented by sets of matrices with many different dimensions and many different block-diagonal forms.  Therefore finding irreducible blocks and the similarity transformation that simultaneously block-diagonalizes all matrices of a given representation is highly non-trivial.  &lt;br /&gt;
&lt;br /&gt;
Before discussing the representation of Lie groups, there is another definition that is quite helpful.  &lt;br /&gt;
&lt;br /&gt;
====The Lie Algebra of a Lie Group====&lt;br /&gt;
&lt;br /&gt;
The Lie algebra of a Lie group is defined as the set of left-invariant vector fields on the manifold of the Lie group.  For our purposes, the Lie algebra will be described by the basis elements of the tangent space to the origin of the group that is isomorphic to the set of left-invariant vector fields.  To see how to relate the group and algebra and to see how this is useful, let us suppose that there is a Lie algebra corresponding to a Lie group that has a set of basis elements &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt;.  To describe the relation between the Lie group and Lie algebra, let &amp;lt;math&amp;gt;g\in\mathcal{G}\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;\{a_i\}\,\!&amp;lt;/math&amp;gt; be a set of parameters (which can be taken to be real).  Then an element of the Lie algebra is given by &amp;lt;math&amp;gt; \sum_i a_i\lambda_i\,\!&amp;lt;/math&amp;gt; and an element of the group written in terms of these parameters is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
g=\exp\left(-i\sum_i a_i \lambda_i\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.17}}&lt;br /&gt;
The tangent space to the origin is given by the derivative of &amp;lt;math&amp;gt;g \,\!&amp;lt;/math&amp;gt; with respect to the parameters &amp;lt;math&amp;gt; a_i\,\!&amp;lt;/math&amp;gt;.  In this way, one sees that the group is an analytic manifold.  There are several reasons why it is useful to consider the Lie algebra.  One is that it is often easier to analyze than the Lie group, and several important properties of the Lie group can be obtained from properties of the Lie algebra.  (For example, subalgebras correspond to subgroups.)&lt;br /&gt;
&lt;br /&gt;
====Representation Theory for Lie Groups====&lt;br /&gt;
&lt;br /&gt;
As with finite order groups, one of the primary objectives of this introduction to group theory is to enable one to find irreducible representations of a group from a given reducible one.  At the least, the objective should be to understand what this means, how one would go about it in principle, and how it is used in quantum physics and quantum computing.  &lt;br /&gt;
&lt;br /&gt;
Lie groups, represented by a set of matrices consisting of differentiable parameters, may also be described by matrices that are reducible to block-diagonal form with blocks that cannot be reduced further.  These irreducible blocks form irreducible representations of the group.  One may suppose that irreducible representations of Lie groups are more difficult to understand than those of finite groups because there are infinitely many matrices in the set of group elements.  This is certainly true, so one sometimes relies on the Lie algebra.  Suppose a set of elements &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; of a Lie algebra obeys a particular set of commutation relations, say&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[\lambda_i,\lambda_j] = 2i\sum_kf_{ijk}\lambda_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;f_{ijk}\,\!&amp;lt;/math&amp;gt; is some set of constants (and the factor of two is a non-standard convention).  Then any other set that obeys the same commutation relations is also a representation of the same Lie algebra.  The representation of the algebra can then give a representation of the group through exponentiation, although the representation may not be faithful.  &lt;br /&gt;
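The Pauli matrices give the standard two-dimensional representation of the su(2) algebra: with the factor-of-two convention of Equation (D.18), the structure constants are f_ijk = ε_ijk. A NumPy verification (not from the notes):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [X, Y, Z]

# Levi-Civita symbol: the structure constants f_ijk of su(2)
# in the factor-of-two convention of Equation (D.18).
def eps(i, j, k):
    return (i - j) * (j - k) * (k - i) / 2

ok = all(
    np.allclose(sigma[i] @ sigma[j] - sigma[j] @ sigma[i],
                2j * sum(eps(i, j, k) * sigma[k] for k in range(3)))
    for i in range(3) for j in range(3))
print(ok)   # True
```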
&lt;br /&gt;
Now let us suppose that there exists a similarity transformation &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; that will simultaneously block-diagonalize all elements of a group.  Then, observing that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
SgS^{-1}=S\exp\left(-i\sum_i a_i \lambda_i\right)S^{-1} = \exp\left(-i\sum_i a_i S\lambda_iS^{-1}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.19}}&lt;br /&gt;
it is clear that the same similarity transformation will block-diagonalize the elements of the algebra as well.&lt;br /&gt;
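This can be checked numerically. The sketch below (illustrative, not from the notes) restricts to a unitary similarity transformation so that Hermiticity is preserved and the exponential can be computed by eigendecomposition; the helper `expm_herm` is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def expm_herm(H):
    """exp(-i H) for Hermitian H via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

A = rng.standard_normal((3, 3))
H = A + A.T        # a real symmetric (Hermitian) "algebra element"

# A random unitary Q from the QR decomposition of a complex matrix.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# Conjugating the group element agrees with conjugating the algebra
# element and then exponentiating, as in Equation (D.19).
lhs = Q @ expm_herm(H) @ Q.conj().T
rhs = expm_herm(Q @ H @ Q.conj().T)
print(np.allclose(lhs, rhs))   # True
```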
&lt;br /&gt;
====Some Useful Relations Among Lie Algebra Elements====&lt;br /&gt;
&lt;br /&gt;
A Lie algebra will obey the commutation relations, [[#eqD.18|Equation (D.18)]].  However, since the emphasis here is the representation of groups in terms of matrices, several other useful relations will be listed.  These relations apply to all Lie algebra elements of &amp;lt;math&amp;gt;SU(d)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
We have chosen the following convention for the normalization of &lt;br /&gt;
the algebra of Hermitian matrices that represent generators of &amp;lt;math&amp;gt;SU(d)\,\!&amp;lt;/math&amp;gt;:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\text{Tr}(\lambda_i\lambda_j) = 2\delta_{ij}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.20}}&lt;br /&gt;
&lt;br /&gt;
The commutation and anti-commutation relations of the matrices &lt;br /&gt;
representing the basis for the Lie algebra can be summarized &lt;br /&gt;
by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_i \lambda_j = \frac{2}{d}\delta_{ij}\mathbb{I} + if_{ijk} \lambda_k &lt;br /&gt;
                      + d_{ijk}\lambda_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.21}}&lt;br /&gt;
where here, and throughout this section, a sum over repeated &lt;br /&gt;
indices is understood.  &lt;br /&gt;
&lt;br /&gt;
As with any Lie algebra, we have the Jacobi identity:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ilm}f_{jkl} + f_{jlm}f_{kil} + f_{klm}f_{ijl} =0,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.22}}&lt;br /&gt;
which may also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[[\lambda_i,\lambda_j],\lambda_k]+ [[\lambda_j,\lambda_k],\lambda_i] + [[\lambda_k,\lambda_i],\lambda_j]  =0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.23}}&lt;br /&gt;
There is also a Jacobi-like identity,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ilm}d_{jkl} + f_{jlm}d_{kil} + f_{klm}d_{ijl} =0,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.24}}&lt;br /&gt;
which was given by Macfarlane, et al.&lt;br /&gt;
&lt;br /&gt;
Also provided by Macfarlane, et al. are the following identities:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
d_{iik} &amp;amp;= 0, \\&lt;br /&gt;
d_{ijk}f_{ljk} &amp;amp;= 0,  \\&lt;br /&gt;
f_{ijk}f_{ljk} &amp;amp;= d\delta_{il},  \\&lt;br /&gt;
d_{ijk}d_{ljk} &amp;amp;= \frac{d^2 - 4}{d}\delta_{il},  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.25}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ijm}f_{klm} = \frac{2}{d}(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}) &lt;br /&gt;
                  + (d_{ikm}d_{jlm} - d_{jkm}d_{ilm}) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.26}}&lt;br /&gt;
and finally&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
f_{piq}f_{qjr}f_{rkp} &amp;amp;= -\left(\frac{d}{2}\right)f_{ijk},\\&lt;br /&gt;
d_{piq}f_{qjr}f_{rkp} &amp;amp;= -\left(\frac{d}{2}\right)d_{ijk},\\&lt;br /&gt;
d_{piq}d_{qjr}f_{rkp} &amp;amp;= \left(\frac{d^2 - 4}{2d}\right)f_{ijk},\\&lt;br /&gt;
d_{piq}d_{qjr}d_{rkp} &amp;amp;= \left(\frac{d^2 - 12}{2d}\right)d_{ijk}.&lt;br /&gt;
\end{align} &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.27}}&lt;br /&gt;
The proofs of these are fairly straightforward and are omitted.&lt;br /&gt;
&lt;br /&gt;
====Tensor Products of Representations====&lt;br /&gt;
&lt;br /&gt;
When one takes the tensor product of two representations, another representation results.  In general, this representation is reducible.  &lt;br /&gt;
&lt;br /&gt;
To see this, let &amp;lt;math&amp;gt; g_1,g_2,g_3 \in \mathcal{G}\,\!&amp;lt;/math&amp;gt;.  A tensor product of a group element with itself is  &amp;lt;math&amp;gt; g_1\otimes g_1 \in \mathcal{G}\otimes \mathcal{G} \,\!&amp;lt;/math&amp;gt;.  Certainly, when &amp;lt;math&amp;gt; g_1\cdot g_2 = g_3, \,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt; (g_1\otimes g_1)\cdot (g_2 \otimes g_2) = g_3 \otimes g_3\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)  Therefore, the tensor product of two representations is another representation.  However, even if each factor is an irreducible representation, one would suspect that the tensor product is a reducible representation; this turns out to be true.  The task is to find the irreducible components.&lt;br /&gt;
&lt;br /&gt;
One very important example of this is used for the addition of angular momenta.  Before revisiting the more general case, this important example is discussed.&lt;br /&gt;
&lt;br /&gt;
====Addition of Angular Momenta====&lt;br /&gt;
&lt;br /&gt;
In the theory of angular momenta, quantum states are labelled by their total angular momentum and by the z component of their angular momentum.  Let the angular momentum be given by the vector operator &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{J} = (J_x,J_y,J_z).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.26}}&lt;br /&gt;
These operators satisfy the commutation relations&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[J_i,J_j] = i\epsilon_{ijk}J_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.27}}&lt;br /&gt;
where &amp;lt;math&amp;gt;i,j,k = 1,2,\text{ or }3, \,\!&amp;lt;/math&amp;gt; and the epsilon tensor is defined in [[Appendix C - Vectors and Linear Algebra#eqC.9|Equation C.9]].  A state &amp;lt;math&amp;gt;\left| j, m\right\rangle\,\!&amp;lt;/math&amp;gt; satisfies &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
J^2\left| j, m\right\rangle = j(j+1)\hbar^2\left| j, m\right\rangle, \;\; \text{and} \;\; J_z\left| j, m\right\rangle = m\hbar\left| j, m\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.28}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
J^2 = \vec{J} \cdot \vec{J} = J_x^2 + J_y^2 + J_z^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.29}}&lt;br /&gt;
The common problem is as follows.  Given two states &amp;lt;math&amp;gt;\left| j_1, m_1\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left| j_2, m_2\right\rangle\,\!&amp;lt;/math&amp;gt;, find the total angular momentum of the two states combined.  The objective is to find a new basis,  &amp;lt;math&amp;gt;\left| j, m, j_1, j_2\right\rangle\,\!&amp;lt;/math&amp;gt;, which is expressed in terms of the old basis.  In other words, we need to find the set of numbers &amp;lt;math&amp;gt;C(j_1,j_2,j,m|m_1,m_2,m)\,\!&amp;lt;/math&amp;gt; such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left| j, m, j_1, j_2\right\rangle = \sum_{m_1,m_2} C(j_1,j_2,j,m|m_1,m_2,m) \left| j_1, m_1\right\rangle \left| j_2, m_2\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.30}}&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;C(j_1,j_2,j,m|m_1,m_2,m)\,\!&amp;lt;/math&amp;gt; are called Clebsch-Gordan coefficients, or Wigner-Clebsch-Gordan coefficients.  These not only put the tensor product of the vectors into this special form, but they also block-diagonalize the tensor products of the operators.  The most common example of this is the addition of angular momentum of two spin-1/2 particles.  The result is a triplet (spin-1 representation) and a singlet (spin-0 representation).  &lt;br /&gt;
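The spin-1/2 ⊗ spin-1/2 decomposition can be confirmed numerically (a sketch, not from the notes): building J_i = (σ_i ⊗ I + I ⊗ σ_i)/2 and diagonalizing J², one finds the eigenvalue j(j+1) = 2 three times (the triplet) and 0 once (the singlet), with ħ = 1.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

# Total spin J_i = (sigma_i x I + I x sigma_i)/2 for two spin-1/2's (hbar = 1).
J = [(np.kron(s, I) + np.kron(I, s)) / 2 for s in (X, Y, Z)]
J2 = sum(Ji @ Ji for Ji in J)

# Eigenvalues of J^2 are j(j+1): three copies of 2 (triplet, j=1)
# and one 0 (singlet, j=0).
print(sorted(np.round(np.linalg.eigvalsh(J2).real, 6).tolist()))   # [0.0, 2.0, 2.0, 2.0]
```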
&lt;br /&gt;
&lt;br /&gt;
====Concluding Remarks====&lt;br /&gt;
&lt;br /&gt;
'''To summarize''', matrix representations of a group are sets of matrices that represent the group in the sense that they follow the same multiplication law as the original group elements.  The representation may be reducible, meaning the matrices can all be block-diagonalized by a single similarity transformation such that each individual block represents the group element in a representation of its own.  If a representation (the set of matrices) cannot be transformed by a single similarity transformation into a form in which each matrix is comprised of a set of smaller blocks, then the representation is called irreducible.  If there is an isomorphism from the set of matrices to the original group, then the representation is faithful.&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_D_-_Group_Theory&amp;diff=1760</id>
		<title>Appendix D - Group Theory</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_D_-_Group_Theory&amp;diff=1760"/>
		<updated>2011-11-28T14:45:39Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Example 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;''&amp;lt;/nowiki&amp;gt;''Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty and perfection.''&amp;lt;nowiki&amp;gt;''&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Hermann Weyl'''&lt;br /&gt;
&lt;br /&gt;
====Symmetries and Groups====&lt;br /&gt;
&lt;br /&gt;
Symmetry arguments have been used widely in mathematics, physics,&lt;br /&gt;
chemistry, biology, computer science, engineering, and elsewhere.  &lt;br /&gt;
Group theory can be an invaluable organizational tool,&lt;br /&gt;
whether it is used explicitly or implicitly, in many areas of&lt;br /&gt;
science.  &lt;br /&gt;
&lt;br /&gt;
In physics, symmetry principles are often used to describe what&lt;br /&gt;
changes and what does not in a physical system undergoing some&lt;br /&gt;
particular transformation.  For example, if a knob is turned in an&lt;br /&gt;
experiment and nothing changes, then that is an invariant of the&lt;br /&gt;
system and thus indicates a symmetry.  (Of course, the trivial case&lt;br /&gt;
in which the knob has nothing to do with the experiment, say because the&lt;br /&gt;
machine carrying the knob is unplugged, should be excluded.) The objective&lt;br /&gt;
here is to explain group theory with this practical viewpoint in&lt;br /&gt;
mind; the idea is for this motivation to be kept in mind&lt;br /&gt;
throughout these notes.  Some formalism is necessary however.  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that very general tools tend, of necessity, to be &lt;br /&gt;
abstract, and group theory is no exception.  However, to reiterate, the &lt;br /&gt;
objective here is to be as concrete as possible with the emphasis on &lt;br /&gt;
physical applications.  In this regard, it is worth mentioning that, &lt;br /&gt;
directly or indirectly, [[Bibliography#Tinkham:gpthbook|Michael Tinkham's book]] &lt;br /&gt;
on group theory very much influenced these notes.  Other useful references include the Encyclopaedia of Mathematics and Hamermesh's group theory text.&lt;br /&gt;
&lt;br /&gt;
====Group Theory in Physics====&lt;br /&gt;
&lt;br /&gt;
The applications to physics are too numerous to mention here.  However, several comments&lt;br /&gt;
are in order.  First, if a system has a symmetry (often able to be determined by inspection), then it has a&lt;br /&gt;
constraint placed on it. This limits the acceptability of solutions to a problem -&lt;br /&gt;
they must satisfy the symmetry requirement.  Thus identifying&lt;br /&gt;
symmetries is an excellent problem-solving technique.  Choosing &lt;br /&gt;
coordinates is an example of such symmetry identification.  &lt;br /&gt;
&lt;br /&gt;
A set of symmetries forms a group.  To see this, suppose that, for example, elements &amp;lt;math&amp;gt;A,B,C,D,...\,\!&amp;lt;/math&amp;gt; operate on an object in such a way&lt;br /&gt;
that they do not change the object.  Most often in physics the&lt;br /&gt;
elements are matrices and the objects on which they act are vectors.&lt;br /&gt;
If a vector or set of vectors is unchanged by these operations, then&lt;br /&gt;
the vectors have a symmetry described by the action of these&lt;br /&gt;
operators.  In Example 2 the vectors are the vertices of the triangle&lt;br /&gt;
and the triangle is unchanged by the action of the group elements given in the example.  (Notice, as an example&lt;br /&gt;
of how a set of symmetries forms a group, that if the vector is &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and assuming &amp;lt;math&amp;gt;Av=v\,\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is a symmetry operation, and also assuming&lt;br /&gt;
&amp;lt;math&amp;gt;Bv=v\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;ABv = v\,\!&amp;lt;/math&amp;gt;. Thus the set is closed under multiplication, which means that the product of elements in the set is always in the set.)&lt;br /&gt;
One way to think of this is quite literal.  If a symmetry operation is&lt;br /&gt;
applied to the equilateral triangle and the triangle is still&lt;br /&gt;
equilateral with the vertices indistinguishable (assuming no labels), then the&lt;br /&gt;
operation did not change anything discernible.  &lt;br /&gt;
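The closure argument just made (if A and B each leave the object unchanged, so does AB) can be checked numerically for the triangle's rotations. A minimal sketch, using the vertex coordinates of Figure D.1; the helper names `rot`, `apply`, and `fixes_vertex_set` are ours, not from the text:

```python
import math

# Vertices of the equilateral triangle from Figure D.1.
V = [(0.0, 1 / math.sqrt(3)),
     (-0.5, -1 / (2 * math.sqrt(3))),
     (0.5, -1 / (2 * math.sqrt(3)))]

def rot(deg):
    """2x2 rotation matrix for an angle given in degrees."""
    t = math.radians(deg)
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def fixes_vertex_set(M):
    """True if M maps the vertex set onto itself (as a set of points)."""
    images = [apply(M, v) for v in V]
    return all(any(math.isclose(ix, vx, abs_tol=1e-9) and
                   math.isclose(iy, vy, abs_tol=1e-9)
                   for (vx, vy) in V)
               for (ix, iy) in images)

A, B = rot(120), rot(240)
# The product AB, computed by ordinary matrix multiplication.
AB = tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
           for i in range(2))
print(fixes_vertex_set(A), fixes_vertex_set(B), fixes_vertex_set(AB))
```

Each symmetry, and the product of any two, again sends the vertex set to itself.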
&lt;br /&gt;
It turns out that group theory has been applied with great success to&lt;br /&gt;
many areas of quantum physics: solid-state physics including&lt;br /&gt;
crystallography, nuclear physics, atomic physics, molecular physics,&lt;br /&gt;
and particle physics.  It has also been applied in classical physics&lt;br /&gt;
and relativity.  It has been especially indispensable in quantum field theory and particle physics where symmetries correspond to conserved quantities observed in experiment.  &lt;br /&gt;
&lt;br /&gt;
Some groups of infinite order, such as Lie groups, were originally&lt;br /&gt;
studied largely in order to understand the symmetries of&lt;br /&gt;
differential equations.  Groups of this type are discussed later in&lt;br /&gt;
this appendix.&lt;br /&gt;
&lt;br /&gt;
===Definitions and Examples===&lt;br /&gt;
&lt;br /&gt;
====Definition 1: Group====&lt;br /&gt;
A '''group''' &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is a set of objects &amp;lt;math&amp;gt;\{A,B,C,&lt;br /&gt;
...\}\,\!&amp;lt;/math&amp;gt; together with a composition rule between them (denoted &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; here and &lt;br /&gt;
called a product or multiplication) such that the following are satisfied:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\circ B)\circ C = A\circ (B \circ C)\,\!&amp;lt;/math&amp;gt;. (&amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; is associative.)&lt;br /&gt;
#If &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\in\mathcal{G}\,\!&amp;lt;/math&amp;gt;, then their product is &amp;lt;math&amp;gt;A\circ B \in \mathcal{G}\,\!&amp;lt;/math&amp;gt;.  (The set is closed under multiplication.)&lt;br /&gt;
#There is an element &amp;lt;math&amp;gt;\mathbb{I}\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;  such that, for all &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbb{I}A = A = A\mathbb{I}\,\!&amp;lt;/math&amp;gt;. (&amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; contains the identity element.)&lt;br /&gt;
#For all &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt; there exists an element &amp;lt;math&amp;gt;A^{-1}\in\mathcal{G}\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;AA^{-1} =  \mathbb{I} =A^{-1}A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the provided examples, the objective is to make the direct connection between a group and a set of symmetries of an object.  The reason is that ''a set of symmetries forms a group'' since it satisfies all the conditions in the definition.  The symmetries are things you can do to (i.e., operations you can perform on) a set that leave the set unchanged.   &lt;br /&gt;
&lt;br /&gt;
To see this, suppose that we operate on a set of vectors whose endpoints have a certain symmetry associated with them (for example, the vertices of a triangle).  Assume their common origin is the origin of a set of coordinate axes.  If the set of matrices is chosen properly, operating on these vectors with the matrices leaves the set of vectors unchanged, i.e. the arrows associated with the vectors still point to the same set of points.  If all possible such matrices are included in the set, then the set of matrices, or set of symmetries, forms a group.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a line segment of length 2 cm with midpoint at zero.  Suppose the end points are located at &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt; cm on the x-axis.  If the line segment were rotated &amp;lt;math&amp;gt;180^o\,\!&amp;lt;/math&amp;gt; about any line perpendicular to the segment, it would look like the same line segment.  (Let us be definite and choose the axis perpendicular to the x-y plane after choosing x and y axes.)  What this would do is exchange the two ends.  The set of points &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt; could be acted upon by an operator that&lt;br /&gt;
exchanges the two.  This rotation operation can be represented through multiplication by &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  Then there are two elements in the set of operations to consider.  The first is ''do nothing'' represented by &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;.  (This, of course, is the identity operation &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; for this ''group''.)  The other element is &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  Thus, representing&lt;br /&gt;
multiplication by &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt;, we have a group with the set &amp;lt;math&amp;gt;\{+1,-1\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and operation &amp;lt;math&amp;gt;\circ \equiv \times \,\!&amp;lt;/math&amp;gt;.  Clearly the product is associative (it is ordinary multiplication), the set contains the identity, products are either &amp;lt;math&amp;gt;+1\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; which are both in the group (indicating closure), and the inverse of &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;; all of the requirements defined above are satisfied.  In fact this is the simplest nontrivial group.&lt;br /&gt;
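All four axioms of Definition 1 can be confirmed for this two-element group by brute force; a minimal sketch (the variable names are illustrative):

```python
from itertools import product

G = [1, -1]               # the two elements of Example 1
op = lambda a, b: a * b   # ordinary multiplication

closed = all(op(a, b) in G for a, b in product(G, G))
assoc = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, G, G))
identity = [e for e in G if all(op(e, a) == a == op(a, e) for a in G)]
inverses = all(any(op(a, b) == 1 for b in G) for a in G)
print(closed, assoc, identity, inverses)
```

The identity is +1, and each element is its own inverse.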
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
The set of symmetries of an equilateral triangle can be represented in several ways.  Two that are presented here are the set of operations on vectors from the origin to the vertices and the set of permutations on three objects.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Figure D.1&amp;quot;&amp;gt;'''Figure D.1'''&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:triangle2.jpg]]&lt;br /&gt;
|}&lt;br /&gt;
Figure D.1:  An equilateral triangle with vertices in the x-y plane, &amp;lt;math&amp;gt; v_1\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(0,1/\sqrt{3})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(-1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;, and  &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Consider an equilateral triangle with its &lt;br /&gt;
center at the origin of the x-y plane and  vertices&lt;br /&gt;
placed at the following points: &amp;lt;math&amp;gt;(0,1/\sqrt{3})\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;(1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;(-1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;.  (See [[#Figure D.1|Figure D.1]].)  Now consider the&lt;br /&gt;
following operations on the triangle: a rotation of &amp;lt;math&amp;gt;0^o\,\!&amp;lt;/math&amp;gt; (do&lt;br /&gt;
nothing), a rotation of &amp;lt;math&amp;gt;120^o\,\!&amp;lt;/math&amp;gt;, a rotation of &amp;lt;math&amp;gt;240^o\,\!&amp;lt;/math&amp;gt;, and a reflection&lt;br /&gt;
about the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; axis, &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;.  There are two other reflections we could perform, labelled &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;, which are reflections through lines bisecting the angles at vertices 2 and 3 respectively, as shown in [[#Figure D.1|Figure D.1]].  These make up the set of six symmetry operations on the equilateral triangle.  &lt;br /&gt;
&lt;br /&gt;
If we take the first of these, &amp;lt;math&amp;gt;P_0\,\!&amp;lt;/math&amp;gt;, to be the original configuration (shown in [[#Figure D.1|Figure D.1]]), then each of the first three of these are a rotation from the original configuration.  Each of the last three is obtained from a reflection combined with a rotation.  To be explicit, let us consider the following operations:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_2 = \left(\begin{array}{cc} 1 &amp;amp; 0  \\ 0 &amp;amp; 1  &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
R_1 = \left(\begin{array}{cc} -1/2 &amp;amp; -\sqrt{3}/2 \\ \sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
R_2 = \left(\begin{array}{cc}  -1/2 &amp;amp; \sqrt{3}/2  \\ -\sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.1}}&lt;br /&gt;
where &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; is a rotation of the x-y plane by &amp;lt;math&amp;gt;120^o\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;  is a rotation&lt;br /&gt;
of the x-y plane by &amp;lt;math&amp;gt;240^o\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\mathbb{I}_2\,\!&amp;lt;/math&amp;gt; is a rotation of &amp;lt;math&amp;gt;0^o\,\!&amp;lt;/math&amp;gt; (the subscript on the identity indicates that the matrix is 2-dimensional, whereas the subscripts on the rotations merely label them).  In addition to these operations, two others must be included&lt;br /&gt;
to complete the set: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma_1   = \left(\begin{array}{cc} -1 &amp;amp;0 \\ 0 &amp;amp;1 \end{array}\right), \;\; &lt;br /&gt;
\sigma_2 =\sigma_1 R_1 =  \left(\begin{array}{cc} 1/2 &amp;amp; \sqrt{3}/2 \\ \sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
\sigma_3 = \sigma_1R_2 = \left(\begin{array}{cc}  1/2 &amp;amp; -\sqrt{3}/2  \\ -\sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\sigma_1 R_1\,\!&amp;lt;/math&amp;gt; is the same as &amp;lt;math&amp;gt;\sigma_1\circ R_1\,\!&amp;lt;/math&amp;gt;, but the &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; has been dropped&lt;br /&gt;
since this is ordinary matrix multiplication.  This group will be used&lt;br /&gt;
as an example for several group properties and is called &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The&lt;br /&gt;
products of these &lt;br /&gt;
elements are summarized in [[#Table D.1|Table D.1]], which is called the&lt;br /&gt;
multiplication table for the group.  The multiplication table will be&lt;br /&gt;
discussed repeatedly throughout this appendix due to its importance in&lt;br /&gt;
group theory.  It would be advisable to stare at it for some time to&lt;br /&gt;
see what patterns can be identified.  The meaning of these patterns&lt;br /&gt;
will be discussed later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;div id=&amp;quot;Table D.1&amp;quot;&amp;gt;&lt;br /&gt;
'''Table D.1: Group Multiplication Table for''' &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;20&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \downarrow\rightarrow\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Table D.1: ''Group multiplication table for the group &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The notation in the upper left corner (&amp;lt;math&amp;gt;\downarrow\rightarrow\,\!&amp;lt;/math&amp;gt;) indicates that the element in the first column is to be multiplied by the element in the first row to obtain the result.  Since the group is not abelian, i.e. the elements do not commute, the order matters.''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
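Entries of Table D.1 can be verified directly from the matrices in Eqs. (D.1) and (D.2); a sketch (here `sig1`, `sig2`, `sig3` stand for the reflections sigma_1, sigma_2, sigma_3, and the helper names are ours):

```python
import math

# Matrices of Eqs. (D.1)-(D.2).
s = math.sqrt(3) / 2
I2   = ((1.0, 0.0), (0.0, 1.0))
R1   = ((-0.5, -s), (s, -0.5))
R2   = ((-0.5, s), (-s, -0.5))
sig1 = ((-1.0, 0.0), (0.0, 1.0))

def mul(A, B):
    """Ordinary 2x2 matrix multiplication."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def eq(A, B):
    return all(math.isclose(A[i][j], B[i][j], abs_tol=1e-12)
               for i in range(2) for j in range(2))

sig2, sig3 = mul(sig1, R1), mul(sig1, R2)   # as defined in Eq. (D.2)

# Spot-check three entries of Table D.1 (row element times column element).
print(eq(mul(R1, R2), I2))       # row R_1, column R_2
print(eq(mul(sig1, sig2), R1))   # row sigma_1, column sigma_2
print(eq(mul(sig2, sig1), R2))   # order reversed gives a different element
```

The last two lines also confirm that the order of multiplication matters.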
&lt;br /&gt;
A second way to identify all possible configurations of&lt;br /&gt;
the triangle that leave the triangle looking the same is to use the positions of the vertices.  There are six possible arrangements of the vertices.  Let us label the vertices 1,2,3.  Then, reading counter-clockwise&lt;br /&gt;
from the top, we can have &amp;lt;math&amp;gt;P_0=(1,2,3)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_2=(3,1,2)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_4=(2,3,1)\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;P_1=(1,3,2)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_3=(3,2,1)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_5=(2,1,3)\,\!&amp;lt;/math&amp;gt;.  These are all of the permutations of three objects.  (In this case the three objects are the numbers 1,2,3.)  This is another way to represent the various configurations of the equilateral triangle.&lt;br /&gt;
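The permutation picture can be checked as well: composing any two of the six permutations of (1,2,3) always yields another of the six, which is the closure property. A sketch (the composition convention chosen here, apply `q` first and then `p`, is one of two equally valid conventions):

```python
from itertools import permutations

# The six configurations of the triangle as permutations of (1, 2, 3).
perms = list(permutations((1, 2, 3)))

def compose(p, q):
    """Apply q first, then p; a tuple is read as the map i -> tuple[i-1]."""
    return tuple(p[q[i - 1] - 1] for i in (1, 2, 3))

# Closure: composing any two of the six gives one of the six.
print(all(compose(p, q) in perms for p in perms for q in perms))
```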
&lt;br /&gt;
====Definition 2: Order of a Group====&lt;br /&gt;
&lt;br /&gt;
The number of elements in a group is called the '''order''' of the group.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example 1 has two elements and so has order two. Example 2 has six&lt;br /&gt;
elements, so the order of this group is six.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Abelian and Nonabelian Group====&lt;br /&gt;
&lt;br /&gt;
A group for which every element of the group commutes with every other element of the group (&amp;lt;math&amp;gt;g_1g_2 = g_2g_1,\;\;\forall g_1,g_2\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;) is called '''abelian'''.  If any two elements do not commute, the group is called '''nonabelian'''.  &lt;br /&gt;
&lt;br /&gt;
It is clear that Example 1 is an abelian group consisting of only two&lt;br /&gt;
elements &amp;lt;math&amp;gt;+1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  However, Example 2 is clearly a nonabelian&lt;br /&gt;
group as can be seen from the multiplication table.  For example&lt;br /&gt;
&amp;lt;math&amp;gt;\sigma_2R_2 = \sigma_1 \,\!&amp;lt;/math&amp;gt;, but &amp;lt;math&amp;gt;R_2\sigma_2 =\sigma_3 \neq \sigma_1 \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 4: Cyclic Group====&lt;br /&gt;
A '''cyclic group''' is a group in which every element of the group can be obtained from one element and all its distinct powers.  The particular element is called the '''generating element'''.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example 4 provides examples of cyclic groups.  &lt;br /&gt;
&lt;br /&gt;
====Definition 5: Subgroup====&lt;br /&gt;
&lt;br /&gt;
A '''subgroup''' &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is a subset of the group elements that satisfies all&lt;br /&gt;
the properties in the definition of a group under the inherited multiplication rule.&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Consider the set &amp;lt;math&amp;gt;\{0,1,2,3, \cdots, N-1\}\,\!&amp;lt;/math&amp;gt; and identify &amp;lt;math&amp;gt; N\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;.  This is written as &amp;lt;math&amp;gt;0 \equiv N\,\!&amp;lt;/math&amp;gt;.  The operation on this set will be addition.  This is the group of integers modulo &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; and is&lt;br /&gt;
denoted &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt;.  To be concrete, let us consider the group &amp;lt;math&amp;gt;\mathbb{Z}_3\,\!&amp;lt;/math&amp;gt;, consisting of &lt;br /&gt;
&amp;lt;math&amp;gt;\{0,1,2;+\}\,\!&amp;lt;/math&amp;gt;.  (When the operation could be ambiguous, it is often useful to specify it explicitly along with the members of the set.)  Let us check that this is a group.  First, addition is certainly associative.  Second, the identity is zero since &lt;br /&gt;
&amp;lt;math&amp;gt;a+0 =a\,\!&amp;lt;/math&amp;gt; for any integer &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt;.  Third, &amp;lt;math&amp;gt;1+2=3 = 0\,\!&amp;lt;/math&amp;gt; mod &amp;lt;math&amp;gt;3\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
In other words, since &amp;lt;math&amp;gt;3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; are equivalent, the sum of one and&lt;br /&gt;
two is zero which is in the set.  The order of the group is 3 (hence the subscript).  &lt;br /&gt;
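The checks just described for Z_3 (and any Z_N) can be carried out by direct enumeration; a minimal sketch:

```python
# The group Z_3: addition modulo 3 on {0, 1, 2}.
N = 3
G = range(N)
add = lambda a, b: (a + b) % N

print(all(add(a, b) in G for a in G for b in G))       # closure
print(all(add(a, 0) == a for a in G))                  # 0 is the identity
print(all(any(add(a, b) == 0 for b in G) for a in G))  # every element has an inverse
```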
&lt;br /&gt;
====Example 1 Revisited====&lt;br /&gt;
&lt;br /&gt;
Recall  [[#Example 1|Example 1]] is a group with &amp;lt;math&amp;gt;\{+1,-1\}\,\!&amp;lt;/math&amp;gt; using multiplication.  &lt;br /&gt;
This is the simplest nontrivial &lt;br /&gt;
''cyclic group'', since it is a cyclic group of order two.  &lt;br /&gt;
All elements of this group are obtained from powers&lt;br /&gt;
of &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(-1)^2 =1\,\!&amp;lt;/math&amp;gt;.  Notice that the generating&lt;br /&gt;
element is special; one cannot just take any element of the group to&lt;br /&gt;
be a generating element.&lt;br /&gt;
&lt;br /&gt;
====Example 4====&lt;br /&gt;
&lt;br /&gt;
We can represent the cyclic group of order &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
in several ways.  One we have seen is &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt; with the operation of addition.  Another is the set of elements &lt;br /&gt;
&amp;lt;math&amp;gt;\{e^{2\pi i n/N}\}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n = 0, 1, 2, 3, ..., N-1\,\!&amp;lt;/math&amp;gt; (the &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;th roots of unity) with the operation of multiplication.  Since this group can be&lt;br /&gt;
seen as consisting of the element &amp;lt;math&amp;gt;e^{2\pi i/N}\,\!&amp;lt;/math&amp;gt; and all its&lt;br /&gt;
powers, it is a cyclic group with generating element &lt;br /&gt;
&amp;lt;math&amp;gt;e^{2\pi i/N}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
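The N-th roots of unity form a cyclic group of order N under multiplication; a quick numerical check that e^{2 pi i/N} generates N distinct elements and returns to the identity after N steps (shown here for N = 5):

```python
import cmath

# N-th roots of unity under multiplication form a cyclic group of order N.
N = 5
g = cmath.exp(2j * cmath.pi / N)   # candidate generating element

powers = [g ** n for n in range(N)]
# Count distinct powers (rounded to suppress floating-point noise).
distinct = len({(round(z.real, 9), round(z.imag, 9)) for z in powers})
print(distinct)                   # all N powers are distinct
print(cmath.isclose(g ** N, 1))   # and g^N returns to the identity
```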
&lt;br /&gt;
====Example 5====&lt;br /&gt;
&lt;br /&gt;
The integers in &amp;lt;math&amp;gt;\{1,2, \cdots, N-1\}\,\!&amp;lt;/math&amp;gt; that are coprime to &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; form a group under multiplication modulo &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;.  In particular, for prime &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; the entire set &amp;lt;math&amp;gt;\{1,2, \cdots, N-1\}\,\!&amp;lt;/math&amp;gt; is a group under multiplication modulo &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
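The example sketched here, modular arithmetic under multiplication, can be verified by enumeration for a prime modulus; a sketch for p = 7 (the helper name `mulp` is ours):

```python
# Integers 1..p-1 under multiplication modulo a prime p form a group.
p = 7
G = list(range(1, p))
mulp = lambda a, b: (a * b) % p

closed = all(mulp(a, b) in G for a in G for b in G)          # closure
inverses = all(any(mulp(a, b) == 1 for b in G) for a in G)   # every a has an inverse
print(closed, inverses)
```

For a composite modulus the same check fails on the elements sharing a factor with the modulus, which is why only the residues coprime to the modulus are kept.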
&lt;br /&gt;
===Comparing Groups: Homomorphisms and Isomorphisms===&lt;br /&gt;
&lt;br /&gt;
Let us consider two groups &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; with product rules symbolized by &amp;lt;math&amp;gt;\cdot\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; respectively.  Let the elements of &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; be denoted &amp;lt;math&amp;gt;a_1,a_2, ...\,\!&amp;lt;/math&amp;gt; and the elements of &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; be denoted &amp;lt;math&amp;gt;b_1,b_2, ...\,\!&amp;lt;/math&amp;gt;  When comparing two groups to see how similar they are, the relationship among the&lt;br /&gt;
elements under the product rule is all-important.  Therefore, if a map from one set of elements to another is given by &amp;lt;math&amp;gt;f:\mathcal{G}_1\rightarrow\mathcal{G}_2\,\!&amp;lt;/math&amp;gt;, meaning &amp;lt;math&amp;gt;f(a_1) \in\mathcal{G}_2\,\!&amp;lt;/math&amp;gt;, then the map preserves the (algebraic) structure of the group if, for all&lt;br /&gt;
&amp;lt;math&amp;gt;a_i,a_j,a_k \in \mathcal{G}_1\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
a_i\cdot a_j = a_k \;\; \Rightarrow \;\; f(a_i)\circ f(a_j) = f(a_k).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.3}}  &lt;br /&gt;
(Notice that this can be true even if the map takes all of the elements &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; to the identity.)  &lt;br /&gt;
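As a concrete instance of the condition in Eq. (D.3), the determinant maps the six matrices of Example 2 onto the two-element group {+1, -1} of Example 1, and it preserves products. The map is many-to-one (three matrices map to each sign), illustrating that such a map need not be one-to-one. A sketch, with the matrix entries taken from Eqs. (D.1) and (D.2) and illustrative helper names:

```python
import math

# The six matrices of Example 2 (rotations and reflections).
s = math.sqrt(3) / 2
G = [
    ((1.0, 0.0), (0.0, 1.0)),    # identity
    ((-0.5, -s), (s, -0.5)),     # R_1
    ((-0.5, s), (-s, -0.5)),     # R_2
    ((-1.0, 0.0), (0.0, 1.0)),   # sigma_1
    ((0.5, s), (s, -0.5)),       # sigma_2
    ((0.5, -s), (-s, -0.5)),     # sigma_3
]

det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
mul = lambda A, B: tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                               for j in range(2)) for i in range(2))

# The condition of Eq. (D.3): det(AB) = det(A) det(B) for all pairs.
ok = all(math.isclose(det(mul(A, B)), det(A) * det(B))
         for A in G for B in G)
print(ok)
```

Rotations map to +1 and reflections to -1, which is exactly the group of Example 1.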
&lt;br /&gt;
====Definition 6: Homomorphism====&lt;br /&gt;
&lt;br /&gt;
If the condition [[#eqD.3|Eq.(D.3)]] is satisfied, the map is called a '''homomorphic map''' or a '''homomorphism'''.  A homomorphism &amp;lt;math&amp;gt;f\,\!&amp;lt;/math&amp;gt; satisfies the important property that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f(A\cdot B) = f(A) \circ f(B).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.4}}&lt;br /&gt;
The composition &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; can, in general, be different from &amp;lt;math&amp;gt;\cdot\,\!&amp;lt;/math&amp;gt;, but here both will be matrix multiplication unless otherwise stated.&lt;br /&gt;
&lt;br /&gt;
====Definition 7: Isomorphism====&lt;br /&gt;
&lt;br /&gt;
If a homomorphism is one-to-one (each &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; is mapped to one and only one &amp;lt;math&amp;gt;b_j\,\!&amp;lt;/math&amp;gt;) and onto (each element in &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; has an element of &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; mapped to it), then the map is called an '''isomorphic map''' or an&lt;br /&gt;
'''isomorphism'''.  &lt;br /&gt;
&lt;br /&gt;
These definitions are used repeatedly in the representation theory of groups discussed below.&lt;br /&gt;
&lt;br /&gt;
===Discussion===&lt;br /&gt;
&lt;br /&gt;
With only these few definitions it is possible to discuss many important properties of groups and some of the reasons why they are so&lt;br /&gt;
important to physics.  Let us first discuss some of the important properties of the group multiplication table.  &lt;br /&gt;
&lt;br /&gt;
====Group Multiplication Table====&lt;br /&gt;
&lt;br /&gt;
The group multiplication table specifies the structure of the group and thus identifies a group.  One example of this is when the&lt;br /&gt;
group is abelian.  For all abelian groups the table is symmetric about the diagonal.  (This follows from the fact that &amp;lt;math&amp;gt;ab=ba\,\!&amp;lt;/math&amp;gt; for abelian&lt;br /&gt;
groups.)  Another example is the presence of subgroups.  This will be illustrated in this section.   &lt;br /&gt;
&lt;br /&gt;
====Subgroups: Return to Example 2====&lt;br /&gt;
&lt;br /&gt;
In [[#Example 2|Example 2]], [[#Table D.1|Table D.1]] immediately shows that the elements&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbb{I}_2, R_1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt; form a subgroup since they are closed&lt;br /&gt;
under multiplication.  Another somewhat less obvious subgroup&lt;br /&gt;
consists of the elements &amp;lt;math&amp;gt;\mathbb{I}_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_1 \,\!&amp;lt;/math&amp;gt;.  This is a convenient&lt;br /&gt;
method for identifying subgroups, but is clearly limited to groups&lt;br /&gt;
with a relatively small order.&lt;br /&gt;
&lt;br /&gt;
====The Rearrangement Theorem====&lt;br /&gt;
&lt;br /&gt;
Notice that each group element appears in each row and each column of [[#Table D.1|Table D.1]] once and only once.  This is no coincidence, but&lt;br /&gt;
is a general property of the multiplication table for groups.  This implies that each row and column contains each and every group element&lt;br /&gt;
(due to the presence of the identity) so that each row and column is a simple rearrangement of the set of elements.  For this reason, this is&lt;br /&gt;
sometimes called the rearrangement theorem and follows directly from the uniqueness of the elements in the set.  (If there were two&lt;br /&gt;
elements in a row that were the same, then &amp;lt;math&amp;gt;ac=ab\,\!&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;a,b,c\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
But then &amp;lt;math&amp;gt;a^{-1}ac = a^{-1}ab \Rightarrow c=b\,\!&amp;lt;/math&amp;gt;, which cannot happen if&lt;br /&gt;
all elements are distinct.)&lt;br /&gt;
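The rearrangement theorem is easy to confirm computationally for any finite group; a sketch using the addition table of Z_6 (variable names are illustrative):

```python
# Cayley table of Z_6: entry [a][b] is (a + b) mod 6.
N = 6
table = [[(a + b) % N for b in range(N)] for a in range(N)]

# Each row and each column is a rearrangement of the full element set.
rows_ok = all(sorted(row) == list(range(N)) for row in table)
cols_ok = all(sorted(table[a][b] for a in range(N)) == list(range(N))
              for b in range(N))
print(rows_ok, cols_ok)
```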
&lt;br /&gt;
===A Little Representation Theory===&lt;br /&gt;
&lt;br /&gt;
A group is specified by a set of elements, its product rule, and the&lt;br /&gt;
relations among the elements of the group under the product rule.  &lt;br /&gt;
For finite order groups the group multiplication table is how one &lt;br /&gt;
identifies a group or shows that two groups are homomorphic&lt;br /&gt;
(explicitly or not).  &lt;br /&gt;
&lt;br /&gt;
====Definition 8: Representation====&lt;br /&gt;
&lt;br /&gt;
A '''matrix representation''' of an abstract group is&lt;br /&gt;
any set of matrices onto which the elements of the abstract group can be mapped homomorphically.  &lt;br /&gt;
&lt;br /&gt;
More generally, if there is a homomorphic map from the set of abstract group elements onto a set of operators which, with their own combination rule (multiplication rule), satisfies the group axioms, then the operators form a representation of the group.  (This includes preserving products as described in [[#Definition 6: Homomorphism|Section 3.1]].)&lt;br /&gt;
&lt;br /&gt;
For our purposes, it is very important to note that a&lt;br /&gt;
set of group elements can always be represented by a set of matrices&lt;br /&gt;
so that we may restrict our attention to matrix representations.  &lt;br /&gt;
This, along with ordinary &lt;br /&gt;
matrix multiplication for the product rule, provides a way to represent&lt;br /&gt;
any group.  This is true for groups that have a finite order as well&lt;br /&gt;
as infinite order (discussed later).  &lt;br /&gt;
&lt;br /&gt;
Note that a representation is a ''homomorphism'' that can be a many-to-one map.  If it is an isomorphism, the representation is said to be '''faithful'''.  If, however, all matrices are the identity matrix, then all group elements are mapped to the identity and the multiplication relations (in the group multiplication table) are preserved; this representation is sometimes called the ''trivial representation''.  This is always a valid, but not very informative and certainly not faithful, representation of any group.  &lt;br /&gt;
&lt;br /&gt;
As will be shown in this first example, there are different sets of matrices that can represent the same group.  This example will provide motivation for what follows.&lt;br /&gt;
&lt;br /&gt;
====Example 6====&lt;br /&gt;
&lt;br /&gt;
Let us consider an example of the representation of the group from&lt;br /&gt;
[[#Example 2|Example 2]].  This is a group of operations that will&lt;br /&gt;
take any permutation of the vertices to any other permutation.  This&lt;br /&gt;
is also the set of permutations of three objects.  This group is often&lt;br /&gt;
denoted &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The set of matrices representing the&lt;br /&gt;
rotations, reflection, and rotations combined with reflection provides&lt;br /&gt;
one way of representing this group.  Another way to represent this&lt;br /&gt;
group is to use &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrices rather than the&lt;br /&gt;
&amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices given in the example.  Let us&lt;br /&gt;
consider the following set of matrices:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;1&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_2 = \left(\begin{array}{ccc} 0&amp;amp;0&amp;amp;1 \\ 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;1&amp;amp;0 \end{array}\right), \\&lt;br /&gt;
P_4 = \left(\begin{array}{ccc} 0&amp;amp;1&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \\ 1&amp;amp;0&amp;amp;0 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_1 = \left(\begin{array}{ccc} 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \\ 0&amp;amp;1&amp;amp;0 \end{array}\right),\\&lt;br /&gt;
P_3 = \left(\begin{array}{ccc} 0&amp;amp;0&amp;amp;1 \\ 0&amp;amp;1&amp;amp;0 \\ 1&amp;amp;0&amp;amp;0 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_5 = \left(\begin{array}{ccc} 0&amp;amp;1&amp;amp;0 \\ 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \end{array}\right). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|D.5}}&lt;br /&gt;
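These six permutation matrices can be checked for closure by brute force.  The sketch below (our illustration; the names `I3`, `P1`, ..., `P5` mirror Equation (D.5)) verifies that the product of any two matrices in the set is again in the set:

```python
# A sketch checking that the six 3x3 permutation matrices of Eq. (D.5)
# are closed under matrix multiplication, as required for a
# representation of S_3.
from itertools import product

I3 = ((1,0,0),(0,1,0),(0,0,1))
P2 = ((0,0,1),(1,0,0),(0,1,0))
P4 = ((0,1,0),(0,0,1),(1,0,0))
P1 = ((1,0,0),(0,0,1),(0,1,0))
P3 = ((0,0,1),(0,1,0),(1,0,0))
P5 = ((0,1,0),(1,0,0),(0,0,1))
group = {I3, P1, P2, P3, P4, P5}

def matmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

# Closure: every product of two elements is again in the set,
# and the set has order 6 = |S_3|.
assert all(matmul(A, B) in group for A, B in product(group, repeat=2))
assert len(group) == 6
```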
Clearly, when these matrices act on a column vector, labelling the&lt;br /&gt;
vertices,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{c} 1 \\ 2 \\ 3 \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.6}}&lt;br /&gt;
the result is one of the permutations of three objects.  These&lt;br /&gt;
permutations correspond to the same actions as the &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices&lt;br /&gt;
given in [[#Example 2|Example 2]] above.  Therefore, these two sets of matrices&lt;br /&gt;
represent the ''same'' group, &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  These representations are clearly&lt;br /&gt;
different; in fact, the dimensions of the matrices representing the&lt;br /&gt;
group are different for the two representations.  There are &lt;br /&gt;
other representations that can be immediately constructed.   Consider&lt;br /&gt;
a set of matrices like the following:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbb{I}_5 = \left(\begin{array}{cc} \mathbb{I}_3&amp;amp;0 \\ 0&amp;amp;\mathbb{I}_2 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_2 = \left(\begin{array}{cc} P_2&amp;amp;0 \\ 0&amp;amp;R_1  \end{array}\right), \\&lt;br /&gt;
g_4 = \left(\begin{array}{cc} P_4&amp;amp;0 \\ 0&amp;amp;R_2  \end{array}\right), \; \;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_1 = \left(\begin{array}{cc} P_1&amp;amp;0 \\ 0&amp;amp; \sigma_1 \end{array}\right), \\&lt;br /&gt;
g_3 = \left(\begin{array}{cc} P_3&amp;amp;0 \\ 0&amp;amp;\sigma_2 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_5 = \left(\begin{array}{cc} P_5&amp;amp;0 \\ 0&amp;amp; \sigma_3  \end{array}\right). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|D.7}}&lt;br /&gt;
This set of matrices is said to be block-diagonal since it only has&lt;br /&gt;
non-zero elements in blocks along the diagonal.  The &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; represents a&lt;br /&gt;
block of zeroes which is either &amp;lt;math&amp;gt;3\times 2\,\!&amp;lt;/math&amp;gt; (upper right) or &amp;lt;math&amp;gt;2\times&lt;br /&gt;
3\,\!&amp;lt;/math&amp;gt; (lower left).  This set of matrices clearly satisfies the same multiplication relations as the sets given above,&lt;br /&gt;
(&amp;lt;math&amp;gt;\{\mathbb{I}_3,P_1,P_2,P_3,P_4,P_5\}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\{\mathbb{I}_2, R_1, R_2, \sigma_1, \sigma_2, \sigma_3\}\,\!&amp;lt;/math&amp;gt;), since the matrices multiply in&lt;br /&gt;
blocks.  These matrices obey the same multiplication table as the group&lt;br /&gt;
elements and thus form an isomorphic set.  Therefore this is another&lt;br /&gt;
representation of the group &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;, different from either of the&lt;br /&gt;
two representations appearing in the subblocks along the diagonal since it is a&lt;br /&gt;
combination of the two.&lt;br /&gt;
&lt;br /&gt;
====Definition 9: Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; be an invertible matrix and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be any matrix.  In these notes, by '''similarity transformation'''  we mean a transformation of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;M^\prime\,\!&amp;lt;/math&amp;gt; that looks like &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;M^\prime = SMS^{-1}.\,\!&amp;lt;/math&amp;gt;|D.8}}&lt;br /&gt;
We say the matrices &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M^\prime\,\!&amp;lt;/math&amp;gt; are similar matrices.  &lt;br /&gt;
&lt;br /&gt;
The importance of similarity transformations for representation theory is that they leave matrix equations unchanged.  Suppose &amp;lt;math&amp;gt;A=BC \,\!&amp;lt;/math&amp;gt;.  Then defining &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;A=BC \; \Rightarrow A^\prime=B^\prime C^\prime\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
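This invariance of matrix equations is easy to verify numerically.  Below is a minimal sketch (with an arbitrarily chosen invertible &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices of our own choosing) checking that &amp;lt;math&amp;gt;A=BC\,\!&amp;lt;/math&amp;gt; implies &amp;lt;math&amp;gt;A^\prime=B^\prime C^\prime\,\!&amp;lt;/math&amp;gt;:

```python
# A numerical sketch that a similarity transformation preserves matrix
# equations: if A = BC, then A' = B'C' where M' = S M S^{-1}.

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

B = [[1, 2], [3, 4]]
C = [[0, 1], [1, 1]]
A = matmul(B, C)              # so A = BC by construction
S = [[2, 1], [1, 1]]          # invertible: det = 1
Sinv = inv2(S)

conj = lambda M: matmul(matmul(S, M), Sinv)
Ap, Bp, Cp = conj(A), conj(B), conj(C)

# The transformed matrices satisfy the same equation: A' = B'C'.
assert all(abs(Ap[i][j] - matmul(Bp, Cp)[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```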
&lt;br /&gt;
For more discussion on similarity transformations, see [[Appendix C - Vectors and Linear Algebra|Appendix C]], especially [[Appendix C - Vectors and Linear Algebra#The Trace|Section 3.5]], [[Appendix C - Vectors and Linear Algebra#The Trace|Section 3.6]], and  [[Appendix C - Vectors and Linear Algebra#The Trace|Section 5.1]].&lt;br /&gt;
&lt;br /&gt;
====Example 6 Continued====&lt;br /&gt;
&lt;br /&gt;
Example 6 is a non-trivial problem even though it appears&lt;br /&gt;
otherwise.  The way to show this is to&lt;br /&gt;
perform a similarity transformation, &amp;lt;math&amp;gt;g&lt;br /&gt;
\rightarrow S g S^{-1}\,\!&amp;lt;/math&amp;gt;, on all elements &amp;lt;math&amp;gt;g\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the group.  Since &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; is any invertible matrix,&lt;br /&gt;
it could mix all rows and columns.  This would make it very difficult to identify the block-diagonal form or even know that it exists unless some other tools are used.&lt;br /&gt;
&lt;br /&gt;
Furthermore, given a set of matrices that are known to form a representation of the group, it is non-trivial to find the similarity transformation that will simultaneously block-diagonalize all of these matrices to enable the identification of irreducible blocks.&lt;br /&gt;
&lt;br /&gt;
====Equivalent Representations====&lt;br /&gt;
&lt;br /&gt;
Two representations &amp;lt;math&amp;gt;D^{(1)}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D^{(2)} \,\!&amp;lt;/math&amp;gt; are '''equivalent''' if and only if there is an invertible matrix &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;D^{(1)} = SD^{(2)}S^{-1} \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We will only consider matrix representations.  In this case, the matrices will act on a vector space &amp;lt;math&amp;gt;\mathcal{V}\,\!&amp;lt;/math&amp;gt; called the '''representation space.'''&lt;br /&gt;
&lt;br /&gt;
===Miscellaneous Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 10: Stabilizer====&lt;br /&gt;
&lt;br /&gt;
The '''stabilizer''' of an element &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; of a set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; is the subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that leaves the element &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; fixed: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{S} = \{S\in \mathcal{G}\,|\,Sm=m\}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.9}}&lt;br /&gt;
The stabilizer of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; is also called the '''isotropy group''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, the '''isotropy subgroup''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, the '''stationary subgroup''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, or sometimes in physics, '''little group''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Centralizer====&lt;br /&gt;
&lt;br /&gt;
The '''centralizer''' subgroup of a group consists of elements of the group that commute with all elements of a certain set.&lt;br /&gt;
&lt;br /&gt;
====Definition 12: Pauli Group====&lt;br /&gt;
The '''Pauli Group''' on &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits, denoted &amp;lt;math&amp;gt;\mathcal{P}_n\,\!&amp;lt;/math&amp;gt;, is the set of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-fold tensor products of the Pauli matrices &amp;lt;math&amp;gt;\mathbb{I}, X, Y, Z\,\!&amp;lt;/math&amp;gt; along with the coefficients &amp;lt;math&amp;gt;\pm 1,\pm i\,\!&amp;lt;/math&amp;gt;.  It is defined here due to its importance for quantum error correcting codes; the factors &amp;lt;math&amp;gt;\pm 1,\pm i\,\!&amp;lt;/math&amp;gt; are required for the closure property in the definition of a group.&lt;br /&gt;
&lt;br /&gt;
====Properties of the Pauli Group====&lt;br /&gt;
&lt;br /&gt;
Let us consider the Pauli group for 2 qubits with the tensor product symbols omitted.  The following are elements:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\mathbb{I},\mathbb{I}X,\mathbb{I}Y,\mathbb{I}Z,X\mathbb{I},XX,XY,XZ,Y\mathbb{I},YX,YY,YZ,Z\mathbb{I},ZX,ZY,ZZ,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.10}}&lt;br /&gt;
as are all of these elements multiplied by &amp;lt;math&amp;gt;-1,\,\!&amp;lt;/math&amp;gt; and all of these elements multiplied by &amp;lt;math&amp;gt;i,\,\!&amp;lt;/math&amp;gt; as well as all of these elements multiplied by &amp;lt;math&amp;gt;-i.\,\!&amp;lt;/math&amp;gt;  Thus there are &amp;lt;math&amp;gt;4^3\,\!&amp;lt;/math&amp;gt; total elements of the group for two qubits.  In general there are &amp;lt;math&amp;gt;4\cdot 4^n\,\!&amp;lt;/math&amp;gt; elements for the Pauli group for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits.&lt;br /&gt;
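The counting above can be reproduced by brute-force enumeration.  The following sketch (our illustration; elements are represented as phase-plus-letter strings rather than matrices) confirms the order &amp;lt;math&amp;gt;4\cdot 4^n\,\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n=2\,\!&amp;lt;/math&amp;gt;:

```python
# A sketch enumerating the two-qubit Pauli group: 16 tensor-product
# strings times 4 phase factors gives 4^3 = 64 elements, and in
# general 4 * 4^n elements for n qubits.
from itertools import product

paulis = ["I", "X", "Y", "Z"]
phases = ["+1", "-1", "+i", "-i"]
n = 2
group = {(ph,) + s for ph in phases for s in product(paulis, repeat=n)}
assert len(group) == 4 * 4**n          # 64 elements for n = 2
```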
&lt;br /&gt;
One of the nice and interesting properties of the Pauli group is that every pair, say &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;, of elements of the Pauli group either commutes, &amp;lt;math&amp;gt;[A,B]= AB-BA =0\,\!&amp;lt;/math&amp;gt;, or anti-commutes, &amp;lt;math&amp;gt;\{A,B\} = AB+BA =0\,\!&amp;lt;/math&amp;gt;.  This turns out to be very useful.  &lt;br /&gt;
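The commute-or-anticommute property can be verified directly for two qubits.  The sketch below (ours; `kron` and `matmul` are small helper functions written for this check) tests every pair of the 16 tensor products, since the phase factors do not affect commutation:

```python
# A numerical sketch that every pair of two-qubit Pauli tensor products
# either commutes or anti-commutes.  Pure Python with complex entries.
from itertools import product

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    # Tensor (Kronecker) product of an n x n and an m x m matrix.
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

ops = [kron(a, b) for a, b in product([I, X, Y, Z], repeat=2)]
for A, B in product(ops, repeat=2):
    AB, BA = matmul(A, B), matmul(B, A)
    commutes = all(AB[i][j] == BA[i][j]
                   for i in range(4) for j in range(4))
    anticommutes = all(AB[i][j] == -BA[i][j]
                       for i in range(4) for j in range(4))
    assert commutes or anticommutes
```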
&lt;br /&gt;
Another notation for [[#eqD.10|Equation (D.10)]] is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I},X_2,Y_2,Z_2,X_1,X_1X_2,X_1Y_2,X_1Z_2,Y_1,Y_1X_2,Y_1Y_2,Y_1Z_2,Z_1,Z_1X_2,Z_1Y_2,Z_1Z_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.11}}&lt;br /&gt;
Clearly this index notation has an advantage for large products.  It also enables us to immediately see the weight of an operator.&lt;br /&gt;
&lt;br /&gt;
====Definition 13: Weight of an Operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This definition is most often used in the context of the Pauli Group.  Its importance is seen in quantum error correcting codes.&lt;br /&gt;
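Using the index notation, the weight can be computed by a one-line count.  A sketch (ours; operators are written as letter strings such as `'IXZI'`):

```python
# The weight of a Pauli product is the number of non-identity
# tensor factors.

def weight(pauli_string):
    """Number of non-identity factors, e.g. 'IXZI' has weight 2."""
    return sum(1 for p in pauli_string if p != "I")

assert weight("IIII") == 0
assert weight("IXZI") == 2
assert weight("XYZZ") == 4
```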
&lt;br /&gt;
====Definition 14: Generators of a Group====&lt;br /&gt;
&lt;br /&gt;
Let us consider a discrete group (or subgroup of a larger group).  There exists a subset of the group elements that will give all of the (sub)group elements through multiplication.  The elements in this subset are called '''generators''' of the group.  &lt;br /&gt;
&lt;br /&gt;
Note that the set of generators is not unique.  &lt;br /&gt;
&lt;br /&gt;
The generators are a very convenient set to use because it is a much smaller set than the whole group and many properties of the group can be discovered using only the generators.  For example, if every generator of a subgroup acts on an object and leaves it invariant, then every element of the subgroup will also leave it invariant, since every element is given by a product of generators.  Thus one only needs to check whether or not the generators leave an object invariant.  &lt;br /&gt;
&lt;br /&gt;
One example is the stabilizer subgroup where a set of generators stabilizes, or leaves invariant, the code words of the stabilizer code.&lt;br /&gt;
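Generating a group from its generators amounts to closing a set under the product rule.  The sketch below (our illustration, with permutations written as tuples of images) generates all six elements of &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt; from just two generators, a transposition and a 3-cycle:

```python
# A sketch of generating a finite group from a set of generators by
# repeated multiplication: the transposition (2,1,3) and the 3-cycle
# (2,3,1) generate all 6 elements of S_3.

def compose(p, q):
    # Apply q first, then p: (p o q)(i) = p(q(i)); permutations are
    # tuples giving the images of 1, 2, 3.
    return tuple(p[q[i] - 1] for i in range(3))

def generate(generators):
    group = set(generators)
    while True:
        new = {compose(a, b) for a in group for b in group} - group
        if not new:                 # closed: no new products appear
            return group
        group |= new

S3 = generate({(2, 1, 3), (2, 3, 1)})
assert len(S3) == 6
```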
&lt;br /&gt;
====Definition 15: Normalizer====&lt;br /&gt;
&lt;br /&gt;
The '''normalizer''' of a set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; is the subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that leaves the set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; fixed: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{S} = \{S\in \mathcal{G}\,|\,S\mathcal{M}=\mathcal{M}\}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.12}}&lt;br /&gt;
Note the difference from the centralizer, with which the normalizer should not be confused: the centralizer leaves ''every element'' of the set fixed, whereas elements of the normalizer may move elements around within the set.  The normalizer contains the centralizer as a special case.&lt;br /&gt;
&lt;br /&gt;
====Definition 16: Coset====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; be a subgroup and &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; be an element of the group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt;.  The left '''coset''' is a subset of the group &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G\mathcal{S} = \{GS|S\in\mathcal{S} \}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.13}}&lt;br /&gt;
&lt;br /&gt;
One can similarly define the right coset.  &lt;br /&gt;
&lt;br /&gt;
The importance of cosets is that they partition the group in a particular way.  If there is another coset, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
K\mathcal{S} = \{KS|S\in\mathcal{S} \},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.14}}&lt;br /&gt;
then either &amp;lt;math&amp;gt;G\mathcal{S}=K\mathcal{S}\,\!&amp;lt;/math&amp;gt; or they are disjoint sets, having no element in common.  (This is because &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is a subgroup.  You could multiply by an element to show they are the same set.)&lt;br /&gt;
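The coset partition can be checked explicitly for &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt; with the order-2 subgroup generated by a transposition.  A sketch (ours; permutations as tuples of images):

```python
# A sketch of the coset partition for an order-2 subgroup of S_3:
# any two left cosets are either identical or disjoint, and together
# they partition the group.
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are tuples of images of 1,2,3.
    return tuple(p[q[i] - 1] for i in range(3))

S3 = set(permutations((1, 2, 3)))
H = {(1, 2, 3), (2, 1, 3)}            # identity and one transposition

cosets = {frozenset(compose(g, h) for h in H) for g in S3}
assert len(cosets) == 3               # |S3| / |H| = 3 distinct cosets
# The cosets cover the group, and their sizes add up to |S3|,
# so they are pairwise disjoint.
assert set().union(*cosets) == S3
assert sum(len(c) for c in cosets) == len(S3)
```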
&lt;br /&gt;
===Infinite Order Groups: Lie Groups===&lt;br /&gt;
&lt;br /&gt;
All of the examples presented so far have been groups of finite order.  Groups of infinite order can be described with one or more continuous parameters.  Groups that are differentiable with respect to those parameters are called ''Lie groups''.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Definition 17: Lie Group====&lt;br /&gt;
&lt;br /&gt;
A '''Lie group''' is a group that is also a differentiable manifold.  (See for example [[Bibliography#Cecile:book|Analysis, Manifolds, and Physics]]).  &lt;br /&gt;
&lt;br /&gt;
In this section, several examples of Lie groups are given.  In physics these groups correspond to a continuous set of symmetries, whereas the groups of finite order correspond to a discrete set of symmetries.&lt;br /&gt;
&lt;br /&gt;
====Example 7====&lt;br /&gt;
&lt;br /&gt;
The Lie group most often used as the introductory example is the group consisting of the set &amp;lt;math&amp;gt;e^{i\theta}\,\!&amp;lt;/math&amp;gt; for all &lt;br /&gt;
&amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  This group has an infinite number of elements (i.e. an infinite order) and one parameter, &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  The group is also a differentiable manifold---a circle.  Notice that this group is also isomorphic to the set of matrices &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
       \cos \theta &amp;amp; -\sin \theta \\&lt;br /&gt;
       \sin \theta &amp;amp; \cos \theta &lt;br /&gt;
\end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.15}}&lt;br /&gt;
If this matrix were to act on a unit vector in the x-y plane, it would rotate that vector around in a circle; after &amp;lt;math&amp;gt;2\pi\,\!&amp;lt;/math&amp;gt;, the tip of the vector would sweep out a circle of unit radius.&lt;br /&gt;
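The isomorphism between the phases &amp;lt;math&amp;gt;e^{i\theta}\,\!&amp;lt;/math&amp;gt; and the rotation matrices of Equation (D.15) can be checked numerically: in both cases composition adds the angles.  A sketch (ours):

```python
# A sketch of the isomorphism between the phases e^{i theta} and the
# 2x2 rotation matrices of Eq. (D.15): composing two rotations adds
# the angles, just as multiplying two phases adds the exponents.
import cmath, math

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 1.9
# Phases multiply by adding angles...
assert abs(cmath.exp(1j*a) * cmath.exp(1j*b) - cmath.exp(1j*(a+b))) < 1e-12
# ...and so do the corresponding rotation matrices.
R = matmul(rot(a), rot(b))
assert all(abs(R[i][j] - rot(a+b)[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```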
&lt;br /&gt;
====Example 8====&lt;br /&gt;
&lt;br /&gt;
Another example of a Lie group, and one of the most important for quantum information, is the set of complex &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices&lt;br /&gt;
that satisfy&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = \mathbb{I} = U U^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.16}}&lt;br /&gt;
This group is called &amp;lt;math&amp;gt;U(2)\,\!&amp;lt;/math&amp;gt;  and is the set of ''unitary'' &lt;br /&gt;
&amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices (hence the &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;).  Notice that the determinant&lt;br /&gt;
of any element of this set is &amp;lt;math&amp;gt;e^{i\alpha}\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; is a real number, since  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
1 = \det(\mathbb{I}) = \det(U U^\dagger) = \det(U)\det(U^\dagger) &lt;br /&gt;
  = \det(U)(\det(U))^*. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.17}}&lt;br /&gt;
&lt;br /&gt;
There is a subgroup of this group that is often considered---the subgroup with determinant one.  This group is denoted &amp;lt;math&amp;gt;SU(2)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and is known as the ''special unitary group''.  The term unitary refers to the fact that &amp;lt;math&amp;gt;U^\dagger U = \mathbb{I} = UU^\dagger\,\!&amp;lt;/math&amp;gt;, and the &amp;quot;S&amp;quot; for special indicates that it has determinant one.&lt;br /&gt;
&lt;br /&gt;
====Example 9====&lt;br /&gt;
&lt;br /&gt;
One can immediately generalize the unitary and special unitary groups&lt;br /&gt;
to &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; matrices.  These are denoted &amp;lt;math&amp;gt;U(N)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;SU(N)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
respectively.  In quantum computing, an important family of unitary groups is &amp;lt;math&amp;gt;U(2^n)\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of qubits.  This is the group of all possible unitary transformations on a set of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits.&lt;br /&gt;
&lt;br /&gt;
====Example 10====&lt;br /&gt;
&lt;br /&gt;
The complex General Linear group is the set of invertible &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; matrices with complex numbers as entries.  It is denoted &amp;lt;math&amp;gt;GL(N,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===More Representation Theory===&lt;br /&gt;
&lt;br /&gt;
In physics we are most often concerned with linear representations of groups that use linear operators to represent group elements; we represent these operators with matrices.  In this Appendix, the focus is entirely on these types of representations, although this is not always stated explicitly.  Although these comments have been made above for finite groups, they are worth reiterating due to their importance and because they also apply to infinite order groups, such as Lie groups.  Furthermore, definitions introduced for finite order groups are also applicable to Lie groups.&lt;br /&gt;
&lt;br /&gt;
Thus the previous discussion of representation theory applies to the representation of Lie groups.  A representation of a group can be &amp;quot;reduced&amp;quot; to block-diagonal form.  When these blocks cannot be further reduced, the blocks are called &amp;quot;irreducible&amp;quot; and make up &amp;quot;irreducible representations.&amp;quot;  Our study of representation theory is largely concerned with these irreducible blocks and how to find them.  &lt;br /&gt;
&lt;br /&gt;
Clearly, a set of matrices that is block-diagonalizable but has been acted upon by a highly non-trivial &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; may well represent a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that is also represented by sets of matrices with many different dimensions and many different block-diagonal forms.  Therefore finding irreducible blocks and the similarity transformation that simultaneously block-diagonalizes all matrices of a given representation is highly non-trivial.  &lt;br /&gt;
&lt;br /&gt;
Before discussing the representation of Lie groups, there is another definition that is quite helpful.  &lt;br /&gt;
&lt;br /&gt;
====The Lie Algebra of a Lie Group====&lt;br /&gt;
&lt;br /&gt;
The Lie algebra of a Lie group is defined as the set of left-invariant vector fields on the manifold of the Lie group.  For our purposes, the Lie algebra will be described by the basis elements of the tangent space to the origin of the group that is isomorphic to the set of left-invariant vector fields.  To see how to relate the group and algebra and to see how this is useful, let us suppose that there is a Lie algebra corresponding to a Lie group that has a set of basis elements &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt;.  To describe the relation between the Lie group and Lie algebra, let &amp;lt;math&amp;gt;g\in\mathcal{G}\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;\{a_i\}\,\!&amp;lt;/math&amp;gt; be a set of parameters (which can be taken to be real).  Then an element of the Lie algebra is given by &amp;lt;math&amp;gt; \sum_i a_i\lambda_i\,\!&amp;lt;/math&amp;gt; and an element of the group written in terms of these parameters is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
g=\exp\left(-i\sum_i a_i \lambda_i\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.17}}&lt;br /&gt;
The tangent space to the origin is given by the derivative of &amp;lt;math&amp;gt;g \,\!&amp;lt;/math&amp;gt; with respect to the parameters &amp;lt;math&amp;gt; a_i\,\!&amp;lt;/math&amp;gt;.  In this way, one sees that the group is an analytic manifold.  There are several reasons why it is useful to consider the Lie algebra.  One is that it is often easier to analyze than the Lie group, and several important properties of the Lie group can be obtained from properties of the Lie algebra.  (For example, subalgebras correspond to subgroups.)&lt;br /&gt;
&lt;br /&gt;
====Representation Theory for Lie Groups====&lt;br /&gt;
&lt;br /&gt;
As with finite order groups, one of the primary objectives of this introduction to group theory is to enable one to find irreducible representations of a group from a given reducible one.  At the least, the objective should be to understand what this means, how one would go about it in principle, and how it is used in quantum physics and quantum computing.  &lt;br /&gt;
&lt;br /&gt;
Lie groups, represented by a set of matrices depending on differentiable parameters, may also be described by matrices that are reducible to block-diagonal form with blocks that cannot be reduced further.  These irreducible blocks form irreducible representations of the group.  One might suppose that irreducible representations of Lie groups are more difficult to understand than those of finite groups due to the fact that there are an infinite number of matrices in the set of group elements.  This is certainly true, so one sometimes relies on the Lie algebra.  Suppose a set of elements &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; of a Lie algebra obeys a particular set of commutation relations, say&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[\lambda_i,\lambda_j] = 2i\sum_kf_{ijk}\lambda_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;f_{ijk}\,\!&amp;lt;/math&amp;gt; is some set of constants (and the factor of two is a non-standard convention).  Then any other set that obeys the same commutation relations is also a representation of the same Lie algebra.  The representation of the algebra can then give a representation of the group through exponentiation, although the representation may not be faithful.  &lt;br /&gt;
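For &amp;lt;math&amp;gt;SU(2)\,\!&amp;lt;/math&amp;gt; the basis elements &amp;lt;math&amp;gt;\lambda_i\,\!&amp;lt;/math&amp;gt; can be taken to be the Pauli matrices, and the structure constants in Equation (D.18) are then &amp;lt;math&amp;gt;f_{ijk}=\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt;.  The sketch below (ours) verifies these commutation relations directly:

```python
# A sketch checking that the Pauli matrices satisfy the commutation
# relations of Eq. (D.18) with f_ijk = epsilon_ijk in the factor-of-two
# convention used in the text: [sigma_i, sigma_j] = 2i sum_k eps_ijk sigma_k.
from itertools import product

sigma = [
    [[0, 1], [1, 0]],        # sigma_1
    [[0, -1j], [1j, 0]],     # sigma_2
    [[1, 0], [0, -1]],       # sigma_3
]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2.
    return (i - j) * (j - k) * (k - i) // 2

for i, j in product(range(3), repeat=2):
    comm = [[matmul(sigma[i], sigma[j])[a][b] -
             matmul(sigma[j], sigma[i])[a][b] for b in range(2)]
            for a in range(2)]
    rhs = [[2j * sum(eps(i, j, k) * sigma[k][a][b] for k in range(3))
            for b in range(2)] for a in range(2)]
    assert all(abs(comm[a][b] - rhs[a][b]) < 1e-12
               for a in range(2) for b in range(2))
```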
&lt;br /&gt;
Now let us suppose that there exists a similarity transformation &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; that will simultaneously block-diagonalize all elements of a group.  Then, observing that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
SgS^{-1}=S\exp\left(-i\sum_i a_i \lambda_i\right)S^{-1} = \exp\left(-i\sum_i a_i S\lambda_iS^{-1}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.19}}&lt;br /&gt;
it is clear that the same similarity transformation will block-diagonalize the elements of the algebra as well.&lt;br /&gt;
&lt;br /&gt;
====Some Useful Relations Among Lie Algebra Elements====&lt;br /&gt;
&lt;br /&gt;
A Lie algebra will obey the commutation relations, [[#eqD.18|Equation (D.18)]].  However, since the emphasis here is on the representation of groups in terms of matrices, several other useful relations will be listed.  These relations apply to the Lie algebra elements of &amp;lt;math&amp;gt;SU(d)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
We have chosen the following convention for the normalization of &lt;br /&gt;
the algebra of Hermitian matrices that represent generators of &amp;lt;math&amp;gt;SU(d)\,\!&amp;lt;/math&amp;gt;:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\text{Tr}(\lambda_i\lambda_j) = 2\delta_{ij}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.20}}&lt;br /&gt;
&lt;br /&gt;
The commutation and anti-commutation relations of the matrices &lt;br /&gt;
representing the basis for the Lie algebra can be summarized &lt;br /&gt;
by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_i \lambda_j = \frac{2}{d}\delta_{ij}\mathbb{I} + if_{ijk} \lambda_k &lt;br /&gt;
                      + d_{ijk}\lambda_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.21}}&lt;br /&gt;
where here, and throughout this section, a sum over repeated &lt;br /&gt;
indices is understood.  &lt;br /&gt;
&lt;br /&gt;
As with any Lie algebra, we have the Jacobi identity:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ilm}f_{jkl} + f_{jlm}f_{kil} + f_{klm}f_{ijl} =0,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.22}}&lt;br /&gt;
which may also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[[\lambda_i,\lambda_j],\lambda_k]+ [[\lambda_j,\lambda_k],\lambda_i] + [[\lambda_k,\lambda_i],\lambda_j]  =0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.23}}&lt;br /&gt;
There is also a Jacobi-like identity,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ilm}d_{jkl} + f_{jlm}d_{kil} + f_{klm}d_{ijl} =0,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.24}}&lt;br /&gt;
which was given by Macfarlane ''et al.''  &lt;br /&gt;
&lt;br /&gt;
Also provided by Macfarlane ''et al.'' are the following identities:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
d_{iik} &amp;amp;= 0, \\&lt;br /&gt;
d_{ijk}f_{ljk} &amp;amp;= 0,  \\&lt;br /&gt;
f_{ijk}f_{ljk} &amp;amp;= d\delta_{il},  \\&lt;br /&gt;
d_{ijk}d_{ljk} &amp;amp;= \frac{d^2 - 4}{d}\delta_{il},  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.25}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ijm}f_{klm} = \frac{2}{d}(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}) &lt;br /&gt;
                  + (d_{ikm}d_{jlm} - d_{jkm}d_{ilm}) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.26}}&lt;br /&gt;
and finally&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
f_{piq}f_{qjr}f_{rkp} &amp;amp;= -\left(\frac{d}{2}\right)f_{ijk},\\&lt;br /&gt;
d_{piq}f_{qjr}f_{rkp} &amp;amp;= -\left(\frac{d}{2}\right)d_{ijk},\\&lt;br /&gt;
d_{piq}d_{qjr}f_{rkp} &amp;amp;= \left(\frac{d^2 - 4}{2d}\right)f_{ijk},\\&lt;br /&gt;
d_{piq}d_{qjr}d_{rkp} &amp;amp;= \left(\frac{d^2 - 12}{2d}\right)d_{ijk}.&lt;br /&gt;
\end{align} &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.27}}&lt;br /&gt;
The proofs of these are fairly straightforward and are omitted.&lt;br /&gt;
&lt;br /&gt;
====Tensor Products of Representations====&lt;br /&gt;
&lt;br /&gt;
When one takes the tensor product of two representations, another representation results.  In general, this representation is reducible.  &lt;br /&gt;
&lt;br /&gt;
To see this, let &amp;lt;math&amp;gt; g_1,g_2,g_3 \in \mathcal{G}\,\!&amp;lt;/math&amp;gt;.  A tensor product of two group elements is  &amp;lt;math&amp;gt; g_1\otimes g_1 \in \mathcal{G}\otimes \mathcal{G} \,\!&amp;lt;/math&amp;gt;.  Certainly, when &amp;lt;math&amp;gt; g_1\cdot g_2 = g_3, \,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt; (g_1\otimes g_1)\cdot (g_2 \otimes g_2) = g_3 \otimes g_3\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)  Therefore, the tensor product of two representations is another representation.  However, even if each factor is an irreducible representation, one would suspect that the tensor product is a reducible representation---this turns out to be true.  The task is to find the irreducible components.&lt;br /&gt;
&lt;br /&gt;
One very important example of this is used for the addition of angular momenta.  Before revisiting the more general case, this important example is discussed.&lt;br /&gt;
&lt;br /&gt;
====Addition of Angular Momenta====&lt;br /&gt;
&lt;br /&gt;
In the theory of angular momenta, quantum states are labelled by their total angular momentum and the z component of their angular momentum.  Let the angular momentum be given by the vector operator &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{J} = (J_x,J_y,J_z).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.26}}&lt;br /&gt;
These operators satisfy the commutation relations&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[J_i,J_j] = i\epsilon_{ijk}J_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.27}}&lt;br /&gt;
where &amp;lt;math&amp;gt;i,j,k = 1,2,\text{ or }3, \,\!&amp;lt;/math&amp;gt; and the epsilon tensor is defined in [[Appendix C - Vectors and Linear Algebra#eqC.9|Equation C.9]].  A state &amp;lt;math&amp;gt;\left| j, m\right\rangle\,\!&amp;lt;/math&amp;gt; satisfies &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
J^2\left| j, m\right\rangle = j(j+1)\hbar^2\left| j, m\right\rangle, \;\; \text{and} \;\; J_z\left| j, m\right\rangle = m\hbar\left| j, m\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.28}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
J^2 = \vec{J} \cdot \vec{J} = J_x^2 + J_y^2 + J_z^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.29}}&lt;br /&gt;
The common problem is as follows.  Given two states &amp;lt;math&amp;gt;\left| j_1, m_1\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left| j_2, m_2\right\rangle\,\!&amp;lt;/math&amp;gt;, find the total angular momentum of the two states combined.  The objective is to find a new basis,  &amp;lt;math&amp;gt;\left| j, m, j_1, j_2\right\rangle\,\!&amp;lt;/math&amp;gt;, which is expressed in terms of the old basis.  In other words, we need to find the set of numbers &amp;lt;math&amp;gt;C(j_1,j_2,j,m|m_1,m_2,m)\,\!&amp;lt;/math&amp;gt; such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left| j, m, j_1, j_2\right\rangle = \sum_{m_1,m_2} C(j_1,j_2,j,m|m_1,m_2,m) \left| j_1, m_1\right\rangle \left| j_2, m_2\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.30}}&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;C(j_1,j_2,j,m|m_1,m_2,m)\,\!&amp;lt;/math&amp;gt; are called Clebsch-Gordan coefficients, or Wigner-Clebsch-Gordan coefficients.  These not only put the tensor product of the vectors into this special form, but they also block-diagonalize the tensor products of the operators.  The most common example of this is the addition of angular momentum of two spin-1/2 particles.  The result is a triplet (spin-1 representation) and a singlet (spin-0 representation).  &lt;br /&gt;
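For two spin-1/2 particles, this decomposition can be verified directly.  The following sketch (plain Python with hand-rolled 4x4 matrix helpers; the names are ours, not from any library) builds &amp;lt;math&amp;gt;(\vec{J}_1+\vec{J}_2)^2 = \tfrac{3}{2}\mathbb{I} + \tfrac{1}{2}\sum_i \sigma_i\otimes\sigma_i\,\!&amp;lt;/math&amp;gt; (with &amp;lt;math&amp;gt;\hbar=1\,\!&amp;lt;/math&amp;gt;) and checks that the Clebsch-Gordan combinations with coefficients &amp;lt;math&amp;gt;\pm 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt; are eigenstates with eigenvalues &amp;lt;math&amp;gt;j(j+1) = 2\,\!&amp;lt;/math&amp;gt; (triplet) and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; (singlet).&lt;br /&gt;

```python
from math import sqrt

# Pauli matrices
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def kron(a, b):
    """Tensor product of two 2x2 matrices, giving a 4x4 matrix."""
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def matvec(a, v):
    return [sum(row[j] * v[j] for j in range(4)) for row in a]

# (J1+J2)^2 = J1^2 + J2^2 + 2 J1.J2 = (3/2) I + (1/2) sum_i sigma_i (x) sigma_i
# (hbar = 1, spin-1/2 operators J_i = sigma_i / 2)
pairs = [kron(SX, SX), kron(SY, SY), kron(SZ, SZ)]
J2 = [[(1.5 if i == j else 0) + 0.5 * sum(p[i][j] for p in pairs)
       for j in range(4)] for i in range(4)]

# Basis order |00>, |01>, |10>, |11> (0 = spin up, 1 = spin down).
# The Clebsch-Gordan coefficients for the m = 0 states are +-1/sqrt(2):
triplet_m0 = [0, 1 / sqrt(2), 1 / sqrt(2), 0]   # j = 1: eigenvalue j(j+1) = 2
singlet    = [0, 1 / sqrt(2), -1 / sqrt(2), 0]  # j = 0: eigenvalue 0

print(matvec(J2, triplet_m0))  # 2 * triplet_m0, up to floating point
print(matvec(J2, singlet))     # the zero vector, up to floating point
```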
&lt;br /&gt;
====Concluding Remarks====&lt;br /&gt;
&lt;br /&gt;
'''To summarize''', matrix representations of a group are sets of matrices that represent the group in the sense that they follow the same multiplication law as the original group elements.  A representation is called reducible if a single similarity transformation block-diagonalizes all of the matrices simultaneously, so that each set of corresponding blocks forms a representation of the group in its own right.  If no single similarity transformation brings every matrix in the set into a common block-diagonal form with smaller blocks, then the representation is called irreducible.  If there is an isomorphism from the set of matrices to the original group, then the representation is faithful.&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1759</id>
		<title>Chapter 7 - Quantum Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1759"/>
		<updated>2011-11-28T14:28:04Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* A Basis for Errors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
If information were stored redundantly in some set of quantum states, then it would be possible to use the redundancy to detect and correct errors.  Quantum error correcting codes aim to encode quantum information into states in just such a redundant fashion.  It is worth noting that classical error correcting codes and coding theory have been around a long time, and many of the ideas and methods of quantum error correction are imported from classical error correction.  However, quantum error correction requires extra care when measuring to detect and correct errors because superpositions of states must be preserved.  In addition, qubits can experience errors that classical bits cannot.  (For example, there is no phase-flip error on a classical bit.)  This chapter contains an introduction to quantum error correction including simple examples of quantum error correcting codes.   &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Classical Code===&lt;br /&gt;
&lt;br /&gt;
Let us first consider a simple example of a classical error correcting code.  Consider a signal which is comprised only of zeroes and ones.  (For most of these notes, these are the only types of signals: bits and their quantum analogue, qubits.)  An error occurs in a sequence of zeroes and ones if the sender sends a 1 and the receiver receives a 0 for one element of the sequence, or the sender sends a 0 and the receiver receives a 1.  In other words, for this type of encoding, an error is a &amp;quot;classical bit-flip error,&amp;quot; which turns a 0 into a 1 and a 1 into a 0.  A simple classical error correcting code which protects against such bit-flip errors is the following.  Rather than use the state 0, the state is encoded redundantly: the state 000 is used.  This is called an encoded zero state or a logical zero state.  Likewise, 111 is used as an encoded 1, or logical 1.  Now suppose one bit, say the first, is flipped when the encoded state 111 is sent, so that 011 is received.  If one (and only one) of the bits is flipped, the encoded state can be corrected by flipping the outlier so that it agrees with the other two.  &lt;br /&gt;
&lt;br /&gt;
Let us assume that the errors are independent and that each bit is flipped with probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;; a given bit is then transmitted correctly with probability &amp;lt;math&amp;gt;1-p\,\!&amp;lt;/math&amp;gt;.  The probability that exactly two of the three bits are flipped is &amp;lt;math&amp;gt;3(1-p)p^2\,\!&amp;lt;/math&amp;gt;, and the probability that all three are flipped is &amp;lt;math&amp;gt;p^3\,\!&amp;lt;/math&amp;gt;.  Since majority voting fails only when two or more bits are flipped, the code will help us if &amp;lt;math&amp;gt;p &amp;gt; 3(1-p)p^2 +p^3\,\!&amp;lt;/math&amp;gt;, which happens when &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
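This break-even point is easy to check numerically.  The following sketch (plain Python; the function names are ours, not from any library) compares the failure probability of an unprotected bit, &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, with that of the three-bit repetition code.&lt;br /&gt;

```python
def repetition_failure(p):
    """Failure probability of the 3-bit repetition code:
    majority voting fails when 2 or 3 of the bits flip."""
    return 3 * (1 - p) * p**2 + p**3

# Encoding helps whenever the encoded failure probability is below p,
# which happens exactly for p < 1/2.
for p in [0.01, 0.1, 0.25, 0.49, 0.5, 0.6]:
    helps = repetition_failure(p) < p
    print(f"p = {p:.2f}: encoded failure = {repetition_failure(p):.4f}, helps: {helps}")
```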
&lt;br /&gt;
This example will be used below to find a simple bit-flip code for a quantum system.&lt;br /&gt;
&lt;br /&gt;
===Further Reading===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Appendix F - Classical Error Correcting Codes|Appendix F]] contains a brief introduction to classical error correction.  Many of the concepts and definitions in that appendix will be helpful for understanding the material in this chapter.  However, the chapter itself is somewhat self-contained.  When more explanation is required or desired, it will likely be helpful to read or reread [[Appendix F - Classical Error Correcting Codes|Appendix F]] and/or consult the references there.&lt;br /&gt;
&lt;br /&gt;
==Shor's Nine-Qubit Quantum Error Correcting Code==&lt;br /&gt;
&lt;br /&gt;
Shor's nine-qubit quantum error correcting code is important for several reasons.  Historically, it is important because it provides the first example of a quantum error correcting code which, in principle, can correct arbitrary single-qubit errors.  Pedagogically, it is important because it is an example which can be understood in terms of the simple classical error correcting code given above.  It also uses many of the standard assumptions of more general quantum error correcting codes.  Therefore, it is presented as our first quantum error correcting code and, as will be seen later, an example of what is called a stabilizer code, which is a very general category.  &lt;br /&gt;
&lt;br /&gt;
The Shor code is introduced in parts, bit-flip and phase-flip, and then in its entirety.  Since  the phase-flip code follows from the bit-flip code (as discussed below), the bit-flip code is discussed in great detail.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Quantum Code===&lt;br /&gt;
&lt;br /&gt;
The quantum bit-flip code uses three quantum states to encode one as does the classical bit-flip code above.  The state &amp;lt;math&amp;gt;  \left\vert 0\right\rangle \otimes \left\vert 0\right\rangle\otimes \left\vert 0\right\rangle = \left\vert 000\right\rangle = \left\vert 0_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is the logical state representing the zero state of the encoded qubit.  (The subscript L is to indicate that it is a logical state and the b indicates that it is a bit-flip code.  We will see below why this distinction is helpful.)  Similarly, &amp;lt;math&amp;gt;\left\vert 111\right\rangle = \left\vert 1_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is used for the logical one state.  &lt;br /&gt;
&lt;br /&gt;
====Encoding the Logical State====&lt;br /&gt;
&lt;br /&gt;
Note that one cannot just clone a state to produce redundancy due to the [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|No-Cloning Theorem]].  Also, the encoded state needs to preserve superpositions such as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle =  \alpha\left|0\right\rangle + \beta\left|1\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.1}}&lt;br /&gt;
To encode the state redundantly, cloning is not required.  The encoding can be accomplished using the &amp;lt;math&amp;gt; CNOT \,\!&amp;lt;/math&amp;gt; gate twice.  Simply apply &amp;lt;math&amp;gt; CNOT_{13} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; CNOT_{12} \,\!&amp;lt;/math&amp;gt; to the following state of three qubits,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle\left|00\right\rangle =  (\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This will produce  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi_L\right\rangle =  \alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle = \alpha\left|000\right\rangle + \beta\left|111\right\rangle . &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.2}}&lt;br /&gt;
The circuit diagram for this is given in [[#Figure 7.1|Figure 7.1]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccencode.jpg|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.1:  Circuit diagram for encoding a qubit into a 3-qubit bit-flip protected code.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
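The encoding circuit of [[#Figure 7.1|Figure 7.1]] can be simulated with a few lines of code.  In the following sketch (plain Python; the state is stored as a dictionary from basis strings to amplitudes, and the helper name is ours), the two CNOT gates turn &amp;lt;math&amp;gt;(\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle\,\!&amp;lt;/math&amp;gt; into the logical state of [[#eq7.2|Equation (7.2)]].&lt;br /&gt;

```python
def cnot(state, control, target):
    """Apply a CNOT gate to a state stored as {basis string: amplitude}."""
    out = {}
    for bits, amp in state.items():
        if bits[control] == '1':
            flipped = '1' if bits[target] == '0' else '0'
            bits = bits[:target] + flipped + bits[target + 1:]
        out[bits] = out.get(bits, 0) + amp
    return out

# |psi>|00> = (alpha|0> + beta|1>)|00>, for arbitrary alpha and beta
alpha, beta = 0.6, 0.8
state = {'000': alpha, '100': beta}

# Encode with CNOT_12 and CNOT_13 (qubit 1 is the control in both)
state = cnot(state, 0, 1)
state = cnot(state, 0, 2)
print(state)  # {'000': 0.6, '111': 0.8}, i.e. alpha|000> + beta|111>
```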
&lt;br /&gt;
====Error Syndrome Extraction====&lt;br /&gt;
&lt;br /&gt;
Now a method for measurement and recovery is needed.  &lt;br /&gt;
The problem is that in quantum mechanics one cannot simply measure the three qubits to see if they agree; a quantum state can be in a superposition of the (logical) zero state and the (logical) one state as above.  &lt;br /&gt;
A measurement of the first qubit to see whether or not it is in the state zero will immediately produce the state &amp;lt;math&amp;gt; \left| 000 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\alpha|^2 \,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt; \left| 111 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\beta|^2 \,\!&amp;lt;/math&amp;gt;, thus destroying the superposition of the qubit state.  The state would then contain only classical information.  (Essentially it is equivalent to the classical 000 or 111 binary state.)  Since we need to preserve arbitrary superpositions, we cannot use this method for determining whether or not an error occurred.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that a bit-flip error occurs on &amp;lt;math&amp;gt;\left|\psi_L\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
The objective is to determine if the state has experienced a bit-flip error or not without ruining the superposition and, if it has an error, to determine which qubit experienced the error. This can be done by checking to see if the first two qubits are the same or not and then checking to see if the last two qubits are the same or not without ever determining whether the state is the logical zero, logical one, or a superposition of the two.  &lt;br /&gt;
&lt;br /&gt;
Let us examine this process in detail.  First, notice the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue 1 and &amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue -1.  Then any logical state is an eigenstate of the operator &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; with eigenvalue of 1 if the first two qubits are the same and -1 if they differ.  For example, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle = (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|000\right\rangle + \beta\left|111\right\rangle) = (1)(\alpha\left|000\right\rangle + \beta\left|111\right\rangle).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.3}}&lt;br /&gt;
Of course the same is also true for the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt;.  However, suppose that a bit-flip error occurs on the first qubit, giving &amp;lt;math&amp;gt; (\sigma_x\otimes I\otimes I) \left\vert\psi_L\right\rangle = (\sigma_x\otimes I\otimes I)(\alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle) = \alpha\left|100\right\rangle + \beta\left|011\right\rangle\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle &amp;amp;= (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|100\right\rangle + \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp;= (-\alpha\left|100\right\rangle - \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp; = (-1)(\alpha\left|100\right\rangle + \beta\left|011\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.4}}&lt;br /&gt;
Notice that, in principle, we need not determine either &amp;lt;math&amp;gt; \alpha\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; \beta\,\!&amp;lt;/math&amp;gt;, yet the error can still be detected.  Since measuring the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt; shows that the last two qubits agree, we know that the error occurred on the first qubit.  In fact, it is not difficult to convince yourself that measuring these two operators will determine which of the three qubits experienced a bit-flip.  Just like the classical bit-flip code, this will not indicate whether errors occurred on two qubits.  Thus the probability of such errors must be small, just as in the classical case.&lt;br /&gt;
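The syndrome table implied by these two operators can be tabulated with a short computation.  In the following sketch (plain Python; the state is a dictionary from basis strings to amplitudes, and the helper name is ours), the eigenvalue of &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\,\!&amp;lt;/math&amp;gt; on a pair of qubits is just the parity of those two bits, which is the same for every basis string in a given (possibly corrupted) logical state.&lt;br /&gt;

```python
def zz_eigenvalue(state, q1, q2):
    """Eigenvalue of sigma_z (x) sigma_z on qubits q1, q2 of a state stored
    as {basis string: amplitude}.  Every basis string in a (possibly
    corrupted) logical state shares the same pair parity, so one suffices."""
    bits = next(iter(state))
    return 1 if bits[q1] == bits[q2] else -1

alpha, beta = 0.6, 0.8
errors = {
    'no error':        {'000': alpha, '111': beta},
    'flip on qubit 1': {'100': alpha, '011': beta},
    'flip on qubit 2': {'010': alpha, '101': beta},
    'flip on qubit 3': {'001': alpha, '110': beta},
}
# Each error case yields a distinct pair of eigenvalues (the syndrome),
# without ever revealing alpha or beta.
for name, state in errors.items():
    syndrome = (zz_eigenvalue(state, 0, 1), zz_eigenvalue(state, 1, 2))
    print(name, syndrome)
```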
&lt;br /&gt;
Now, we have the idea that we could determine the parity of the pairs of qubits to determine if they are the same or different.  But how would we determine this in practice?  A method for doing this is shown in [[#Figure 7.2|Figure 7.2]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccSyndrome.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.2: A method for extracting a bit-flip error syndrome from a 3-qubit bit-flip protected code.  The M's are measurements on the ancillary qubits, the results of which are recorded as &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Figure 7.2|Figure 7.2]] gives a circuit for determining the error, also known as a syndrome measurement.  In this example, a bit-flip error occurred on qubit 1 in the 3 qubit QECC.  This is represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate.  After 4 CNOT gates, the two ancillary qubits are measured.  A measurement in the &amp;lt;math&amp;gt;|0\rangle, |1\rangle\,\!&amp;lt;/math&amp;gt; basis gives a result of &amp;lt;math&amp;gt;|1\rangle\,\!&amp;lt;/math&amp;gt; for the top ancillary qubit and &amp;lt;math&amp;gt;|0\rangle\,\!&amp;lt;/math&amp;gt; for the bottom one.  This tells us that the first qubit has had a bit-flip error.  We then feed this information back into the system by implementing an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, thus correcting the error.  &lt;br /&gt;
&lt;br /&gt;
Notice that we have not determined the coefficients of the superposition of the logical zero and logical one states.  We have only determined that there was an error on the first qubit, since it does not agree with the other two, assuming that at most one bit-flip error occurred.&lt;br /&gt;
&lt;br /&gt;
====Continuous Sets of Errors====&lt;br /&gt;
&lt;br /&gt;
The error, in this case represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate, is not very realistic.  What would be more realistic is that the bit is not flipped completely; it is in a superposition of the zero state and one state.  In other words, we should properly consider the following state, where an error has occurred on the first qubit:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle &amp;amp;=  \alpha(a\left|0\right\rangle + b\left|1\right\rangle) \left|00\right\rangle + \beta(b\left|0\right\rangle + a\left|1\right\rangle)\left|11\right\rangle \\&lt;br /&gt;
 &amp;amp;= \alpha a\left|000\right\rangle + \alpha b\left|100\right\rangle + \beta b\left|011\right\rangle + \beta a\left|111\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.5}}&lt;br /&gt;
This is a rotation about the x-axis by an arbitrary angle with &lt;br /&gt;
&amp;lt;math&amp;gt;a=\cos \theta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b=i\sin\theta\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  Now suppose that two ancillary qubits are attached to the state&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle = \alpha a\left|00000\right\rangle + \alpha b\left|10000\right\rangle + \beta b\left|01100\right\rangle + \beta a\left|11100\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.6}}&lt;br /&gt;
and the resulting state is put into the circuit that gives the error syndrome given in [[#Figure 7.2|Figure 7.2]].  Let &lt;br /&gt;
&amp;lt;math&amp;gt;V = CNOT_{1{a_1}} CNOT_{2{a_1}} CNOT_{2{a_2}} CNOT_{3{a_2}}\,\!&amp;lt;/math&amp;gt;. Then   &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
 V\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle &amp;amp;= (\alpha a\left|00000\right\rangle + \alpha b\left|10010\right\rangle + \beta b\left|01110\right\rangle + \beta a\left|11100\right\rangle) \\&lt;br /&gt;
           &amp;amp;= (\alpha \left|000\right\rangle + \beta\left|111\right\rangle)a\left\vert 00\right\rangle +(\alpha\left|100\right\rangle + \beta \left|011\right\rangle)b\left\vert 10\right\rangle  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.7}},&lt;br /&gt;
where the two ancillary qubits, denoted &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt; (for the first ancillary qubit which is on top in [[#Figure 7.2|Figure 7.2]]) and &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt; (for the second ancillary qubit which is on bottom in [[#Figure 7.2|Figure 7.2]]), will give the error syndrome.  The measurement of the second ancillary qubit always gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt;.  The measurement of the first gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|a|^2\,\!&amp;lt;/math&amp;gt; and, if this occurs, the system will be in its original state and there is no error.  However, if the measurement of the first ancillary qubit gives &amp;lt;math&amp;gt;\left|1\right\rangle\,\!&amp;lt;/math&amp;gt;, which it will with probability &amp;lt;math&amp;gt;|b|^2,\,\!&amp;lt;/math&amp;gt; then the system is left in the state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\alpha\left|100\right\rangle + \beta\left|011\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.8}}&lt;br /&gt;
This indicates that a bit-flip error has occurred on the first qubit.  Such an error is easily corrected with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, which will flip it.  &lt;br /&gt;
&lt;br /&gt;
Therefore any single-qubit bit-flip error can be corrected, since we will project into the basis of one bit-flip error and the syndrome measurement indicates which one.  In other words, we have made the error discrete using a projective measurement of the ancilla.&lt;br /&gt;
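This discretization of a continuous error can be simulated directly.  The following sketch (plain Python, dictionary statevector; all names are ours) applies the partial flip &amp;lt;math&amp;gt;a\,I + b\,\sigma_x\,\!&amp;lt;/math&amp;gt; to the first qubit of the encoded state, runs the four CNOTs of [[#Figure 7.2|Figure 7.2]], and groups the amplitudes by ancilla value, reproducing the two branches of [[#eq7.7|Equation (7.7)]].&lt;br /&gt;

```python
import math

def cnot(state, control, target):
    """Apply a CNOT gate to a state stored as {basis string: amplitude}."""
    out = {}
    for bits, amp in state.items():
        if bits[control] == '1':
            flipped = '1' if bits[target] == '0' else '0'
            bits = bits[:target] + flipped + bits[target + 1:]
        out[bits] = out.get(bits, 0) + amp
    return out

def x_rotation(state, qubit, theta):
    """Partial bit flip a*I + b*X on one qubit, a = cos(theta), b = i sin(theta)."""
    a, b = math.cos(theta), 1j * math.sin(theta)
    out = {}
    for bits, amp in state.items():
        flipped = bits[:qubit] + ('1' if bits[qubit] == '0' else '0') + bits[qubit + 1:]
        out[bits] = out.get(bits, 0) + a * amp
        out[flipped] = out.get(flipped, 0) + b * amp
    return out

# Qubits 0-2 hold the encoded state, qubits 3-4 are the ancillas.
alpha, beta, theta = 0.6, 0.8, 0.3
state = {'00000': alpha, '11100': beta}
state = x_rotation(state, 0, theta)             # a continuous error on qubit 1
for control, target in [(0, 3), (1, 3), (1, 4), (2, 4)]:
    state = cnot(state, control, target)        # V, the four syndrome CNOTs

# Group amplitudes by the ancilla substring: the '00' branch carries the
# uncorrupted state, and the '10' branch carries a definite bit-flip error.
branches = {}
for bits, amp in state.items():
    branches.setdefault(bits[3:], {})[bits[:3]] = amp
for syndrome, branch in sorted(branches.items()):
    prob = sum(abs(a) ** 2 for a in branch.values())
    print(syndrome, round(prob, 4), branch)
```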
&lt;br /&gt;
===Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Phase-flip errors&amp;quot; are errors which change the sign of the &amp;lt;math&amp;gt; \left| 1\right\rangle\,\!&amp;lt;/math&amp;gt; state.  This is not a classical error as it does not occur on a classical bit.  However, it does occur on qubits that are not in the zero state.  Thus these errors must be treated.   &lt;br /&gt;
&lt;br /&gt;
Much of what works for bit-flip errors also works for phase-flip errors once we are able to encode properly.  Let us consider the following states, which we will use to encode our logical qubit: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \pm\right\rangle = \frac{1}{\sqrt{2}}(\left\vert 0 \right\rangle \pm \left\vert 1\right\rangle). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.9}}&lt;br /&gt;
In this case, when a &amp;quot;phase-flip&amp;quot; occurs, the &amp;lt;math&amp;gt; \left\vert + \right\rangle \,\!&amp;lt;/math&amp;gt; becomes a &amp;lt;math&amp;gt; \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt; or vice versa.  Therefore it is similar to the bit-flip error since there are two orthogonal states that are changed into one another by the error.  In this case the error operator is of the form &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt;.  As before, we can encode redundantly by letting &amp;lt;math&amp;gt; \left\vert 0_{pL} \right\rangle = \left\vert +++ \right\rangle \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left\vert 1_{pL} \right\rangle = \left\vert --- \right\rangle\,\!&amp;lt;/math&amp;gt;.  It is easy to see that this code will enable the detection and correction of one phase-flip error just as the bit-flip code did for one bit-flip.  We simply exchange the &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; in the bit-flip code for a &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt; and the process carries through as before.&lt;br /&gt;
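The correspondence between the two codes is just a change of basis.  In the sketch below (plain Python with 2x2 matrices as nested lists; the helper name is ours), the Hadamard matrix maps &amp;lt;math&amp;gt; \left\vert 0 \right\rangle, \left\vert 1 \right\rangle \,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \left\vert + \right\rangle, \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt;, and conjugating &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; by it yields &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt;, which is why the bit-flip machinery carries over.&lt;br /&gt;

```python
from math import sqrt

def matmul(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
Z = [[1, 0], [0, -1]]

# H Z H = X: a phase flip acts as a bit flip in the |+>, |-> basis.
HZH = matmul(matmul(H, Z), H)
print(HZH)  # equals sigma_x, up to floating-point error

# A phase flip turns |+> into |->:
plus = [1 / sqrt(2), 1 / sqrt(2)]
z_plus = [Z[i][0] * plus[0] + Z[i][1] * plus[1] for i in range(2)]
print(z_plus)  # the |-> state
```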
&lt;br /&gt;
===Bit-flip and Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
Certainly if a phase-flip error does not have a classical analogue then the combination of bit- and phase-flip errors also does not.  It turns out that by having found a code that will protect against bit-flip errors and another against phase-flip errors, we are able to write down a code that will protect against both.  This was first given by Peter Shor [[Bibliography#Shor:QECC|Shor:1995]], but was also described by Carlton Caves in a very readable paper, [[Bibliography#Caves:QECC|Caves:1999]].  &lt;br /&gt;
&lt;br /&gt;
The way to protect against both is to combine the two codes and take the logical qubits to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle)\\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.10}}&lt;br /&gt;
One may also write this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \left\vert +_{bL}\right\rangle \left\vert +_{bL}\right\rangle  \left\vert +_{bL}\right\rangle \\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \left\vert -_{bL}\right\rangle  \left\vert -_{bL}\right\rangle \left\vert -_{bL}\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.11}}&lt;br /&gt;
&lt;br /&gt;
This shows that there is a code which protects against bit-flip errors and phase-flip errors by using a redundant encoding comprised of the states that protect against bit flips and the states that protect against phase flips.&lt;br /&gt;
&lt;br /&gt;
==Quantum Error Correcting Codes: General Properties==&lt;br /&gt;
&lt;br /&gt;
Now that we have seen some examples of quantum error correcting codes, some natural questions come to mind.  Are there general rules for constructing quantum error correcting codes?  In the case of classical codes, there is a disjointness condition and a Hamming bound.  These let us know when it is not possible to construct a quantum error correcting code.  Here, the two analogues for quantum error correcting codes are given, although the disjointness condition is quite different for quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===The Quantum Error Correcting Code Condition===&lt;br /&gt;
&lt;br /&gt;
Let us consider a quantum system undergoing some noisy evolution.  As described in [[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|Section 6.2]] and [[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Section 6.3]], such an open-system evolution can be described by a quantum operation acting on a density operator,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\rho^\prime= \sum_\alpha A_\alpha \rho A_\alpha^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.12}}&lt;br /&gt;
The operator elements &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; can be used to express what is known as the quantum error correcting code condition&lt;br /&gt;
(See [[Bibliography#NielsenChuang:book|Nielsen and Chuang]],  or [[Bibliography#Nielsen/etal|Nielsen, et al:97]] for the original reference), &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
P  A^\dagger_\beta  A_\alpha P = c_{\alpha\beta}P, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.13}}&lt;br /&gt;
where the &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; are the operators from the operator-sum representation, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is a projector onto the code space.   &lt;br /&gt;
An equivalent expression is (see [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]]),&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle = c_{\alpha\beta}\delta_{ij}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.14}}&lt;br /&gt;
This is the quantum analogue of the [[Appendix F - Classical Error Correcting Codes#eqF.5|disjointness condition]] for classical error correcting codes.  To interpret this, consider [[#eq7.14|Equation (7.14)]].  It says that if one error &amp;lt;math&amp;gt;A_\beta\,\!&amp;lt;/math&amp;gt; acts on a logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; and another error (or possibly the same error) &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; acts on a different logical state &amp;lt;math&amp;gt;|j_L\rangle\,\!&amp;lt;/math&amp;gt;, then the resulting states must have no overlap.  If there were overlap, there would be some probability for a measurement to produce an ambiguous result.  It also tells us that the same pair of errors acting on different logical states &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; must produce the same matrix element &amp;lt;math&amp;gt;c_{\alpha\beta}\,\!&amp;lt;/math&amp;gt;.  This is allowed by the superposition principle, but it is not something one finds in classical error correction.  Therefore, the analogy with the classical disjointness condition is very loose.  (See  [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]] for further explanation.)  &lt;br /&gt;
&lt;br /&gt;
One way to understand [[#eq7.13|Equation (7.13)]] is to show [[#eq7.14|Equation (7.14)]] is true if and only if [[#eq7.13|Equation (7.13)]] is true.  However, these results can be seen as part of a broader and more basic property of quantum systems related to the reversibility of a quantum operation as discussed by [[Bibliography#Nielsen/etal|Nielsen, et al:97]].&lt;br /&gt;
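The condition is easy to check for the three-qubit bit-flip code against the error set &amp;lt;math&amp;gt;\{I, X_1, X_2, X_3\}\,\!&amp;lt;/math&amp;gt;.  The following sketch (plain Python; dictionary statevectors, and all names are ours) computes every matrix element &amp;lt;math&amp;gt;\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle\,\!&amp;lt;/math&amp;gt; and confirms that it has the form &amp;lt;math&amp;gt;c_{\alpha\beta}\delta_{ij}\,\!&amp;lt;/math&amp;gt;; for this orthogonal error set, &amp;lt;math&amp;gt;c_{\alpha\beta}=\delta_{\alpha\beta}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
def apply_flip(state, qubit):
    """Apply sigma_x on one qubit of a state stored as {basis string: amplitude};
    qubit=None means the identity (no error)."""
    if qubit is None:
        return dict(state)
    out = {}
    for bits, amp in state.items():
        flipped = bits[:qubit] + ('1' if bits[qubit] == '0' else '0') + bits[qubit + 1:]
        out[flipped] = amp
    return out

def overlap(u, v):
    """<u|v> for real amplitudes."""
    return sum(u[b] * v.get(b, 0) for b in u)

logical = {0: {'000': 1.0}, 1: {'111': 1.0}}
errors = [None, 0, 1, 2]    # identity and a bit flip on each qubit

# Since sigma_x is Hermitian, <i_L| E_b E_a |j_L> is the overlap of
# E_b|i_L> with E_a|j_L>; it should equal delta_{ab} * delta_{ij} here.
for i in (0, 1):
    for j in (0, 1):
        for a in errors:
            for b in errors:
                val = overlap(apply_flip(logical[i], b), apply_flip(logical[j], a))
                assert val == (1.0 if a == b else 0) * (1 if i == j else 0)
print("Knill-Laflamme condition holds for {I, X1, X2, X3}")
```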
&lt;br /&gt;
===A Basis for Errors===&lt;br /&gt;
&lt;br /&gt;
Using the Pauli matrices and the identity, any error can be described as a tensor product of single-qubit operators.  Each factor in the tensor product is one of four operators, &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;, where the identity &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt; indicates that no error has occurred.  (See [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.5]].)  For example, suppose a code involves five qubits, and suppose no error occurs on qubit 1, a bit-flip error occurs on qubits 2 and 3, a phase-flip error occurs on qubit 4, and qubit 5 is affected by both types of error.  This error operator would be &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\otimes X_2\otimes X_3 \otimes Z_4 \otimes Y_5&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.15}}&lt;br /&gt;
or, using a short-hand notation, &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
X_2 X_3 Z_4 Y_5.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.16}}&lt;br /&gt;
This error operator is said to have weight four.  &lt;br /&gt;
&lt;br /&gt;
====Definition 1: weight of an operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This provides us with a basis for all errors that can occur.  This is enough, since the errors can be made discrete using the syndrome measurement process.&lt;br /&gt;
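In this notation, computing the weight amounts to counting non-identity letters.  A minimal sketch (plain Python; the function name is ours):&lt;br /&gt;

```python
def weight(pauli_string):
    """Weight of a Pauli error written with one letter per qubit,
    where 'I' marks a qubit left untouched."""
    return sum(1 for op in pauli_string if op != 'I')

print(weight('IXXZY'))  # 4, the weight of the error in Equation (7.16)
```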
&lt;br /&gt;
====Definition 2: Distance of a Quantum Error Correcting Code====&lt;br /&gt;
&lt;br /&gt;
The distance of a quantum error correcting code is the minimum weight, greater than zero, of an element &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; of the Pauli group such that the quantum error correcting code condition fails (i.e., such that &amp;lt;math&amp;gt;\langle i_L |G|j_L \rangle = c\delta_{ij}\,\!&amp;lt;/math&amp;gt; is not satisfied).&lt;br /&gt;
&lt;br /&gt;
===Quantum Error Correction for &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; Errors===&lt;br /&gt;
&lt;br /&gt;
A quantum error correcting code that uses &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; qubits to encode &amp;lt;math&amp;gt;k\;\!&amp;lt;/math&amp;gt; logical qubits and can correct up to &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors is denoted &amp;lt;math&amp;gt;[[n,k,2t+1]]\;\!&amp;lt;/math&amp;gt;.  This is similar to the classical code notation except that double brackets are used to distinguish the quantum code from the corresponding classical code.  Using &amp;lt;math&amp;gt;d=2t+1\;\!&amp;lt;/math&amp;gt;, this is also written &amp;lt;math&amp;gt;[[n,k,d]]\;\!&amp;lt;/math&amp;gt;.  When a code satisfies the more restrictive condition &amp;lt;math&amp;gt;c_{\alpha\beta}=0\;\!&amp;lt;/math&amp;gt; in [[#eq7.14|Equ. (7.14)]], the code is called non-degenerate.  Note that in [[#eq7.14|Equ. (7.14)]] the set of errors to be corrected is given by the operator elements of the operator-sum representation.  It turns out that one can choose the set of errors to be described by an orthogonal basis.  This is done using the unitary degree of freedom in the operator-sum representation from [[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Section 6.4]].  [[Bibliography#NielsenChuang:book|Nielsen and Chuang]] use this to show that the conditions [[#eq7.13|Equ. (7.13)]] are necessary and sufficient for the existence of a quantum error correcting code.  Thus the necessary and sufficient conditions for being able to correct &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors are given by [[#eq7.13|Equ. (7.13)]], or equivalently, [[#eq7.14|Equ. (7.14)]].&lt;br /&gt;
&lt;br /&gt;
===The Quantum Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
Like the classical Hamming bound ([[Appendix F - Classical Error Correcting Codes#The Hamming Bound|Section F.4]]), the quantum Hamming bound is a simple bound on the size of the code for correcting a given number of errors.  In other words, it provides a bound on the rate of the code, &amp;lt;math&amp;gt;k/n\;\!&amp;lt;/math&amp;gt;.  The main difference is that three types of errors can occur on a qubit, corresponding to the three Pauli matrices &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;.  The number of possible error operators of weight &amp;lt;math&amp;gt;t \;\!&amp;lt;/math&amp;gt; acting on a code of &amp;lt;math&amp;gt;n \;\!&amp;lt;/math&amp;gt; qubits is &amp;lt;math&amp;gt;3^t C(n,t)\;\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;C(n,t)\;\!&amp;lt;/math&amp;gt; is the binomial coefficient.  Since every logical state, and every logical state with a correctable error acting on it, must be mutually orthogonal, the quantum Hamming bound states that the number of such states can be no larger than the dimension of the Hilbert space, which is &amp;lt;math&amp;gt;2^n\;\!&amp;lt;/math&amp;gt;.  That is,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
m\sum_{i=0}^t 3^i C(n,i) \leq 2^n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.17}}&lt;br /&gt;
where &amp;lt;math&amp;gt;m\;\!&amp;lt;/math&amp;gt; is the number of code words.&lt;br /&gt;
&lt;br /&gt;
Just as in the classical case, when &amp;lt;math&amp;gt;m= 2^k\;\!&amp;lt;/math&amp;gt;, we may take the logarithm of [[#eq7.17|Equ. (7.17)]] and let &amp;lt;math&amp;gt;n,t \;\!&amp;lt;/math&amp;gt; become large to get&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
k/n\leq 1-(t/n)\log 3-H(t/n),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H(x) = -x\log x -(1-x)\log(1-x)\;\!&amp;lt;/math&amp;gt; and all logarithms are base 2.  &lt;br /&gt;
&lt;br /&gt;
[[#eq7.17|Equation (7.17)]] tells us that the smallest code that encodes one logical qubit and protects it against one arbitrary error uses 5 physical qubits.  (Here &amp;lt;math&amp;gt;m=2\; (k=1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t=1 \;\!&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt; n=5 \;\!&amp;lt;/math&amp;gt;.)&lt;br /&gt;
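As a numerical check of this claim, the following Python sketch evaluates Equ. (7.17) directly (the helper name <code>hamming_bound_holds</code> is an invention of this example).  With &amp;lt;math&amp;gt;m=2^k\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k=1\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;t=1\,\!&amp;lt;/math&amp;gt;, the smallest &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; satisfying the bound is 5, where the bound is saturated:

```python
from math import comb

def hamming_bound_holds(n, k, t):
    """Quantum Hamming bound, Equ. (7.17), with m = 2**k code words:
    2**k * sum_{i=0}^{t} 3**i C(n, i) <= 2**n."""
    return 2**k * sum(3**i * comb(n, i) for i in range(t + 1)) <= 2**n

# Smallest code with k = 1 logical qubit correcting t = 1 error:
# n = 4 fails the bound; n = 5 saturates it (2 * 16 = 2**5).
print([n for n in range(1, 10) if hamming_bound_holds(n, 1, 1)])
```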
&lt;br /&gt;
==Stabilizer Codes==&lt;br /&gt;
&lt;br /&gt;
The mathematical definition of a stabilizer is given in [[Appendix D - Group Theory#Definition 10: Stabilizer|Section D.6.1]].  Loosely speaking, it is a subgroup of transformations that leave a particular point in space fixed.  The theory of stabilizer codes is based on this notion.  &lt;br /&gt;
&lt;br /&gt;
Stabilizer codes are a family of quantum error correcting codes described by the stabilizer of a state (really a set of states) in the Hilbert space.  They are distinguished for several reasons.  First, they form a large class of quantum error correcting codes.  Second, they are conveniently described by operators rather than states, an approach that carries over to many other quantum error correcting codes.  Other reasons will be discussed later.&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
We will begin by revisiting the three-qubit quantum error correcting code presented in some detail in [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]].  Recall that a bit-flip error on one of the three qubits used in the logical qubit would be detectable if we could measure the parity of pairs of qubits.  These operators could be chosen to be &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt;, although any two distinct pairs would work.  Note that the basis states  &amp;lt;math&amp;gt; |000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |111\rangle\,\!&amp;lt;/math&amp;gt;, as well as any linear combination of these states, are eigenstates of these operators with eigenvalue +1.  The states with a single correctable error are also eigenstates of these operators, but at least one of the eigenvalues will be -1.  The operators producing the single-qubit bit-flip errors are &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;.  This is the idea behind stabilizer quantum error correcting codes.  The stabilizers act as parity checks on the code words.  &lt;br /&gt;
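The parity-check behavior can be seen directly by building the operators as matrices.  A minimal numpy sketch (variable names such as <code>syndrome</code> are choices of this example): the expectation values of &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt; are both +1 on a code word, and a single bit-flip error shows up as one or more -1 values.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Parity checks of the three-qubit bit-flip code.
Z1Z2, Z2Z3 = kron(Z, Z, I), kron(I, Z, Z)

zero_L = np.zeros(8); zero_L[0] = 1          # |000>
one_L  = np.zeros(8); one_L[7] = 1           # |111>
psi = (zero_L + one_L) / np.sqrt(2)          # a code word

def syndrome(state):
    """Eigenvalues of the two parity checks on a (possibly corrupted) code word."""
    return (round(state @ Z1Z2 @ state), round(state @ Z2Z3 @ state))

print(syndrome(psi))                  # no error: both parities +1
print(syndrome(kron(I, X, I) @ psi))  # X on qubit 2: both checks flip to -1
```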
&lt;br /&gt;
The stabilizer is an abelian subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of the [[Appendix D - Group Theory#Definition 12: Pauli Group|Pauli group]], meaning that all of its elements commute with each other.  However, the error operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt; each anti-commute with at least one element of the stabilizer.   So saying that states with errors are eigenstates of the stabilizers with eigenvalue -1 is equivalent to saying that at least one stabilizer operator anti-commutes with the error operator. &lt;br /&gt;
&lt;br /&gt;
The elements of the stabilizer stabilize the code words; that is, code words are eigenstates of the stabilizer operators with eigenvalue +1, while states with errors have eigenvalue -1.  (For this class of quantum error correcting codes, the stabilizers can always be chosen so that this holds.)  Note that if &amp;lt;math&amp;gt;|\psi\rangle\,\!&amp;lt;/math&amp;gt; is a code word, &amp;lt;math&amp;gt;S\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; is an error operator detected by &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt;, then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
SE|\psi\rangle &amp;amp;= S|\psi^\prime\rangle =(-1)|\psi^\prime\rangle \\&lt;br /&gt;
ES|\psi\rangle &amp;amp;= E|\psi\rangle = |\psi^\prime\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.19}}&lt;br /&gt;
or&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(-1)SE|\psi\rangle = ES|\psi\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.20}}&lt;br /&gt;
This says that &amp;lt;math&amp;gt;SE + ES =0\,\!&amp;lt;/math&amp;gt; when acting on the code words.  In other words, &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; anti-commute on any state that &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; stabilizes and that &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; maps to an eigenstate of &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; with eigenvalue -1.&lt;br /&gt;
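For the three-qubit example, the anti-commutation actually holds at the operator level, not just on the code words.  A quick numpy check (variable names are choices of this example): the stabilizer &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and the error &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt; overlap on exactly one qubit where &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; meets &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;, so they anti-commute:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

S = np.kron(np.kron(Z, Z), I)   # stabilizer element Z1 Z2
E = np.kron(np.kron(I, X), I)   # bit-flip error X2

# They share exactly one qubit on which Z meets X (ZX = -XZ),
# so the full operators anti-commute: SE + ES = 0.
print(np.allclose(S @ E + E @ S, 0))
```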
&lt;br /&gt;
This is the basic idea of the stabilizer code construction to be discussed in general in the next section.&lt;br /&gt;
&lt;br /&gt;
===General Stabilizer Formalism===&lt;br /&gt;
&lt;br /&gt;
This brief section provides general definitions and theorems for stabilizer quantum error correcting codes.  The next section provides an explicit example.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Stabilizer Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt; \mathcal{S}\subset \mathcal{P}_n \,\!&amp;lt;/math&amp;gt; be an abelian subgroup of the Pauli group that does not contain &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S}) = \{|\psi\rangle \; |\; S|\psi\rangle=|\psi\rangle \mbox{ for all } S\in \mathcal{S}\}.\,\!&amp;lt;/math&amp;gt;  Then &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S})\,\!&amp;lt;/math&amp;gt; is a stabilizer code and &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is its stabilizer. &lt;br /&gt;
&lt;br /&gt;
This formalizes what was stated earlier, which is that all states of the code space are eigenstates of elements of the stabilizer subgroup with eigenvalue +1.  However, it also says more.  It tells us that any subgroup of the Pauli group that is abelian and does not contain the elements &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; can be used to construct a stabilizer code by simply choosing the set of states that are eigenstates with eigenvalue +1.  Another way of saying this is that the states are fixed, or invariant, under the action of the stabilizer elements.  Let us see why the restriction not allowing &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; must be included.  Suppose that &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the set &amp;lt;math&amp;gt; \mathcal{S}.\,\!&amp;lt;/math&amp;gt;  It would then follow that &amp;lt;math&amp;gt; -\mathbb{I}|\psi\rangle = |\psi\rangle\,\!&amp;lt;/math&amp;gt;.  Only the zero vector satisfies this equation, so the code would contain no states at all.  (The states must be +1 eigenstates of every stabilizer element.)  Now suppose either of &amp;lt;math&amp;gt; \pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the stabilizer subgroup.  Its square would also be in the stabilizer, since a subgroup is closed under multiplication.  But the square of either is &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt;, which cannot be in the set.  Thus none of these three elements can be included.&lt;br /&gt;
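The two conditions of Definition 3 can be checked mechanically for a candidate generator set.  A sketch for the three-qubit code's stabilizer (variable names and the brute-force enumeration are choices of this example; for real generators, excluding &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; also excludes &amp;lt;math&amp;gt; \pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;):

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Generators of the three-qubit bit-flip code's stabilizer.
gens = [kron(Z, Z, I2), kron(I2, Z, Z)]

# Condition 1: the subgroup is abelian -- every pair of generators commutes.
abelian = all(np.allclose(a @ b, b @ a) for a in gens for b in gens)

# Condition 2: no product of generators equals -I.  Enumerate all
# 2**len(gens) subset products and compare against -I.
contains_minus_I = False
for bits in product([0, 1], repeat=len(gens)):
    elem = np.eye(8)
    for bit, g in zip(bits, gens):
        if bit:
            elem = elem @ g
    if np.allclose(elem, -np.eye(8)):
        contains_minus_I = True

print(abelian and not contains_minus_I)  # a valid stabilizer
```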
&lt;br /&gt;
====Encoding/Decoding from Stabilizer Generators====&lt;br /&gt;
&lt;br /&gt;
Once one has obtained the stabilizer subgroup, it is left to find the codewords that are states with eigenvalue +1.  To do this, one only needs to ensure the generators of the stabilizer satisfy this condition, since the generators give all other stabilizer elements through multiplication. Therefore, if the state has eigenvalue +1 for all generators, it will also have eigenvalue +1 for all stabilizer elements.  &lt;br /&gt;
&lt;br /&gt;
For smaller codes, finding the set of states could be as easy as satisfying the constraints given by the small number of generators.  Larger, more complicated codes may, however, require a lot of work to find the states.  Cleve and Gottesman gave an algorithm for finding the code words using an efficient gate array obtained from the stabilizer formalism (http://arxiv.org/abs/quant-ph/9607030).  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that the decoding and error detection and correction steps also require work to find explicit circuits.  However, for many stabilizer codes, decoding is simply encoding in reverse.  (This is not so for every quantum error correcting code.)  &lt;br /&gt;
&lt;br /&gt;
Although these accomplishments are very important, more work is required to ensure circuits are fault-tolerant---that errors do not propagate or grow as the computation progresses.  If they were to develop without these constraints, then the computation would eventually fail.&lt;br /&gt;
&lt;br /&gt;
===A Return to Shor's Code===&lt;br /&gt;
&lt;br /&gt;
Let us consider the set of operators in [[#Table7.1|Table 7.1]] where each operator in the row is included, in order, in the tensor product that forms an element of the Pauli group.  These elements form the eight [[Appendix D - Group Theory#Definition 14: Generators of a Group|generators]] &amp;lt;math&amp;gt;S_i\,\!&amp;lt;/math&amp;gt; of the stabilizer.  The order of the stabilizer subgroup, &amp;lt;math&amp;gt;2^8=256\,\!&amp;lt;/math&amp;gt;, is much larger than the number of generators.  The generators are taken here as in the table, but the set is not unique; this set is chosen to agree with our earlier choice of measurements.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.1: ''The rows give the Pauli matrices which are included in a tensor product, in order, in an element of the Pauli group.  Each column corresponds to the qubit, q1-q9, on which the operator in that column will act.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|q 8&lt;br /&gt;
|q 9&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_7\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_8\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having the generators of the stabilizer of the code, the objective is to construct the codewords, the explicit states that are eigenstates of these operators with eigenvalue +1.  From the top row, it is clear that the first two qubits must be the same, whether zero or one, so that the parity is even.  Similarly, the second generator forces qubits 2 and 3 to be the same, and thus the first three qubits must all agree.  Likewise, the middle three and the last three qubits must each agree.  The last two generators state that flipping the first six bits at once produces the same state, as does flipping the last six bits at once.  Thinking in blocks of three (since the first six generators give blocks of three) tells us that the states are symmetric under the interchange of zeroes and ones in pairs of triplet blocks.  Choosing the symmetric and anti-symmetric combinations of states then leads to the Shor code words given in [[#eq7.10|Equation (7.10)]].&lt;br /&gt;
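One can confirm numerically that these eight generators single out exactly one logical qubit.  The sketch below (helper names are choices of this example) builds each generator of Table 7.1 as a matrix and multiplies the projectors &amp;lt;math&amp;gt;(\mathbb{I}+S_i)/2\,\!&amp;lt;/math&amp;gt;; the trace of the resulting projector is the dimension of the joint +1 eigenspace, which should be &amp;lt;math&amp;gt;2^{9-8}=2\,\!&amp;lt;/math&amp;gt;:

```python
import numpy as np

PAULI = {"I": np.eye(2),
         "X": np.array([[0, 1], [1, 0]], dtype=float),
         "Z": np.array([[1, 0], [0, -1]], dtype=float)}

def op(pauli_string):
    """Tensor product of single-qubit Paulis, one character per qubit."""
    out = np.array([[1.0]])
    for p in pauli_string:
        out = np.kron(out, PAULI[p])
    return out

# The eight stabilizer generators of Table 7.1, qubits q1..q9.
generators = ["ZZIIIIIII", "IZZIIIIII", "IIIZZIIII", "IIIIZZIII",
              "IIIIIIZZI", "IIIIIIIZZ", "XXXXXXIII", "IIIXXXXXX"]

# Projector onto the joint +1 eigenspace: product of (I + S_i)/2.
proj = np.eye(2**9)
for g in generators:
    proj = proj @ (np.eye(2**9) + op(g)) / 2

# The trace of a projector is the dimension of its range: the code
# space holds 2**(9-8) = 2 states, i.e. one logical qubit.
print(int(round(np.trace(proj))))
```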
&lt;br /&gt;
&lt;br /&gt;
==CSS codes==&lt;br /&gt;
&lt;br /&gt;
There is a class of quantum error correcting codes called CSS codes after their inventors, [[Bibliography#CalderbankNShor|Calderbank and Shor]], and [[Bibliography#Steane:prsl|Steane]].   These are also stabilizer codes, but their construction is different and instructive because of its connection to classical error correction.  Since they are stabilizer codes, the stabilizer formalism and tools can be used for encoding and related tasks.  &lt;br /&gt;
&lt;br /&gt;
The CSS codes are constructed from two classical linear codes, say &amp;lt;math&amp;gt; \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \mathcal{C}_2\,\!&amp;lt;/math&amp;gt;.  This is done by taking advantage of the parity check matrices from the classical coding theory.  In this section, this construction is briefly described.  In the next section, the seven qubit CSS code is described.  &lt;br /&gt;
&lt;br /&gt;
Recall from the discussion of the [[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor code]] that a phase-flip code can be constructed from a bit-flip code by using Hadamard gates in order to change the basis from &amp;lt;math&amp;gt; |0\rangle,|1\rangle\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; |+\rangle,|-\rangle\,\!&amp;lt;/math&amp;gt;.  Thus all of the error detection and correction can be accomplished by translating from one basis to the other.  &lt;br /&gt;
&lt;br /&gt;
Keeping this in mind, a quantum error correcting code can be constructed from a classical error correcting code using the following trick.  (See [[Bibliography#Steane:prl|Steane]] or the [[Bibliography#Gottesman:rev09|review by Gottesman]].)  Take the classical parity check matrix &amp;lt;math&amp;gt; P_1\,\!&amp;lt;/math&amp;gt; for a classical error correcting &amp;lt;math&amp;gt;[n_1,k_1,d_1]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity operator &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt;, and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;.  This turns the rows into a set of stabilizer elements that detect and correct &amp;lt;math&amp;gt;t_1=(d_1-1)/2\,\!&amp;lt;/math&amp;gt; bit-flip errors, just as the classical code did.  Then, given another classical error correcting &amp;lt;math&amp;gt;[n_2,k_2,d_2]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_2\,\!&amp;lt;/math&amp;gt;, replace all zero entries of its parity check matrix with the identity operator &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt;, and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;.  This turns the rows into a set of stabilizer elements that detect and correct &amp;lt;math&amp;gt;t_2=(d_2-1)/2\,\!&amp;lt;/math&amp;gt; phase-flip errors.  This would give a stabilizer code with one possible caveat: the operators in the stabilizer all need to commute with each other.  The way to ensure that the &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt; generators and the &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; generators commute is to combine the codes in a particular way.  &lt;br /&gt;
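The substitution step can be sketched in a few lines of Python (the helper name <code>rows_to_stabilizers</code> is an invention of this example).  Applied to the parity check matrix of the [7,4,3] Hamming code from Equ. (7.21), it reproduces the Z-type and X-type generators listed in Table 7.2:

```python
# Turn a classical parity check matrix into stabilizer generators:
# each row becomes a Pauli string with 0 -> I and 1 -> Z (bit-flip
# checks) or 1 -> X (phase-flip checks).

def rows_to_stabilizers(parity_check, pauli):
    return ["".join(pauli if bit else "I" for bit in row)
            for row in parity_check]

# Parity check matrix of the [7,4,3] Hamming code, Equ. (7.21).
P = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

print(rows_to_stabilizers(P, "Z"))  # Z-type checks for bit flips
print(rows_to_stabilizers(P, "X"))  # X-type checks for phase flips
```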
&lt;br /&gt;
The dual of a code (denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;) is also a code, and it is not too difficult to show that the parity check matrix for &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; is the generator matrix for &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  It turns out that if (and only if) &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, then the two codes combine to produce an &amp;lt;math&amp;gt;[[n,k_1+k_2-n,d]]\,\!&amp;lt;/math&amp;gt; stabilizer code, where &amp;lt;math&amp;gt;d\geq \text{min}(d_1,d_2)\,\!&amp;lt;/math&amp;gt;.  That is, the generators for each of the two codes will commute with each other.  &lt;br /&gt;
&lt;br /&gt;
Now the two codes, one protecting against bit-flips and one against phase-flips, combine so that any error can be corrected, including &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; errors, which are composed of both a bit-flip and a phase-flip.  The code therefore protects against both, and its minimum distance is at least the smaller of the distances of the two codes.  It can actually be higher if the code is degenerate.  &lt;br /&gt;
&lt;br /&gt;
===Steane's Seven Qubit Code===&lt;br /&gt;
&lt;br /&gt;
The seven qubit quantum error correcting code, originally described by Steane, is a member of the class of CSS quantum error correcting codes.  In fact it is the smallest such code, and has  &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;.  It is a &amp;lt;math&amp;gt;[[7,1,3]]\,\!&amp;lt;/math&amp;gt; quantum error correcting code, using 7 qubits to encode one logical (or data) qubit so that one arbitrary error can be detected and corrected.  This code has been studied extensively, since it can be made fault-tolerant (explained below).  &lt;br /&gt;
&lt;br /&gt;
This code is actually based on the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code discussed in [[Appendix F - Classical Error Correcting Codes|Appendix F]].  Let us first recall the parity check matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
P =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
\end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.21}}&lt;br /&gt;
and the generator matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
G =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.22}}&lt;br /&gt;
Relating this back to the stabilizer formalism, the generators can be written using the parity check matrix as described above.  They are given in [[#Table7.2|Table 7.2]].  The first three rows each give the elements of the tensor product, in order, for the stabilizer elements of a code that can protect against bit flips.  The next three give stabilizers for the phase-flip code.  From these one may get the code words.  The logical zero and one are given below.  &lt;br /&gt;
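The required commutation of the Z-type and X-type generators can be verified directly.  A Z-type row and an X-type row commute exactly when they overlap on an even number of qubits, i.e., when their GF(2) inner product vanishes, so the condition for all pairs at once is &amp;lt;math&amp;gt;P P^T = 0 \pmod 2\,\!&amp;lt;/math&amp;gt;.  A quick numpy check for the matrix of Equ. (7.21), which is used for both sets of generators:

```python
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code, Equ. (7.21), used
# for both the Z-type and X-type generators of the Steane code.
P = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Each pair of rows overlaps on an even number of positions, so every
# Z-type generator commutes with every X-type generator.
print((P @ P.T) % 2)
```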
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.2: ''The first three rows give the stabilizers for the bit-flip error correcting code.  The next three are for the phase-flip code. (See also [[#Table7.1|Table 7.1]] for further explanation.)''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Steane's 7-qubit code encodes the logical zero using all even weight classical code vectors, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|0_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{even }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.23}}&lt;br /&gt;
The odd weight classical code vectors are used to encode the logical one state,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|1_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{odd }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.24}}&lt;br /&gt;
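&lt;br /&gt;
As a check, the classical code words underlying Equations (7.23) and (7.24) can be enumerated directly from the parity checks read off the first three rows of [[#Table7.2|Table 7.2]].  A minimal sketch in plain Python (the variable names are ours):&lt;br /&gt;

```python
from itertools import product

# Parity checks read off the Z rows (S1, S2, S3) of Table 7.2:
# a 1 marks a qubit acted on by Z.
checks = [(1, 1, 1, 0, 1, 0, 0),
          (1, 1, 0, 1, 0, 1, 0),
          (0, 1, 1, 1, 0, 0, 1)]

# Classical code words: 7-bit strings passing all three parity checks.
codewords = [v for v in product((0, 1), repeat=7)
             if all(sum(h * b for h, b in zip(row, v)) % 2 == 0
                    for row in checks)]

even = [v for v in codewords if sum(v) % 2 == 0]   # enter the logical zero
odd = [v for v in codewords if sum(v) % 2 == 1]    # enter the logical one
print(len(codewords), len(even), len(odd))   # prints: 16 8 8
```

Each of the sixteen code words appears in exactly one of the two sums, eight of even weight and eight of odd weight, which accounts for the normalization factor of &amp;lt;math&amp;gt;1/\sqrt{8}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;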
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Continue to '''Chapter 8 - Decoherence-Free/Noiseless Subsystems''']]&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Skip to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1758</id>
		<title>Chapter 7 - Quantum Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1758"/>
		<updated>2011-11-28T14:13:44Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Error Syndrome Extraction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
If information were stored redundantly in some set of quantum states, then it would be possible to use the redundancy to detect and correct errors.  Quantum error correcting codes aim to encode quantum information in just such a redundant fashion.  It is worth noting that classical error correcting codes and coding theory have been around a long time, and many of the ideas and methods of quantum error correction are imported from classical error correction.  However, quantum error correction requires extra care when measuring to detect and correct errors because superpositions of states must be preserved.  In addition, qubits can experience errors that classical bits cannot.  (For example, there is no phase-flip error on a classical bit.)  This chapter contains an introduction to quantum error correction, including simple examples of quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Classical Code===&lt;br /&gt;
&lt;br /&gt;
Let us first consider a simple example of a classical error correcting code.  Consider a signal which is composed only of zeroes and ones.  (For most of these notes, these are the only types of signals: bits and their quantum analogue, qubits.)  An error in a sequence of zeroes and ones would occur if the sender sends a 1 and the receiver receives a 0 for one element of the sequence, or the sender sends a 0 and the receiver receives a 1.  In other words, for this type of encoding, an error would be a &amp;quot;classical bit-flip error&amp;quot; which would turn a 0 into a 1 and a 1 into a 0.  A simple example of a classical error correcting code which protects against such bit-flip errors is the following code.  Rather than use the state 0, the state is encoded redundantly: the state 000 is used.  This is called an encoded zero state or a logical zero state.  Likewise, 111 is used as an encoded 1, or logical 1.  Now suppose one bit, say the first, is flipped when the encoded state 111 is sent, so that 011 is received.  If one (and only one) of the bits is flipped, the encoded state can be fixed by flipping the outlier so that it agrees with the other two.  &lt;br /&gt;
&lt;br /&gt;
Let us assume that each bit flips independently with probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, so each bit is transmitted correctly with probability &amp;lt;math&amp;gt;1-p\,\!&amp;lt;/math&amp;gt;.  Majority voting fails only when two or three of the bits are flipped.  The probability that exactly two are flipped is &amp;lt;math&amp;gt;3(1-p)p^2\,\!&amp;lt;/math&amp;gt; (the factor of 3 counts which bit is left alone), and the probability that all three are flipped is &amp;lt;math&amp;gt;p^3\,\!&amp;lt;/math&amp;gt;.  So the code will help us if &amp;lt;math&amp;gt;p &amp;gt; 3(1-p)p^2 +p^3\,\!&amp;lt;/math&amp;gt;, which happens when &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
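&lt;br /&gt;
The counting above is easy to verify numerically.  The following sketch (plain Python; the function names are ours) enumerates all eight flip patterns and compares the failure probability of majority voting with the closed form:&lt;br /&gt;

```python
import itertools

# Failure probability of the 3-bit repetition code under independent
# bit-flip noise with per-bit probability p: majority voting fails
# exactly when two or three of the bits are flipped.
def logical_error_prob(p):
    total = 0.0
    for flips in itertools.product([0, 1], repeat=3):
        if sum(flips) in (2, 3):
            prob = 1.0
            for f in flips:
                prob *= p if f else (1.0 - p)
            total += prob
    return total

# Closed form from the text: 3(1-p)p^2 + p^3
def closed_form(p):
    return 3 * (1 - p) * p**2 + p**3

for p in (0.01, 0.1, 0.4):
    # Encoding helps whenever the encoded failure rate stays below p.
    print(p, logical_error_prob(p), closed_form(p))
```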
&lt;br /&gt;
This example will be used below to find a simple bit-flip code for a quantum system.&lt;br /&gt;
&lt;br /&gt;
===Further Reading===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Appendix F - Classical Error Correcting Codes|Appendix F]] contains a brief introduction to classical error correction.  Many of the concepts and definitions in that appendix will be helpful for understanding the material in this chapter.  However, the chapter itself is somewhat self-contained.  When more explanation is required or desired, it will likely be helpful to read or reread [[Appendix F - Classical Error Correcting Codes|Appendix F]] and/or consult the references there.&lt;br /&gt;
&lt;br /&gt;
==Shor's Nine-Qubit Quantum Error Correcting Code==&lt;br /&gt;
&lt;br /&gt;
Shor's nine-qubit quantum error correcting code is important for several reasons.  Historically, it is important because it provides the first example of a quantum error correcting code which, in principle, can correct arbitrary single-qubit errors.  Pedagogically, it is important because it is an example which can be understood in terms of the simple classical error correcting code given above.  It also uses many of the standard assumptions of more general quantum error correcting codes.  Therefore, it is presented as our first quantum error correcting code and, as will be seen later, an example of what is called a stabilizer code, which is a very general category.  &lt;br /&gt;
&lt;br /&gt;
The Shor code is introduced in parts, bit-flip and phase-flip, and then in its entirety.  Since  the phase-flip code follows from the bit-flip code (as discussed below), the bit-flip code is discussed in great detail.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Quantum Code===&lt;br /&gt;
&lt;br /&gt;
The quantum bit-flip code uses three quantum states to encode one as does the classical bit-flip code above.  The state &amp;lt;math&amp;gt;  \left\vert 0\right\rangle \otimes \left\vert 0\right\rangle\otimes \left\vert 0\right\rangle = \left\vert 000\right\rangle = \left\vert 0_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is the logical state representing the zero state of the encoded qubit.  (The subscript L is to indicate that it is a logical state and the b indicates that it is a bit-flip code.  We will see below why this distinction is helpful.)  Similarly, &amp;lt;math&amp;gt;\left\vert 111\right\rangle = \left\vert 1_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is used for the logical one state.  &lt;br /&gt;
&lt;br /&gt;
====Encoding the Logical State====&lt;br /&gt;
&lt;br /&gt;
Note that one cannot just clone a state to produce redundancy due to the [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|No-Cloning Theorem]].  Also, the encoded state needs to preserve superpositions such as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle =  \alpha\left|0\right\rangle + \beta\left|1\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.1}}&lt;br /&gt;
To encode the state redundantly, cloning is not required.  The encoding can be accomplished using the &amp;lt;math&amp;gt; CNOT \,\!&amp;lt;/math&amp;gt; gate twice.  Simply apply &amp;lt;math&amp;gt; CNOT_{13} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; CNOT_{12} \,\!&amp;lt;/math&amp;gt; to the following state of three qubits,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle\left|00\right\rangle =  (\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This will produce  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi_L\right\rangle =  \alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle = \alpha\left|000\right\rangle + \beta\left|111\right\rangle . &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.2}}&lt;br /&gt;
The circuit diagram for this is given in [[#Figure 7.1|Figure 7.1]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccencode.jpg|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.1:  Circuit diagram for encoding a qubit into a 3-qubit bit-flip protected code.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
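&lt;br /&gt;
The encoding can also be checked by direct simulation on state vectors.  The sketch below (using NumPy; the cnot helper is our own construction, not from the text) applies the two CNOT gates to a data qubit attached to two ancillary qubits in the zero state:&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.diag([1.0, 0.0])   # projector onto the zero state
P1 = np.diag([0.0, 1.0])   # projector onto the one state

def cnot(control, target, n=3):
    """CNOT on n qubits with 1-indexed control and target."""
    ops0 = [I] * n                 # control in the zero state: do nothing
    ops1 = [I] * n                 # control in the one state: flip the target
    ops0[control - 1] = P0
    ops1[control - 1] = P1
    ops1[target - 1] = X
    term0 = np.eye(1)
    term1 = np.eye(1)
    for a, b in zip(ops0, ops1):
        term0 = np.kron(term0, a)
        term1 = np.kron(term1, b)
    return term0 + term1

alpha, beta = 0.6, 0.8                      # any normalized amplitudes
psi = np.kron([alpha, beta], [1, 0, 0, 0])  # data qubit times two ancillas
encoded = cnot(1, 2) @ cnot(1, 3) @ psi
print(encoded[0], encoded[7])   # amplitudes of the 000 and 111 basis states
```

The only nonzero amplitudes of the result are alpha on the all-zeroes basis state and beta on the all-ones basis state, exactly the encoded state of Equation (7.2).&lt;br /&gt;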
&lt;br /&gt;
====Error Syndrome Extraction====&lt;br /&gt;
&lt;br /&gt;
Now a method for measurement and recovery is needed.  &lt;br /&gt;
The problem is that in quantum mechanics one cannot just measure the three states to see if they agree; a quantum state can be in a superposition of the (logical) zero state and the (logical) one state as above, and &lt;br /&gt;
a measurement of the first qubit to see if it is in the state zero or not will immediately produce the state &amp;lt;math&amp;gt; \left| 000 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\alpha|^2 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left| 111 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\beta|^2 \,\!&amp;lt;/math&amp;gt;, thus destroying the superposition of the qubit state.  The state would then contain only classical information.  (Essentially it is equivalent to the classical 000 or 111 binary state.)  Since we need to preserve arbitrary superpositions, we cannot use this method for determining whether or not an error occurred.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that a bit-flip error occurs on &amp;lt;math&amp;gt;\left|\psi_L\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
The objective is to determine if the state has experienced a bit-flip error or not without ruining the superposition and, if it has an error, to determine which qubit experienced the error. This can be done by checking to see if the first two qubits are the same or not and then checking to see if the last two qubits are the same or not without ever determining whether the state is the logical zero, logical one, or a superposition of the two.  &lt;br /&gt;
&lt;br /&gt;
Let us examine this process in detail.  First, notice the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue 1 and &amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue -1.  Then any logical state is an eigenstate of the operator &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; with eigenvalue of 1 if the first two qubits are the same and -1 if they differ.  For example, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle = (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|000\right\rangle + \beta\left|111\right\rangle) = (1)(\alpha\left|000\right\rangle + \beta\left|111\right\rangle).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.3}}&lt;br /&gt;
Of course the same is also true for the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt;.  However, suppose that a bit-flip error occurs on the first qubit, giving &amp;lt;math&amp;gt; (\sigma_x\otimes I\otimes I) \left\vert\psi_L\right\rangle = (\sigma_x\otimes I\otimes I)(\alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle) = \alpha\left|100\right\rangle + \beta\left|011\right\rangle\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle &amp;amp;= (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|100\right\rangle + \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp;= (-\alpha\left|100\right\rangle - \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp; = (-1)(\alpha\left|100\right\rangle + \beta\left|011\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.4}}&lt;br /&gt;
Notice that, in principle, we need not determine either &amp;lt;math&amp;gt; \alpha\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; \beta\,\!&amp;lt;/math&amp;gt;, and yet the error can be detected.  Since determining the value of the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt; shows that the last two qubits agree, we know that the error occurred on the first qubit.  In fact, it is not difficult to convince yourself that measuring these two operators will determine which of the three qubits experienced a bit-flip.  Just like the classical bit-flip code, this will not correctly identify errors on two or more qubits.  Thus the probability of multiple errors must be small, just as in the classical case.&lt;br /&gt;
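&lt;br /&gt;
These eigenvalue assignments can be tabulated numerically.  In this sketch (NumPy; the helper names are ours), each possible single-qubit bit flip yields a distinct pair of eigenvalues for the two operators:&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

ZZI = kron3(Z, Z, I)   # compares qubits 1 and 2
IZZ = kron3(I, Z, Z)   # compares qubits 2 and 3

alpha, beta = 0.6, 0.8
logical = np.zeros(8)
logical[0], logical[7] = alpha, beta    # alpha on 000 plus beta on 111

def syndrome(state):
    # The state is an eigenvector of both operators, so the expectation
    # values are exactly the eigenvalues, plus or minus 1.
    s1 = round(float(state @ ZZI @ state))
    s2 = round(float(state @ IZZ @ state))
    return s1, s2

flips = {"no error": kron3(I, I, I), "qubit 1": kron3(X, I, I),
         "qubit 2": kron3(I, X, I), "qubit 3": kron3(I, I, X)}
for name, E in flips.items():
    print(name, syndrome(E @ logical))
```

The pair of eigenvalues, the error syndrome, identifies the flipped qubit (or that no flip occurred) without distinguishing the logical states.&lt;br /&gt;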
&lt;br /&gt;
Now, we have the idea that we could determine the parity of the pairs of qubits to determine if they are the same or different.  But how would we determine this in practice?  A method for doing this is shown in [[#Figure 7.2|Figure 7.2]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccSyndrome.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.2: A method for extracting a bit-flip error syndrome from a 3-qubit bit-flip protected code.  The M's are measurements on the ancillary qubits, the results of which are recorded as &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Figure 7.2|Figure 7.2]] gives a circuit for determining the error, also known as a syndrome measurement.  In this example, a bit-flip error occurred on qubit 1 in the 3-qubit QECC.  This is represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate.  After the four CNOT gates, the two ancillary qubits are measured.  A measurement in the &amp;lt;math&amp;gt;|0\rangle, |1\rangle\,\!&amp;lt;/math&amp;gt; basis gives a result of &amp;lt;math&amp;gt;|1\rangle\,\!&amp;lt;/math&amp;gt; for the top ancillary qubit and &amp;lt;math&amp;gt;|0\rangle\,\!&amp;lt;/math&amp;gt; for the bottom one.  This tells us that the first qubit has had a bit-flip error.  We then feed this information back into the system by implementing an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, thus correcting the error.  &lt;br /&gt;
&lt;br /&gt;
Notice that we have not determined the coefficients of the superposition of the logical zero and logical one states.  We have only determined that there was an error on the first qubit since it does not agree with the other two (assuming that only one bit-flip error could have occurred).&lt;br /&gt;
&lt;br /&gt;
====Continuous Sets of Errors====&lt;br /&gt;
&lt;br /&gt;
The error, in this case represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate, is not very realistic.  What would be more realistic is that the bit is not flipped completely; it is in a superposition of the zero state and one state.  In other words, we should properly consider the following state, where an error has occurred on the first qubit:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle &amp;amp;=  \alpha(a\left|0\right\rangle + b\left|1\right\rangle) \left|00\right\rangle + \beta(b\left|0\right\rangle + a\left|1\right\rangle)\left|11\right\rangle \\&lt;br /&gt;
 &amp;amp;= \alpha a\left|000\right\rangle + \alpha b\left|100\right\rangle + \beta b\left|011\right\rangle + \beta a\left|111\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.5}}&lt;br /&gt;
This is a rotation about the x-axis by an arbitrary angle with &lt;br /&gt;
&amp;lt;math&amp;gt;a=\cos \theta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b=i\sin\theta\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  Now suppose that two ancillary qubits are attached to the state&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle = \alpha a\left|00000\right\rangle + \alpha b\left|10000\right\rangle + \beta b\left|01100\right\rangle + \beta a\left|11100\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.6}}&lt;br /&gt;
and the resulting state is put into the circuit that gives the error syndrome given in [[#Figure 7.2|Figure 7.2]].  Let &lt;br /&gt;
&amp;lt;math&amp;gt;V = CNOT_{1{a_1}} CNOT_{2{a_1}} CNOT_{2{a_2}} CNOT_{3{a_2}}\,\!&amp;lt;/math&amp;gt;. Then   &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
 V\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle &amp;amp;= (\alpha a\left|00000\right\rangle + \alpha b\left|10010\right\rangle + \beta b\left|01110\right\rangle + \beta a\left|11100\right\rangle) \\&lt;br /&gt;
           &amp;amp;= (\alpha \left|000\right\rangle + \beta\left|111\right\rangle)a\left\vert 00\right\rangle +(\alpha\left|100\right\rangle + \beta \left|011\right\rangle)b\left\vert 10\right\rangle  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.7}},&lt;br /&gt;
where the two ancillary qubits, denoted &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt; (for the first ancillary qubit which is on top in [[#Figure 7.2|Figure 7.2]]) and &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt; (for the second ancillary qubit which is on bottom in [[#Figure 7.2|Figure 7.2]]), will give the error syndrome.  The measurement of the second ancillary qubit always gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt;.  The measurement of the first gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|a|^2\,\!&amp;lt;/math&amp;gt; and, if this occurs, the system will be in its original state and there is no error.  However, if the measurement of the first ancillary qubit gives &amp;lt;math&amp;gt;\left|1\right\rangle\,\!&amp;lt;/math&amp;gt;, which it will with probability &amp;lt;math&amp;gt;|b|^2,\,\!&amp;lt;/math&amp;gt; then the system is left in the state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\alpha\left|100\right\rangle + \beta\left|011\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.8}}&lt;br /&gt;
This indicates that a bit-flip error has occurred on the first qubit.  Such an error is easily corrected with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, which will flip it.  &lt;br /&gt;
&lt;br /&gt;
Therefore any single-qubit bit-flip error can be corrected, since we will project into the basis of one bit-flip error and the syndrome measurement indicates which one.  In other words, we have made the error discrete using a projective measurement of the ancilla.&lt;br /&gt;
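&lt;br /&gt;
The discretization of a continuous error can be illustrated numerically.  This sketch (NumPy; the variable names are ours) applies the partial bit flip of Equation (7.5) to the first qubit and computes the probabilities of the two syndrome outcomes:&lt;br /&gt;

```python
import numpy as np

theta = 0.4
a, b = np.cos(theta), 1j * np.sin(theta)   # a and b as in the text
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
E = a * I + b * X            # coherent partial bit flip

alpha, beta = 0.6, 0.8
logical = np.zeros(8, dtype=complex)
logical[0], logical[7] = alpha, beta
state = np.kron(E, np.eye(4)) @ logical    # error on the first qubit

# The syndrome measurement projects onto "no error" or "flip on qubit 1".
flipped = np.kron(np.kron(X, I), I) @ logical
p_no = abs(logical.conj() @ state) ** 2    # outcome: no error
p_yes = abs(flipped.conj() @ state) ** 2   # outcome: bit flip on qubit 1
print(p_no, p_yes)   # cos(theta)^2 and sin(theta)^2
```

With probability cos(theta)^2 the measurement leaves the system in the uncorrupted logical state, and with probability sin(theta)^2 it leaves a full bit flip on the first qubit, which the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate then corrects.&lt;br /&gt;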
&lt;br /&gt;
===Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Phase-flip errors&amp;quot; are errors which change the sign of the &amp;lt;math&amp;gt; \left| 1\right\rangle\,\!&amp;lt;/math&amp;gt; state.  This is not a classical error as it does not occur on a classical bit.  However, it does occur on qubits that are not in the zero state.  Thus these errors must be treated.   &lt;br /&gt;
&lt;br /&gt;
Much of what works for bit-flip errors also works for phase-flip errors once we are able to encode properly.  Let us consider the following states, which we will use to encode our logical qubit: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \pm\right\rangle = \frac{1}{\sqrt{2}}(\left\vert 0 \right\rangle \pm \left\vert 1\right\rangle). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.9}}&lt;br /&gt;
In this case, when a &amp;quot;phase-flip&amp;quot; occurs, the &amp;lt;math&amp;gt; \left\vert + \right\rangle \,\!&amp;lt;/math&amp;gt; becomes a &amp;lt;math&amp;gt; \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt; or vice versa.  Therefore it is similar to the bit-flip error, since there are two orthogonal states that are changed into one another by the error; here the error operator is of the form &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt;.  As before, we can encode redundantly by letting &amp;lt;math&amp;gt; \left\vert 0_{pL} \right\rangle = \left\vert +++ \right\rangle \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left\vert 1_{pL} \right\rangle = \left\vert --- \right\rangle\,\!&amp;lt;/math&amp;gt;.  It is easy to see that this code enables the detection and correction of one phase-flip error just as the bit-flip code did for one bit-flip: we exchange each &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; in the bit-flip code for a &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt; and the process carries through as before.&lt;br /&gt;
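&lt;br /&gt;
The exchange of &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt; can be made concrete with a short NumPy check (a sketch; nothing here is assumed beyond the definitions above): a phase flip exchanges the plus and minus states, and conjugating &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; by the Hadamard gate gives &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

plus = H @ np.array([1.0, 0.0])    # the plus state
minus = H @ np.array([0.0, 1.0])   # the minus state

print(np.allclose(Z @ plus, minus))   # a phase flip exchanges plus and minus
print(np.allclose(H @ Z @ H, X))      # conjugation by H turns Z into X
```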
&lt;br /&gt;
===Bit-flip and Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
Certainly if a phase-flip error does not have a classical analogue then the combination of bit- and phase-flip errors also does not.  It turns out that by having found a code that will protect against bit-flip errors and another against phase-flip errors, we are able to write down a code that will protect against both.  This was first given by Peter Shor [[Bibliography#Shor:QECC|Shor:1995]], but was also described by Carlton Caves in a very readable paper, [[Bibliography#Caves:QECC|Caves:1999]].  &lt;br /&gt;
&lt;br /&gt;
The way to protect against both is to combine the two codes and take the logical qubits to be (up to normalization)&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle)\\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.10}}&lt;br /&gt;
One may also write this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \left\vert +_{bL}\right\rangle \left\vert +_{bL}\right\rangle  \left\vert +_{bL}\right\rangle \\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \left\vert -_{bL}\right\rangle  \left\vert -_{bL}\right\rangle \left\vert -_{bL}\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.11}}&lt;br /&gt;
&lt;br /&gt;
This shows that there is a code which protects against bit-flip errors and phase-flip errors by using a redundant encoding comprised of the states that protect against bit flips and the states that protect against phase flips.&lt;br /&gt;
&lt;br /&gt;
==Quantum Error Correcting Codes: General Properties==&lt;br /&gt;
&lt;br /&gt;
Now that we have seen some examples of quantum error correcting codes, some natural questions come to mind.  Are there general rules for constructing quantum error correcting codes?  In the case of classical codes, there is a disjointness condition and a Hamming bound.  These let us know when it is not possible to construct a quantum error correcting code.  Here, the two analogues for quantum error correcting codes are given, although the disjointness condition is quite different for quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===The Quantum Error Correcting Code Condition===&lt;br /&gt;
&lt;br /&gt;
Let us consider a quantum system undergoing some noisy evolution.  As described in [[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|Section 6.2]] and [[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Section 6.3]], such an open-system evolution can be described by a quantum operation acting on a density operator,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\rho^\prime= \sum_\alpha A_\alpha \rho A_\alpha^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.12}}&lt;br /&gt;
The operator elements &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; can be used to express what is known as the quantum error correcting code condition&lt;br /&gt;
(See [[Bibliography#NielsenChuang:book|Nielsen and Chuang]],  or [[Bibliography#Nielsen/etal|Nielsen, et al:97]] for the original reference), &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
P  A^\dagger_\beta  A_\alpha P = c_{\alpha\beta}P, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.13}}&lt;br /&gt;
where the &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; are the operators from the operator-sum representation, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is a projector onto the code space.   &lt;br /&gt;
An equivalent expression is (see [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]]),&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle = c_{\alpha\beta}\delta_{ij}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.14}}&lt;br /&gt;
This is the quantum analogue of the [[Appendix F - Classical Error Correcting Codes#eqF.5|disjointness condition]] for classical error correcting codes.  To interpret this, consider [[#eq7.14|Equation (7.14)]].  It says that if one error &amp;lt;math&amp;gt;A_\beta\,\!&amp;lt;/math&amp;gt; acts on a logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; and another error (or possibly the same error) &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; acts on a different logical state &amp;lt;math&amp;gt;|j_L\rangle\,\!&amp;lt;/math&amp;gt;, then the resulting states can have no overlap.  If there were overlap, there would be some probability for a syndrome measurement to produce an ambiguous result.  The condition also tells us that the overlap &amp;lt;math&amp;gt;c_{\alpha\beta}\,\!&amp;lt;/math&amp;gt; is the same for every logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt;, so the errors reveal nothing about which logical state, or superposition of logical states, is encoded.  This is allowed by the superposition principle, but not something one finds in classical error correction.  Therefore, the analogy with the classical disjointness condition is rather loose.  (See  [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]] for further explanation.)  &lt;br /&gt;
&lt;br /&gt;
One way to understand [[#eq7.13|Equation (7.13)]] is to show [[#eq7.14|Equation (7.14)]] is true if and only if [[#eq7.13|Equation (7.13)]] is true.  However, these results can be seen as part of a broader and more basic property of quantum systems related to the reversibility of a quantum operation as discussed by [[Bibliography#Nielsen/etal|Nielsen, et al:97]].&lt;br /&gt;
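&lt;br /&gt;
As an illustration, the condition in [[#eq7.14|Equation (7.14)]] can be checked numerically for the 3-qubit bit-flip code with the error set consisting of the identity and the three single-qubit bit flips.  A sketch (NumPy; the helper names are ours):&lt;br /&gt;

```python
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op(positions):
    # X on the listed qubits (1-indexed), identity elsewhere, on 3 qubits.
    out = np.eye(1)
    for q in (1, 2, 3):
        out = np.kron(out, X if q in positions else I)
    return out

zero_L = np.zeros(8); zero_L[0] = 1.0   # the state 000
one_L = np.zeros(8); one_L[7] = 1.0     # the state 111
logical = [zero_L, one_L]
errors = [op(()), op((1,)), op((2,)), op((3,))]   # identity or one bit flip

# Check that every matrix element has the form c_ab * delta_ij.
ok = True
for (a, Ea), (b, Eb) in product(enumerate(errors), repeat=2):
    c = logical[0] @ Eb.T @ Ea @ logical[0]
    for i, j in product((0, 1), repeat=2):
        m = logical[i] @ Eb.T @ Ea @ logical[j]
        expected = c if i == j else 0.0
        ok = ok and bool(np.isclose(m, expected))
print(ok)   # True: the condition holds for this error set
```

Here the matrix of constants works out to the identity, so this error set also satisfies the more restrictive non-degeneracy condition discussed below.&lt;br /&gt;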
&lt;br /&gt;
===A Basis for Errors===&lt;br /&gt;
&lt;br /&gt;
Using the Pauli matrices and the identity for the errors, any error can be described as a tensor product of operators.  Each term in the tensor product will involve one of four operators, &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;, where the identity &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt; indicates that no error has occurred.  (See [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.5]].)  For example, suppose a code involves five qubits.  Suppose that no error occurs on qubit 1, a bit-flip error occurs on qubits 2 and 3, a phase error occurs on qubit 4, and qubit 5 is affected by both types of errors.  This error operator would be &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\otimes X_2\otimes X_3 \otimes Z_4 \otimes Y_5&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.15}}&lt;br /&gt;
or, using a short-hand notation, &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
X_2 X_3 Z_4 Y_5.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.16}}&lt;br /&gt;
This error operator is said to have weight four, since four of the factors in the tensor product differ from the identity.  &lt;br /&gt;
&lt;br /&gt;
====Definition 1: weight of an operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This provides us with a basis for all errors that can occur.  A basis is enough because the syndrome measurement process discretizes an arbitrary continuous error onto these basis elements.&lt;br /&gt;
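As an illustrative sketch (not part of the original text), the tensor-product structure and the weight of a Pauli error can be computed directly.  NumPy is assumed, and `tensor` and `pauli_weight` are hypothetical helper names.&lt;br /&gt;

```python
import numpy as np

# Single-qubit Pauli matrices; the identity I marks "no error" on that qubit.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def tensor(ops):
    """Build the full error operator as a tensor (Kronecker) product."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def pauli_weight(labels):
    """Weight = number of non-identity factors in the tensor product."""
    return sum(1 for s in labels if s != 'I')

# The error of Equation (7.16): I (x) X (x) X (x) Z (x) Y on five qubits
labels = ['I', 'X', 'X', 'Z', 'Y']
E = tensor([{'I': I, 'X': X, 'Y': Y, 'Z': Z}[s] for s in labels])
print(E.shape)               # a 32 x 32 operator on five qubits
print(pauli_weight(labels))  # weight 4
```

Squaring any such Pauli string returns the identity, which is why these operators have eigenvalues &amp;amp;plusmn;1.&lt;br /&gt;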
&lt;br /&gt;
====Definition 2: Distance of a Quantum Error Correcting Code====&lt;br /&gt;
&lt;br /&gt;
The distance of a quantum error correcting code is the minimum weight, greater than zero, of an element &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; of the Pauli group such that the quantum error correcting code condition fails (i.e., such that &amp;lt;math&amp;gt;\langle i_L |G|j_L \rangle = c\delta_{ij}\,\!&amp;lt;/math&amp;gt; is not satisfied).&lt;br /&gt;
&lt;br /&gt;
===Quantum Error Correction for &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; Errors===&lt;br /&gt;
&lt;br /&gt;
A quantum error correcting code that uses &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; qubits to encode &amp;lt;math&amp;gt;k\;\!&amp;lt;/math&amp;gt; logical qubits and can correct up to &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors is denoted &amp;lt;math&amp;gt;[[n,k,2t+1]]\;\!&amp;lt;/math&amp;gt;.  This is similar to the classical code notation except that double brackets are used to distinguish the quantum code from the corresponding classical code.  Using &amp;lt;math&amp;gt;d=2t+1\;\!&amp;lt;/math&amp;gt;, this is also written &amp;lt;math&amp;gt;[[n,k,d]]\;\!&amp;lt;/math&amp;gt;.  When a code satisfies the more restrictive condition &amp;lt;math&amp;gt;c_{\alpha\beta}=0\;\!&amp;lt;/math&amp;gt; in [[#eq7.14|Equ. (7.14)]], the code is called non-degenerate.  Note that [[#eq7.14|Equ. (7.14)]] indicates the set of errors which needs to be corrected given by the operator elements of the operator-sum representation.  It turns out that one can choose the set of errors to be described by an orthogonal basis.  This is done using the unitary degree of freedom in the operator-sum representation from [[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Section 6.4]].  [[Bibliography#NielsenChuang:book|Nielsen and Chuang]] use this to show that the conditions [[#eq7.13|Equ. (7.13)]] are necessary and sufficient for the existence of a quantum error correcting code.  Thus the necessary and sufficient conditions for being able to correct &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors are given by [[#eq7.13|Equ. (7.13)]], or equivalently, [[#eq7.14|Equ. (7.14)]].&lt;br /&gt;
&lt;br /&gt;
===The Quantum Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
Like the classical Hamming bound ([[Appendix F - Classical Error Correcting Codes#The Hamming Bound|Section F.4]]), the quantum Hamming bound is a simple bound on the size of the code for correcting a given number of errors.  In other words, it provides a bound on the rate of the code, &amp;lt;math&amp;gt;k/n\;\!&amp;lt;/math&amp;gt;.  The main difference is that three types of errors can occur on each qubit, corresponding to the three Pauli matrices &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;.  The number of possible error operators of weight &amp;lt;math&amp;gt;t \;\!&amp;lt;/math&amp;gt; acting on a code of &amp;lt;math&amp;gt;n \;\!&amp;lt;/math&amp;gt; qubits is &amp;lt;math&amp;gt;3^t C(n,t)\;\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;C(n,t)\;\!&amp;lt;/math&amp;gt; is the binomial coefficient.  Since every logical state, and every logical state with a correctable error acting on it, must be mutually orthogonal, the number of such states can be at most the dimension of the Hilbert space, which is &amp;lt;math&amp;gt;2^n\;\!&amp;lt;/math&amp;gt;.  That is,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
m\sum_{i=0}^t 3^i C(n,i) \leq 2^n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.17}}&lt;br /&gt;
where &amp;lt;math&amp;gt;m\;\!&amp;lt;/math&amp;gt; is the number of code words.&lt;br /&gt;
&lt;br /&gt;
Just as in the classical case, when &amp;lt;math&amp;gt;m= 2^k\;\!&amp;lt;/math&amp;gt;, we may take the logarithm of the equation along with &amp;lt;math&amp;gt;n,t \;\!&amp;lt;/math&amp;gt; large to get&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
k/n\leq 1-(t/n)\log 3-H(t/n),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H(x) = -x\log x -(1-x)\log(1-x)\;\!&amp;lt;/math&amp;gt; and all logarithms are base 2.  &lt;br /&gt;
&lt;br /&gt;
[[#eq7.17|Equation (7.17)]] tells us that the smallest possible code that protects one encoded qubit against one arbitrary error uses 5 physical qubits.  (Here &amp;lt;math&amp;gt;m=2\;\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;k=1\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;t=1\;\!&amp;lt;/math&amp;gt;, so the bound reads &amp;lt;math&amp;gt;2(1+3n)\leq 2^n\;\!&amp;lt;/math&amp;gt;; the smallest &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; satisfying it is &amp;lt;math&amp;gt;n=5\;\!&amp;lt;/math&amp;gt;, which saturates the bound.)&lt;br /&gt;
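The bound can be checked numerically; the following sketch (not from the text, with `hamming_bound_ok` a hypothetical helper name) searches for the smallest &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; satisfying Equation (7.17) for &amp;lt;math&amp;gt;k=1, t=1\;\!&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Quantum Hamming bound: 2^k * sum_{i<=t} 3^i C(n,i) <= 2^n."""
    return 2**k * sum(3**i * comb(n, i) for i in range(t + 1)) <= 2**n

# Smallest n encoding k=1 logical qubit protected against t=1 arbitrary error
n = next(n for n in range(1, 20) if hamming_bound_ok(n, 1, 1))
print(n)  # 5, and 2*(1+3*5) = 32 = 2^5 saturates the bound
```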
&lt;br /&gt;
==Stabilizer Codes==&lt;br /&gt;
&lt;br /&gt;
The mathematical definition of a stabilizer is given in [[Appendix D - Group Theory#Definition 10: Stabilizer|Section D.6.1]].  Loosely speaking, it is a subgroup of transformations that leave a particular point in space fixed.  The theory of stabilizer codes is based on this notion.  &lt;br /&gt;
&lt;br /&gt;
Stabilizer codes are a family of quantum error correcting codes described by the stabilizer of a state (really a set of states) in the Hilbert space.  They are distinguished for several reasons.  First, they form a large class of quantum error correcting codes.  Second, they are conveniently described by their operators rather than their states, showing that this operator description is possible for many quantum error correcting codes.  Other reasons will be discussed later.&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
We will begin by revisiting the three-qubit quantum error correcting code presented in some detail in [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]].  Recall that a bit-flip error on one of the three qubits making up the logical qubit is detectable if we can measure the parity of pairs of qubits.  These operators can be chosen to be &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt;, although any two distinct pairs would work.  Note that the basis states  &amp;lt;math&amp;gt; |000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |111\rangle\,\!&amp;lt;/math&amp;gt;, as well as any linear combination of these states, are eigenstates of these operators with eigenvalue +1.  The states with a single correctable error are also eigenstates of these operators, but for at least one operator the eigenvalue will be -1.  The operators producing the single qubit bit-flip errors are &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;.  This is the idea behind stabilizer quantum error correcting codes.  The stabilizers act as parity checks on the code words.  &lt;br /&gt;
&lt;br /&gt;
The stabilizer is an abelian subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of the [[Appendix D - Group Theory#Definition 12: Pauli Group|Pauli group]] (abelian means that all of its elements commute with each other).  However, each of the error operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt; anti-commutes with at least one element of the stabilizer.   So describing the parity check by saying that states with errors are eigenstates of stabilizer elements with eigenvalue -1 is equivalent to saying that at least one stabilizer operator anti-commutes with the error operator. &lt;br /&gt;
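This anti-commutation can be verified directly for the three-qubit code; the following is a small sketch (not from the text), assuming NumPy and a hypothetical helper `kron`.&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1., -1.])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabilizers of the three-qubit bit-flip code and a bit-flip error on qubit 1
Z1Z2 = kron(Z, Z, I)
Z2Z3 = kron(I, Z, Z)
X1   = kron(X, I, I)

# X1 anti-commutes with Z1Z2 (they overlap on qubit 1 with anticommuting Paulis)
print(np.allclose(Z1Z2 @ X1 + X1 @ Z1Z2, 0))  # True
# ... but commutes with Z2Z3, so the syndrome pattern (-1, +1) flags qubit 1.
print(np.allclose(Z2Z3 @ X1 - X1 @ Z2Z3, 0))  # True
```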
&lt;br /&gt;
The elements of the stabilizer stabilize code words, that is, code words are eigenstates of the stabilizer operators with eigenvalue +1, and states with errors have eigenvalue -1 and this can always be chosen to be true for this class of quantum error correcting codes. Note that if &amp;lt;math&amp;gt;|\psi\rangle\,\!&amp;lt;/math&amp;gt; is a code word, &amp;lt;math&amp;gt;S\in \mathcal{S}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; is an error operator, then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
SE|\psi\rangle &amp;amp;= S|\psi^\prime\rangle =(-1)|\psi^\prime\rangle \\&lt;br /&gt;
ES|\psi\rangle &amp;amp;= E|\psi\rangle = |\psi^\prime\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.19}}&lt;br /&gt;
or&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(-1)SE|\psi\rangle = ES|\psi\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.20}}&lt;br /&gt;
This says that &amp;lt;math&amp;gt;SE + ES =0\,\!&amp;lt;/math&amp;gt; when acting on the code words.  In other words, the operators anti-commute when acting on a state that &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; stabilizes and that &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; maps to an eigenstate of &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; with eigenvalue -1.&lt;br /&gt;
&lt;br /&gt;
This is the basic idea of the stabilizer code construction to be discussed in general in the next section.&lt;br /&gt;
&lt;br /&gt;
===General Stabilizer Formalism===&lt;br /&gt;
&lt;br /&gt;
This brief section provides general definitions and theorems for stabilizer quantum error correcting codes.  The next section provides an explicit example.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Stabilizer Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt; \mathcal{S}\subset \mathcal{P}_n \,\!&amp;lt;/math&amp;gt; be an abelian subgroup of the Pauli group that does not contain &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S}) = \{|\psi\rangle \; |\; S|\psi\rangle=|\psi\rangle, \mbox{ for all } S\in \mathcal{S}\}.\,\!&amp;lt;/math&amp;gt;  Then &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S})\,\!&amp;lt;/math&amp;gt; is a stabilizer code and &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is its stabilizer. &lt;br /&gt;
&lt;br /&gt;
This formalizes what was stated earlier, which is that all states of the code space are eigenstates of elements of the stabilizer subgroup with eigenvalue +1.  However, it also says more.  It tells us that any subgroup of the Pauli group that is abelian and does not contain the elements &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; can be used to construct a stabilizer code by simply choosing the set of states that are eigenstates with eigenvalue +1.  Another way of saying this is that the states are fixed, or invariant, under the action of the stabilizer elements.  Let us see why the restriction excluding &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; must be included.  Suppose that &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the set &amp;lt;math&amp;gt; \mathcal{S}.\,\!&amp;lt;/math&amp;gt;  It would then follow that &amp;lt;math&amp;gt; -\mathbb{I}|\psi\rangle = |\psi\rangle\,\!&amp;lt;/math&amp;gt;.  Only the zero vector satisfies this equation, so the code would contain no states at all.  (The states must be +1 eigenstates of every stabilizer element.)  Now, suppose one of the other two elements were in the stabilizer subgroup.  Then its square would also be in the stabilizer, since a subgroup must be closed under multiplication.  But the square of either of these is &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt;, which cannot be in the set.  Thus none of these three elements can be included.&lt;br /&gt;
&lt;br /&gt;
====Encoding/Decoding from Stabilizer Generators====&lt;br /&gt;
&lt;br /&gt;
Once one has obtained the stabilizer subgroup, it is left to find the codewords that are states with eigenvalue +1.  To do this, one only needs to ensure the generators of the stabilizer satisfy this condition, since the generators give all other stabilizer elements through multiplication. Therefore, if the state has eigenvalue +1 for all generators, it will also have eigenvalue +1 for all stabilizer elements.  &lt;br /&gt;
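One concrete way to see this is to build the projector onto the joint +1 eigenspace of the generators, &amp;lt;math&amp;gt;\prod_g (\mathbb{I}+S_g)/2\,\!&amp;lt;/math&amp;gt;; its range is the code space.  The following sketch (not from the text, NumPy assumed) does this for the three-qubit bit-flip code.&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1., -1.])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Generators of the three-qubit bit-flip code's stabilizer
gens = [kron(Z, Z, I), kron(I, Z, Z)]

# Projector onto the joint +1 eigenspace: product of (I + S)/2 over generators
P = np.eye(8)
for S in gens:
    P = P @ (np.eye(8) + S) / 2

# The code space is spanned by |000> and |111>: dimension 2^(n - #generators)
dim = int(round(np.trace(P)))
print(dim)               # 2
print(P[0, 0], P[7, 7])  # both 1.0: |000> and |111> survive the projection
```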
&lt;br /&gt;
For smaller codes, finding the set of states could be as easy as satisfying the constraints given by the small number of generators.  Larger, more complicated codes may, however, require a lot of work to find the states.  Cleve and Gottesman gave an algorithm for finding the code words using an efficient gate array obtained from the stabilizer formalism (http://arxiv.org/abs/quant-ph/9607030).  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that the decoding and error detection and correction steps also require work to find explicit circuits.  However, for many stabilizer codes, decoding is simply encoding in reverse.  (This is not so for every quantum error correcting code.)  &lt;br /&gt;
&lt;br /&gt;
Although these accomplishments are very important, more work is required to ensure circuits are fault-tolerant---that errors do not propagate or grow as the computation progresses.  If they were to develop without these constraints, then the computation would eventually fail.&lt;br /&gt;
&lt;br /&gt;
===A Return to Shor's Code===&lt;br /&gt;
&lt;br /&gt;
Let us consider the set of operators in [[#Table7.1|Table 7.1]], where each operator in a row is included, in order, in the tensor product that forms an element of the Pauli group.  These elements are the eight [[Appendix D - Group Theory#Definition 14: Generators of a Group|generators]] &amp;lt;math&amp;gt;S_i\,\!&amp;lt;/math&amp;gt; of the stabilizer.  The order of the stabilizer subgroup, &amp;lt;math&amp;gt;2^8=256\,\!&amp;lt;/math&amp;gt;, is much larger than the number of generators, which is only 8.  Here the generators are taken as in the table, but the set is not unique.  This set is chosen to agree with our earlier choice of measurements.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.1: ''The rows give the Pauli matrices which are included in a tensor product, in order, in an element of the Pauli group.  Each column corresponds to the qubit, q1-q9, on which the operator in that column will act.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|q 8&lt;br /&gt;
|q 9&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_7\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_8\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having the generators of the stabilizer of the code, the objective is to construct the codewords, that is, the explicit states that are eigenstates of these operators with eigenvalue +1.  From the top row, it is clear that the first two qubits must be the same, whether zero or one, so that the parity is even.  Similarly, the second pair must be the same, and thus the first three qubits must be the same.  Likewise, the middle three and the last three must each be the same.  The last two generators state that flipping the first six bits at once must leave the state unchanged, and flipping the last six bits together must also leave the state unchanged.  Thinking in blocks of three (since the first six generators give blocks of three) tells us that the states are symmetric under the interchange of zeroes and ones in pairs of triplet blocks.  Choosing the symmetric and anti-symmetric combinations of such states leads to the Shor code words given in [[#eq7.10|Equation (7.10)]].&lt;br /&gt;
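That these eight generators really form an abelian subgroup can be checked in the binary symplectic representation, where two Pauli strings commute exactly when &amp;lt;math&amp;gt;x_a\cdot z_b + z_a\cdot x_b = 0 \pmod 2\,\!&amp;lt;/math&amp;gt;.  The following sketch (not from the text, NumPy assumed, `pauli_vec` and `commute` hypothetical helper names) encodes Table 7.1 with qubits indexed 0-8.&lt;br /&gt;

```python
import numpy as np
from itertools import combinations

n = 9

def pauli_vec(z_qubits=(), x_qubits=()):
    """Binary symplectic representation (x | z) of a Pauli string."""
    x = np.zeros(n, dtype=int)
    z = np.zeros(n, dtype=int)
    for q in x_qubits:
        x[q] = 1
    for q in z_qubits:
        z[q] = 1
    return x, z

# The eight generators of Table 7.1 (qubits indexed 0..8)
gens = [
    pauli_vec(z_qubits=(0, 1)), pauli_vec(z_qubits=(1, 2)),
    pauli_vec(z_qubits=(3, 4)), pauli_vec(z_qubits=(4, 5)),
    pauli_vec(z_qubits=(6, 7)), pauli_vec(z_qubits=(7, 8)),
    pauli_vec(x_qubits=range(6)), pauli_vec(x_qubits=range(3, 9)),
]

def commute(a, b):
    """Two Pauli strings commute iff the symplectic form vanishes mod 2."""
    (xa, za), (xb, zb) = a, b
    return (xa @ zb + za @ xb) % 2 == 0

print(all(commute(a, b) for a, b in combinations(gens, 2)))  # True
```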
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==CSS codes==&lt;br /&gt;
&lt;br /&gt;
There is a class of quantum error correcting codes called the CSS codes after their inventors  [[Bibliography#CalderbankNShor|Calderbank and Shor]], and [[Bibliography#Steane:prsl|Steane]].   These are also stabilizer codes, but their construction is different and instructive because of its connection to classical error correction.  Since they are stabilizer codes, the stabilizer formalism and tools can be used for encoding, etc.  &lt;br /&gt;
&lt;br /&gt;
The CSS codes are constructed from two classical linear codes, say &amp;lt;math&amp;gt; \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \mathcal{C}_2\,\!&amp;lt;/math&amp;gt;.  This is done by taking advantage of the parity check matrices from the classical coding theory.  In this section, this construction is briefly described.  In the next section, the seven qubit CSS code is described.  &lt;br /&gt;
&lt;br /&gt;
Recall from the discussion of the [[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor code]] that a phase-flip code can be constructed from a bit-flip code by using Hadamard gates in order to change the basis from &amp;lt;math&amp;gt; |0\rangle,|1\rangle\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; |+\rangle,|-\rangle\,\!&amp;lt;/math&amp;gt;.  Thus all of the error detection and correction can be accomplished by translating from one basis to the other.  &lt;br /&gt;
&lt;br /&gt;
Keeping this in mind, a quantum error correcting code can be constructed from a classical error correcting code using the following trick.  (See [[Bibliography#Steane:prl|Steane]] or the [[Bibliography#Gottesman:rev09|review by Gottesman]].)  Take the classical parity check matrix &amp;lt;math&amp;gt; P_1\,\!&amp;lt;/math&amp;gt; for a classical error correcting &amp;lt;math&amp;gt;[n_1,k_1,d_1]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; operator (matrix), and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;.  This will turn the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_1=(d_1-1)/2\,\!&amp;lt;/math&amp;gt; bit-flip errors, just as the classical code did.  Then, given another classical error correcting &amp;lt;math&amp;gt;[n_2,k_2,d_2]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_2\,\!&amp;lt;/math&amp;gt;, replace all zero entries of its parity check matrix with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; operator (matrix), and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;.  This will turn the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_2=(d_2-1)/2\,\!&amp;lt;/math&amp;gt; phase-flip errors.  This would give a stabilizer code with one possible caveat: the operators in the stabilizer all need to commute with each other.  The way to ensure that this happens, that is, that the &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt; generators and  &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; generators commute, is to combine the codes in a particular way.  &lt;br /&gt;
&lt;br /&gt;
The dual of a code (denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;) is also a code, and it is not too difficult to show that the parity check matrix for &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; is the generator matrix for &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  It turns out that if (and only if) &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, then the two codes combine to produce an &amp;lt;math&amp;gt;[[n,k_1+k_2-n,d]]\,\!&amp;lt;/math&amp;gt; stabilizer code, where &amp;lt;math&amp;gt;d\geq \text{min}(d_1,d_2)\,\!&amp;lt;/math&amp;gt;.  That is, the generators for each of the two codes will commute with each other.  &lt;br /&gt;
&lt;br /&gt;
Now the two codes, one protecting against bit-flips and one against phase-flips, combine so that they can correct any error, including &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; errors, which are composed of both a bit-flip and a phase-flip.  The minimum distance is at least the smaller of the distances of the two codes; it can actually be higher if the code is degenerate.  &lt;br /&gt;
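The commutation requirement has a simple binary form: a &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;-type row built from &amp;lt;math&amp;gt;P_1\,\!&amp;lt;/math&amp;gt; and an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;-type row built from &amp;lt;math&amp;gt;P_2\,\!&amp;lt;/math&amp;gt; commute exactly when the binary dot product of the rows is even, i.e. &amp;lt;math&amp;gt;P_1 P_2^T = 0 \pmod 2\,\!&amp;lt;/math&amp;gt;.  The following sketch (not from the text, NumPy assumed) checks this for the choice &amp;lt;math&amp;gt;\mathcal{C}_1 = \mathcal{C}_2\,\!&amp;lt;/math&amp;gt; equal to the [7,4,3] Hamming code, whose parity check matrix appears in Equation (7.21).&lt;br /&gt;

```python
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code (Equation (7.21)).
P = np.array([
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

# A Z-type stabilizer row and an X-type stabilizer row commute iff the rows'
# binary dot product is even: the CSS condition P1 @ P2.T = 0 (mod 2).
# With P1 = P2 = P every entry of P @ P.T is even, so all generators commute.
print(np.all(P @ P.T % 2 == 0))  # True
```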
&lt;br /&gt;
===Steane's Seven Qubit Code===&lt;br /&gt;
&lt;br /&gt;
The seven qubit quantum error correcting code, originally described by Steane, is a member of the class of CSS quantum error correcting codes.  In fact it is the smallest such code, and has  &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;.  It is a &amp;lt;math&amp;gt;[[7,1,3]]\,\!&amp;lt;/math&amp;gt; quantum error correcting code, using 7 qubits to encode one logical (or data) qubit such that one arbitrary error can be detected and corrected.  This code has been studied extensively, since it can be made fault tolerant (explained below).  &lt;br /&gt;
&lt;br /&gt;
This code is actually based on the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code discussed in [[Appendix F - Classical Error Correcting Codes|Appendix F]].  Let us first recall the parity check matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
P =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
\end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.21}}&lt;br /&gt;
and the generator matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
G =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.22}}&lt;br /&gt;
Relating this back to the stabilizer formalism, the generators can be written using the parity check matrix as described above.  They are given in [[#Table7.2|Table 7.2]].  The first three rows each give the elements of the tensor product, in order, for the stabilizer elements of a code that can protect against bit flips.  The next three give stabilizers for the phase-flip code.  From these one may get the code words.  The logical zero and one are given below.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.2: ''The first three rows give the stabilizers for the bit-flip error correcting code.  The next three are for the phase-flip code. (See also [[#Table7.1|Table 7.1]] for further explanation.)''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Steane's 7-qubit code encodes the logical zero using all even weight classical code vectors, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|0_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{even }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.23}}&lt;br /&gt;
The odd weight classical code vectors are used to encode the logical one state,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|1_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{odd }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.24}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Continue to '''Chapter 8 - Decoherence-Free/Noiseless Subsystems''']]&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Skip to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1757</id>
		<title>Chapter 7 - Quantum Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1757"/>
		<updated>2011-11-28T14:09:05Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Bit-flip Errors: A Classical Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
If information was stored redundantly in some set of quantum states, then it would be possible to use the redundancy to detect and correct errors.  Quantum error correcting codes aim to encode quantum information into states in just such a redundant fashion.  It is worth noting that classical error correcting codes and coding theory has been around a long time and many of the ideas and methods of quantum error correction are imported from classical error correction.  However, quantum error correction requires extra care when measuring to detect and correct errors because superpositions of states must be preserved.  In addition, qubits can experience errors that classical bits cannot.  (For example, there is no phase-flip error on a classical bit.)  This chapter contains an introduction to quantum error correction including simple examples of quantum error correcting codes.   &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Classical Code===&lt;br /&gt;
&lt;br /&gt;
Let us first consider a simple example of a classical error correcting code.  Consider a signal which is comprised only of zeroes and ones.  (For most of these notes, these are the only types of signals: bits and their quantum analogue, qubits.)  An error in a sequence of zeroes and ones would occur if the sender sends a 1 and the receiver receives a 0 for one element of the sequence, or the sender sends a 0 and the receiver receives a 1.  In other words, for this type of encoding, an error would be a &amp;quot;classical bit-flip error&amp;quot; which would turn a 0 into a 1 and a 1 into a 0.  A simple example of a classical error correcting code which protects against such bit-flip errors is the following code.  Rather than use the state 0, the state is encoded redundantly: the state 000 is used.  This is called an encoded zero state or a logical zero state.  Likewise, 111 is used as an encoded 1, or logical 1.  Now suppose one bit is flipped when the encoded state 111 is sent, and further suppose that it is the first bit which is flipped.  If one (and only one) of the bits is flipped, the encoded state could be fixed by flipping the outlier so that it agrees with the others.  &lt;br /&gt;
&lt;br /&gt;
Let us assume that each error is independent and has probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;.  The probability that the bit is not flipped is then &amp;lt;math&amp;gt;1-p\,\!&amp;lt;/math&amp;gt;.  Since the probability is &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; that one bit flip occurs, then the probability that two are flipped is &amp;lt;math&amp;gt;3(1-p)p^2\,\!&amp;lt;/math&amp;gt; assuming that which are flipped is unknown.  The probability that three are flipped is &amp;lt;math&amp;gt;p^3\,\!&amp;lt;/math&amp;gt;.  So the code will help us if &amp;lt;math&amp;gt;p &amp;gt; 3(1-p)p^2 +p^3\,\!&amp;lt;/math&amp;gt; which happens when &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
This example will be used below to find a simple bit-flip code for a quantum system.&lt;br /&gt;
&lt;br /&gt;
===Further Reading===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Appendix F - Classical Error Correcting Codes|Appendix F]] contains a brief introduction to classical error correction.  Many of the concepts and definitions in that appendix will be helpful for understanding the material in this chapter.  However, the chapter itself is somewhat self-contained.  When more explanation is required or desired, it will likely be helpful to read or reread [[Appendix F - Classical Error Correcting Codes|Appendix F]] and/or consult the references there.&lt;br /&gt;
&lt;br /&gt;
==Shor's Nine-Qubit Quantum Error Correcting Code==&lt;br /&gt;
&lt;br /&gt;
Shor's nine-qubit quantum error correcting code is important for several reasons.  Historically, it is important because it provides the first example of a quantum error correcting code which, in principle, can correct arbitrary single-qubit errors.  Pedagogically, it is important because it is an example which can be understood in terms of the simple classical error correcting code given above.  It also uses many of the standard assumptions of more general quantum error correcting codes.  Therefore, it is presented as our first quantum error correcting code and, as will be seen later, an example of what is called a stabilizer code, which is a very general category.  &lt;br /&gt;
&lt;br /&gt;
The Shor code is introduced in parts, bit-flip and phase-flip, and then in its entirety.  Since  the phase-flip code follows from the bit-flip code (as discussed below), the bit-flip code is discussed in great detail.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Quantum Code===&lt;br /&gt;
&lt;br /&gt;
The quantum bit-flip code uses three quantum states to encode one as does the classical bit-flip code above.  The state &amp;lt;math&amp;gt;  \left\vert 0\right\rangle \otimes \left\vert 0\right\rangle\otimes \left\vert 0\right\rangle = \left\vert 000\right\rangle = \left\vert 0_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is the logical state representing the zero state of the encoded qubit.  (The subscript L is to indicate that it is a logical state and the b indicates that it is a bit-flip code.  We will see below why this distinction is helpful.)  Similarly, &amp;lt;math&amp;gt;\left\vert 111\right\rangle = \left\vert 1_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is used for the logical one state.  &lt;br /&gt;
&lt;br /&gt;
====Encoding the Logical State====&lt;br /&gt;
&lt;br /&gt;
Note that one cannot just clone a state to produce redundancy due to the [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|No-Cloning Theorem]].  Also, the encoded state needs to preserve superpositions such as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle =  \alpha\left|0\right\rangle + \beta\left|1\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.1}}&lt;br /&gt;
To encode the state redundantly, cloning is not required.  The encoding can be accomplished using the &amp;lt;math&amp;gt; CNOT \,\!&amp;lt;/math&amp;gt; gate twice.  Simply apply &amp;lt;math&amp;gt; CNOT_{13} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; CNOT_{12} \,\!&amp;lt;/math&amp;gt; to the following state of three qubits,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle\left|00\right\rangle =  (\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This will produce  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi_L\right\rangle =  \alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle = \alpha\left|000\right\rangle + \beta\left|111\right\rangle . &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.2}}&lt;br /&gt;
The circuit diagram for this is given in [[#Figure 7.1|Figure 7.1]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccencode.jpg|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.1:  Circuit diagram for encoding a qubit into a 3-qubit bit-flip protected code.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Error Syndrome Extraction====&lt;br /&gt;
&lt;br /&gt;
Now a method for measurement and recovery is needed.  &lt;br /&gt;
The problem is that in quantum mechanics one cannot just measure the three states to see if they agree; a quantum state can be in a superposition of the (logical) zero state and the (logical) one state as above, and &lt;br /&gt;
a measurement of the first qubit to see if it is in the state zero or not will immediately produce the state &amp;lt;math&amp;gt; \left| 000 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\alpha|^2 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left| 111 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\beta|^2 \,\!&amp;lt;/math&amp;gt;, thus destroying the superposition of the qubit state.  The state would then be one that can be described as containing only classical information.  (Essentially it is equivalent to the classical 000 or 111 binary state.)  Since we need to preserve arbitrary superpositions, we cannot use this method for determining whether or not an error occurred.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that a bit-flip error occurs on &amp;lt;math&amp;gt;\left|\psi_L\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
The objective is to determine if the state has experienced a bit-flip error or not without ruining the superposition and, if it has an error, to determine which qubit experienced the error. This can be done by checking to see if the first two qubits are the same or not and then checking to see if the last two qubits are the same or not without ever determining whether the state is the logical zero, logical one, or a superposition of the two.  &lt;br /&gt;
&lt;br /&gt;
Let us examine this process in detail.  First, notice the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue 1 and &amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue -1.  Then any logical state is an eigenstate of the operator &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; with eigenvalue of 1 if the first two qubits are the same and -1 if they differ.  For example, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle = (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|000\right\rangle + \beta\left|111\right\rangle) = (1)(\alpha\left|000\right\rangle + \beta\left|111\right\rangle).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.3}}&lt;br /&gt;
Of course the same is also true for the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt;.  However, suppose that a bit-flip error occurs on the first qubit, giving &amp;lt;math&amp;gt; (\sigma_x\otimes I\otimes I) \left\vert\psi_L\right\rangle = (\sigma_x\otimes I\otimes I)(\alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle) = \alpha\left|100\right\rangle + \beta\left|011\right\rangle\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle &amp;amp;= (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|100\right\rangle + \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp;= (-\alpha\left|100\right\rangle - \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp; = (-1)(\alpha\left|100\right\rangle + \beta\left|011\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.4}}&lt;br /&gt;
Notice that, in principle, we need not determine either &amp;lt;math&amp;gt; \alpha\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; \beta\,\!&amp;lt;/math&amp;gt;.  However, it does seem that the error can be detected.  Since determining the value of the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt; shows that the last two qubits agree, we know that the error occurred on the first qubit.  In fact, it is not difficult to convince yourself that measuring these two operators will determine which qubit experienced a bit-flip for any of the three.  Just like the classical bit-flip code, this will not indicate whether or not an error occurred on two qubits.  Thus the probability must be small, just like the case for the classical code.&lt;br /&gt;
&lt;br /&gt;
Now, we have the idea that we could determine the parity of the pairs of qubits to determine if they are the same or different.  But how would we determine this in practice?  A method for doing this is shown in [[#Figure 7.2|Figure 7.2]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccSyndrome.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.2: A method for extracting a bit-flip error syndrome from a 3-qubit bit-flip protected code.  The M's are measurements on the ancillary qubits, the results of which are recorded as &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Figure 7.2|Figure 7.2]] gives a circuit for determining the error, also known as a syndrome measurement.  In this example, a bit-flip error occurred on qubit 1 in the 3 qubit QECC.  This is represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate.  After 4 CNOT gates, the two ancillary qubits are measured.  A measurement in the &amp;lt;math&amp;gt;|0\rangle, |1\rangle\,\!&amp;lt;/math&amp;gt; basis gives a result of &amp;lt;math&amp;gt;|1\rangle\,\!&amp;lt;/math&amp;gt; for the top ancillary qubit and &amp;lt;math&amp;gt;|0\rangle\,\!&amp;lt;/math&amp;gt; for the bottom one.  This tells us that the first qubit has had a bit-flip error.  We then feed this information back into the system by implementing an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, thus correcting the error.  &lt;br /&gt;
&lt;br /&gt;
Notice that we have not determined the coefficients of the superposition of the logical zero and logical one states.  We have only determined that there was an error on the first qubit since it does not agree with the other two.  (Assuming that only one bit-flip error could have occurred.)&lt;br /&gt;
&lt;br /&gt;
====Continuous Sets of Errors====&lt;br /&gt;
&lt;br /&gt;
The error, in this case represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate, is not very realistic.  What would be more realistic is that the bit is not flipped completely; it is in a superposition of the zero state and one state.  In other words, we should properly consider the following state, where an error has occurred on the first qubit:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle &amp;amp;=  \alpha(a\left|0\right\rangle + b\left|1\right\rangle) \left|00\right\rangle + \beta(b\left|0\right\rangle + a\left|1\right\rangle)\left|11\right\rangle \\&lt;br /&gt;
 &amp;amp;= \alpha a\left|000\right\rangle + \alpha b\left|100\right\rangle + \beta b\left|011\right\rangle + \beta a\left|111\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.5}}&lt;br /&gt;
This is a rotation about the x-axis by an arbitrary angle with &lt;br /&gt;
&amp;lt;math&amp;gt;a=\cos \theta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b=i\sin\theta\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  Now suppose that two ancillary qubits are attached to the state&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle = \alpha a\left|00000\right\rangle + \alpha b\left|10000\right\rangle + \beta b\left|01100\right\rangle + \beta a\left|11100\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.6}}&lt;br /&gt;
and the resulting state is put into the circuit that gives the error syndrome given in [[#Figure 7.2|Figure 7.2]].  Let &lt;br /&gt;
&amp;lt;math&amp;gt;V = CNOT_{1{a_1}} CNOT_{2{a_1}} CNOT_{2{a_2}} CNOT_{3{a_2}}\,\!&amp;lt;/math&amp;gt;. Then   &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
 V\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle &amp;amp;= (\alpha a\left|00000\right\rangle + \alpha b\left|10010\right\rangle + \beta b\left|01110\right\rangle + \beta a\left|11100\right\rangle) \\&lt;br /&gt;
           &amp;amp;= (\alpha \left|000\right\rangle + \beta\left|111\right\rangle)a\left\vert 00\right\rangle +(\alpha\left|100\right\rangle + \beta \left|011\right\rangle)b\left\vert 10\right\rangle  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.7}},&lt;br /&gt;
where the two ancillary qubits, denoted &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt; (for the first ancillary qubit which is on top in [[#Figure 7.2|Figure 7.2]]) and &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt; (for the second ancillary qubit which is on bottom in [[#Figure 7.2|Figure 7.2]]), will give the error syndrome.  The measurement of the second ancillary qubit always gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt;.  The measurement of the first gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|a|^2\,\!&amp;lt;/math&amp;gt; and, if this occurs, the system will be in its original state and there is no error.  However, if the measurement of the first ancillary qubit gives &amp;lt;math&amp;gt;\left|1\right\rangle\,\!&amp;lt;/math&amp;gt;, which it will with probability &amp;lt;math&amp;gt;|b|^2,\,\!&amp;lt;/math&amp;gt; then the system is left in the state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\alpha\left|100\right\rangle + \beta\left|011\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.8}}&lt;br /&gt;
This indicates that a bit-flip error has occurred on the first qubit.  Such an error is easily corrected with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, which will flip it.  &lt;br /&gt;
&lt;br /&gt;
Therefore any single-qubit bit-flip error can be corrected, since we will project into the basis of one bit-flip error and the syndrome measurement indicates which one.  In other words, we have made the error discrete using a projective measurement of the ancilla.&lt;br /&gt;
&lt;br /&gt;
===Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Phase-flip errors&amp;quot; are errors which change the sign of the &amp;lt;math&amp;gt; \left| 1\right\rangle\,\!&amp;lt;/math&amp;gt; state.  This is not a classical error as it does not occur on a classical bit.  However, it does occur on qubits that are not in the zero state.  Thus these errors must be treated.   &lt;br /&gt;
&lt;br /&gt;
Much of what works for the bit-flip errors also works for phase-flip errors once we are able to encode properly.  Let us consider the following states that we will used to encode our logical qubit: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \pm\right\rangle = \frac{1}{\sqrt{2}}(\left\vert 0 \right\rangle \pm \left\vert 1\right\rangle). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.9}}&lt;br /&gt;
In this case, when a &amp;quot;phase-flip&amp;quot; occurs, the &amp;lt;math&amp;gt; \left\vert + \right\rangle \,\!&amp;lt;/math&amp;gt; becomes a &amp;lt;math&amp;gt; \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt; or vice versa.  Therefore it is similar to the bit-flip error since there are two orthogonal states that are changed into one another by the error.  In this case the error operator is of the form &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt;.  As before, if a phase error occurs on the first qubit, then we can encode redundantly by letting &amp;lt;math&amp;gt; \left\vert 0_{pL} \right\rangle = \left\vert +++ \right\rangle \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left\vert 1_{pL} \right\rangle = \left\vert --- \right\rangle\,\!&amp;lt;/math&amp;gt;.  It is easy to see that this code will enable the detection and correction of one phase error just as the bit-flip code did for one bit-flip.  In this case we exchange the &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; in the bit-flip code with a &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt; for the phase-flip code and the process carries through as before.&lt;br /&gt;
&lt;br /&gt;
===Bit-flip and Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
Certainly if a phase-flip error does not have a classical analogue then the combination of bit- and phase-flip errors also does not.  It turns out that by having found a code that will protect against bit-flip errors and another against phase-flip errors, we are able to write down a code that will protect against both.  This was first given by Peter Shor [[Bibliography#Shor:QECC|Shor:1995]], but was also described by Carlton Caves in a very readable paper, [[Bibliography#Caves:QECC|Caves:1999]].  &lt;br /&gt;
&lt;br /&gt;
The way to protect against both is to combine the two codes and take the logical qubits to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle)\\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.10}}&lt;br /&gt;
One may also write this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \left\vert +_{bL}\right\rangle \left\vert +_{bL}\right\rangle  \left\vert +_{bL}\right\rangle \\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \left\vert -_{bL}\right\rangle  \left\vert -_{bL}\right\rangle \left\vert -_{bL}\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.11}}&lt;br /&gt;
&lt;br /&gt;
This shows that there is a code which protects against bit-flip errors and phase-flip errors by using a redundant encoding comprised of the states that protect against bit flips and the states that protect against phase flips.&lt;br /&gt;
&lt;br /&gt;
==Quantum Error Correcting Codes: General Properties==&lt;br /&gt;
&lt;br /&gt;
Now that we have seen some examples of quantum error correcting codes, some natural questions come to mind.  Are there general rules for constructing quantum error correcting codes?  In the case of classical codes, there is a disjointness condition and a Hamming bound.  These let us know when it is not possible to construct a quantum error correcting code.  Here, the two analogues for quantum error correcting codes are given, although the disjointness condition is quite different for quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===The Quantum Error Correcting Code Condition===&lt;br /&gt;
&lt;br /&gt;
Let us consider a quantum system undergoing some noisy evolution.  As described in [[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|Section 6.2]] and [[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Section 6.3]], such an open-system evolution can be described by a quantum operation acting on a density operator,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\rho^\prime= \sum_\alpha A_\alpha \rho A_\alpha^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.12}}&lt;br /&gt;
The operator elements &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; can be used to express what is known as the quantum error correcting code condition&lt;br /&gt;
(See [[Bibliography#NielsenChuang:book|Nielsen and Chuang]],  or [[Bibliography#Nielsen/etal|Nielsen, et al:97]] for the original reference), &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
P  A^\dagger_\beta  A_\alpha P = d_{\alpha\beta}P, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.13}}&lt;br /&gt;
where the &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; are the operators from the operator-sum representation, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is a projector onto the code space.   &lt;br /&gt;
An equivalent expression is (see [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]]),&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle = c_{\alpha\beta}\delta_{ij}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.14}}&lt;br /&gt;
This is the quantum analogue of the [[Appendix F - Classical Error Correcting Codes#eqF.5|disjointness condition]] for classical error correcting codes.  To interpret this, consider [[#eq7.14|Equation (7.14)]].  This says that if one error &amp;lt;math&amp;gt;A_\beta\,\!&amp;lt;/math&amp;gt; acts acts on a logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; and another error (or possibly the same error) &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; acts on a different logical state &amp;lt;math&amp;gt;|j_L\rangle\,\!&amp;lt;/math&amp;gt;, then the two cannot be equal.  In fact, the statement is a bit different.  It tells us that there can be no overlap between two states.  If there were overlap, there would be some probability for a measurement to produce an ambiguous result.  It also tells us that for two different &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; the same error acting will produce the same result.  This is allowed by the superposition principle, but not something one finds in classical error correction.  Therefore, the analogy with the classical disjointness condition is very loose.  (See  [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]] for further explanation.)  &lt;br /&gt;
&lt;br /&gt;
One way to understand [[#eq7.13|Equation (7.13)]] is to show [[#eq7.14|Equation (7.14)]] is true if and only if [[#eq7.13|Equation (7.13)]] is true.  However, these results can be seen as part of a broader and more basic property of quantum systems related to the reversibility of a quantum operation as discussed by [[Bibliography#Nielsen/etal|Nielsen, et al:97]].&lt;br /&gt;
&lt;br /&gt;
===A Basis for Errors===&lt;br /&gt;
&lt;br /&gt;
Using the Pauli matrices and the identity for the errors, any error can be described as a tensor product of operators.  Each term in the tensor product will involve one of four operators, &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;, where the identity &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt; indicates that no error has occurred.  (See [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.5]].)  For example, suppose a code involves five qubits, and suppose no error occurs on qubit 1, a bit-flip error occurs on qubits 2 and 3, a phase error occurs on qubit 4, and qubit 5 is affected by both types of errors.  This error operator would be &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\otimes X_2\otimes X_3 \otimes Z_4 \otimes Y_5&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.15}}&lt;br /&gt;
or, using a short-hand notation, &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
X_2 X_3 Z_4 Y_5.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.16}}&lt;br /&gt;
This error operator is said to have weight four, since four of the factors in the tensor product differ from the identity.  &lt;br /&gt;
&lt;br /&gt;
====Definition 1: weight of an operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This provides us with a basis for all errors that can occur.  The basis suffices because the continuum of possible errors is discretized onto it by the syndrome measurement process.&lt;br /&gt;
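The weight count of Definition 1 can be illustrated with a short sketch (the single-string representation of a Pauli error, one character per qubit, is a convention assumed here for convenience):&lt;br /&gt;

```python
# Weight of a Pauli error string: the number of non-identity tensor factors.
# An n-qubit error is represented as a string over {"I", "X", "Y", "Z"},
# one character per qubit (an illustrative convention, not from the text).

def pauli_weight(error: str) -> int:
    """Count the tensor factors that are not the identity."""
    assert set(error) <= {"I", "X", "Y", "Z"}
    return sum(1 for p in error if p != "I")

# The five-qubit example from the text: I (x) X (x) X (x) Z (x) Y.
print(pauli_weight("IXXZY"))  # -> 4
```

By Definition 1, the identity factor on qubit 1 does not contribute to the count.&lt;br /&gt;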
&lt;br /&gt;
====Definition 2: Distance of a Quantum Error Correcting Code====&lt;br /&gt;
&lt;br /&gt;
The distance of a quantum error correcting code is the minimum weight, greater than zero, of an element &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; of the Pauli group such that the quantum error correcting code condition fails (i.e., such that &amp;lt;math&amp;gt;\langle i_L |G|j_L \rangle = c\delta_{ij}\,\!&amp;lt;/math&amp;gt; is not satisfied).&lt;br /&gt;
&lt;br /&gt;
===Quantum Error Correction for &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; Errors===&lt;br /&gt;
&lt;br /&gt;
A quantum error correcting code that uses &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; qubits to encode &amp;lt;math&amp;gt;k\;\!&amp;lt;/math&amp;gt; logical qubits and can correct up to &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors is denoted &amp;lt;math&amp;gt;[[n,k,2t+1]]\;\!&amp;lt;/math&amp;gt;.  This is similar to the classical code notation except that double brackets are used to distinguish the quantum code from the corresponding classical code.  Using &amp;lt;math&amp;gt;d=2t+1\;\!&amp;lt;/math&amp;gt;, this is also written &amp;lt;math&amp;gt;[[n,k,d]]\;\!&amp;lt;/math&amp;gt;.  When a code satisfies the more restrictive condition &amp;lt;math&amp;gt;c_{\alpha\beta}=0\;\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\alpha\neq\beta\;\!&amp;lt;/math&amp;gt; in [[#eq7.14|Equ. (7.14)]], the code is called non-degenerate.  Note that [[#eq7.14|Equ. (7.14)]] indicates that the set of errors to be corrected is given by the operator elements of the operator-sum representation.  It turns out that one can choose this set of errors to be described by an orthogonal basis.  This is done using the unitary degree of freedom in the operator-sum representation from [[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Section 6.4]].  [[Bibliography#NielsenChuang:book|Nielsen and Chuang]] use this to show that the conditions [[#eq7.13|Equ. (7.13)]] are necessary and sufficient for the existence of a quantum error correcting code.  Thus the necessary and sufficient conditions for being able to correct &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors are given by [[#eq7.13|Equ. (7.13)]], or equivalently, [[#eq7.14|Equ. (7.14)]].&lt;br /&gt;
&lt;br /&gt;
===The Quantum Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
Like the classical Hamming bound ([[Appendix F - Classical Error Correcting Codes#The Hamming Bound|Section F.4]]), the quantum Hamming bound is a simple bound on the size of the code for correcting a given number of errors.  In other words, it provides a bound on the rate of the code, &amp;lt;math&amp;gt;k/n\;\!&amp;lt;/math&amp;gt;.  The main difference from the classical case is that three types of errors can occur on each qubit, corresponding to the three Pauli matrices &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;.  The number of possible error operators of weight &amp;lt;math&amp;gt;t \;\!&amp;lt;/math&amp;gt; acting on a code of &amp;lt;math&amp;gt;n \;\!&amp;lt;/math&amp;gt; qubits is therefore &amp;lt;math&amp;gt;3^t C(n,t)\;\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;C(n,t)\;\!&amp;lt;/math&amp;gt; is the binomial coefficient.  Since every logical state, and every state obtained from a logical state by a correctable error, must be mutually orthogonal in a non-degenerate code, the number of such states can be no larger than the dimension of the Hilbert space, which is &amp;lt;math&amp;gt;2^n\;\!&amp;lt;/math&amp;gt;.  That is,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
m\sum_{i=0}^t 3^i C(n,i) \leq 2^n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.17}}&lt;br /&gt;
where &amp;lt;math&amp;gt;m\;\!&amp;lt;/math&amp;gt; is the number of code words.&lt;br /&gt;
&lt;br /&gt;
Just as in the classical case, when &amp;lt;math&amp;gt;m= 2^k\;\!&amp;lt;/math&amp;gt;, we may take the logarithm of this inequality and consider large &amp;lt;math&amp;gt;n,t \;\!&amp;lt;/math&amp;gt; to get&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
k/n\leq 1-(t/n)\log 3-H(t/n),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H(x) = -x\log x -(1-x)\log(1-x)\;\!&amp;lt;/math&amp;gt; and all logarithms are base 2.  &lt;br /&gt;
&lt;br /&gt;
[[#eq7.17|Equation (7.17)]] tells us that the smallest possible code protecting one logical qubit against one arbitrary error uses five physical qubits.  (Here &amp;lt;math&amp;gt;m=2\;\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;k=1\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;t=1 \;\!&amp;lt;/math&amp;gt;, so the smallest &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; satisfying the bound is &amp;lt;math&amp;gt; n=5 \;\!&amp;lt;/math&amp;gt;.)&lt;br /&gt;
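This can be checked directly; the following sketch scans code lengths for the smallest one satisfying the bound of Equation (7.17) with two code words and a single correctable error:&lt;br /&gt;

```python
# Quantum Hamming bound (Equation 7.17): m * sum_{i<=t} 3^i C(n,i) <= 2^n.
# Sketch confirming that n = 5 is the smallest length protecting k = 1
# logical qubit (m = 2 code words) against t = 1 arbitrary error.
from math import comb

def hamming_bound_holds(n: int, m: int, t: int) -> bool:
    """True when m * sum_{i=0}^{t} 3^i C(n, i) fits inside dimension 2^n."""
    return m * sum(3**i * comb(n, i) for i in range(t + 1)) <= 2**n

smallest = next(n for n in range(1, 20) if hamming_bound_holds(n, m=2, t=1))
print(smallest)  # -> 5
```

For &amp;lt;math&amp;gt;n=4\;\!&amp;lt;/math&amp;gt; the left-hand side is 26, already exceeding &amp;lt;math&amp;gt;2^4=16\;\!&amp;lt;/math&amp;gt;, while for &amp;lt;math&amp;gt;n=5\;\!&amp;lt;/math&amp;gt; it equals &amp;lt;math&amp;gt;2^5=32\;\!&amp;lt;/math&amp;gt; exactly.&lt;br /&gt;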
&lt;br /&gt;
==Stabilizer Codes==&lt;br /&gt;
&lt;br /&gt;
The mathematical definition of a stabilizer is given in [[Appendix D - Group Theory#Definition 10: Stabilizer|Section D.6.1]].  Loosely speaking, it is a subgroup of transformations that leave a particular point in space fixed.  The theory of stabilizer codes is based on this notion.  &lt;br /&gt;
&lt;br /&gt;
Stabilizer codes are a family of quantum error correcting codes which are describable by using the stabilizer of a state (really a set of states) in the Hilbert space.  They are distinguished for several reasons.  First, they form a large class of quantum error correcting codes.  Second, they are conveniently described by a small set of operators rather than by their states, a description that turns out to be available for many quantum error correcting codes.  Other reasons will be discussed later.&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
We will begin by revisiting the three-qubit quantum error correcting code presented in some detail in [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]].  Recall that a bit-flip error on one of the three qubits forming the logical qubit is detectable if we can measure the parity of pairs of qubits.  These operators could be chosen to be &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt;, although any two distinct pairs would work.  Note that the basis states  &amp;lt;math&amp;gt; |000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |111\rangle\,\!&amp;lt;/math&amp;gt;, as well as any linear combination of these states, are eigenstates of these operators with eigenvalue +1.  The states with a single correctable error are also eigenstates of these operators, but at least one of the operators will have eigenvalue -1.  The single-qubit bit-flip errors are &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;.  This is the idea behind stabilizer quantum error correcting codes.  The stabilizers act as parity checks on the code words.  &lt;br /&gt;
&lt;br /&gt;
The stabilizer &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is an abelian subgroup of the [[Appendix D - Group Theory#Definition 12: Pauli Group|Pauli group]], meaning all of its elements commute with each other.  However, the error operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt; anti-commute with at least one element of the stabilizer.   Thus saying that a state with an error is an eigenstate of some stabilizer element with eigenvalue -1 is equivalent to saying that that stabilizer element anti-commutes with the error operator. &lt;br /&gt;
&lt;br /&gt;
The elements of the stabilizer stabilize the code words; that is, the code words are eigenstates of the stabilizer operators with eigenvalue +1, while a state with an error is an eigenstate with eigenvalue -1 for at least one stabilizer element.  This can always be arranged for this class of quantum error correcting codes. Note that if &amp;lt;math&amp;gt;|\psi\rangle\,\!&amp;lt;/math&amp;gt; is a code word, &amp;lt;math&amp;gt;S\in \mathcal{S}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; is an error operator, then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
SE|\psi\rangle &amp;amp;= S|\psi^\prime\rangle =(-1)|\psi^\prime\rangle \\&lt;br /&gt;
ES|\psi\rangle &amp;amp;= E|\psi\rangle = |\psi^\prime\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.19}}&lt;br /&gt;
or&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(-1)SE|\psi\rangle = ES|\psi\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.20}}&lt;br /&gt;
This says that &amp;lt;math&amp;gt;SE + ES =0\,\!&amp;lt;/math&amp;gt; when acting on the code words.  In other words, &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; anti-commute on any state that &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; stabilizes and that &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; maps to a -1 eigenstate of &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
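The anti-commutation can be verified numerically for the three-qubit bit-flip code, with stabilizer element &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and error &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;.  A dense-matrix sketch (for these Pauli strings the two operators in fact anti-commute on the whole Hilbert space, not only on the code words):&lt;br /&gt;

```python
# Verify that the stabilizer S = Z1 Z2 of the three-qubit bit-flip code
# anti-commutes with the bit-flip error E = X1 (dense 8x8 matrices).
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

S = kron(Z, Z, I)   # stabilizer element Z1 Z2
E = kron(X, I, I)   # bit flip on qubit 1

print(np.allclose(S @ E + E @ S, 0))  # -> True: SE + ES = 0
```

The anti-commutation follows because the two strings differ by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;/&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; mismatch on an odd number of positions (here, just qubit 1).&lt;br /&gt;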
&lt;br /&gt;
This is the basic idea of the stabilizer code construction to be discussed in general in the next section.&lt;br /&gt;
&lt;br /&gt;
===General Stabilizer Formalism===&lt;br /&gt;
&lt;br /&gt;
This brief section provides general definitions and theorems for stabilizer quantum error correcting codes.  The next section provides an explicit example.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Stabilizer Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt; \mathcal{S}\subset \mathcal{P}_n \,\!&amp;lt;/math&amp;gt; be an abelian subgroup of the Pauli group that does not contain &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S}) = \{|\psi\rangle \; |\; S|\psi\rangle=|\psi\rangle, \mbox{ for all } S\in \mathcal{S}\}.\,\!&amp;lt;/math&amp;gt;  Then &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S})\,\!&amp;lt;/math&amp;gt; is a stabilizer code and &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is its stabilizer. &lt;br /&gt;
&lt;br /&gt;
This formalizes what was stated earlier, which is that all states of the code space are eigenstates of the elements of the stabilizer subgroup with eigenvalue +1.  However, it also says more.  It tells us that any subgroup of the Pauli group that is abelian and does not contain the elements &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; can be used to construct a stabilizer code by simply choosing the set of states that are eigenstates with eigenvalue +1.  Another way of saying this is that the states are fixed, or invariant, under the action of the stabilizer elements.  Let us see why the restriction excluding &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; must be included.  Suppose that &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the set &amp;lt;math&amp;gt; \mathcal{S}.\,\!&amp;lt;/math&amp;gt;  Every code state would then have to satisfy &amp;lt;math&amp;gt; -\mathbb{I}|\psi\rangle = |\psi\rangle\,\!&amp;lt;/math&amp;gt;, which only the zero vector does, so the code would contain no states at all.  (The states must be +1 eigenstates of every stabilizer element.)  Now suppose &amp;lt;math&amp;gt;\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the stabilizer subgroup.  Since a subgroup is closed under multiplication, the element squared would also be in the stabilizer.  But the square of either of these is &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt;, which we have just seen cannot be in the set.  Thus none of these elements can be included.&lt;br /&gt;
&lt;br /&gt;
====Encoding/Decoding from Stabilizer Generators====&lt;br /&gt;
&lt;br /&gt;
Once one has obtained the stabilizer subgroup, it remains to find the code words, the states with eigenvalue +1.  To do this, one only needs to ensure that the generators of the stabilizer satisfy this condition, since the generators give all other stabilizer elements through multiplication.  Therefore, if a state has eigenvalue +1 for all generators, it will also have eigenvalue +1 for all stabilizer elements.  &lt;br /&gt;
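The statement that multiplying generators produces the whole stabilizer can be sketched with the standard binary representation of Pauli operators, in which multiplication becomes a bitwise XOR (phases are ignored in this sketch):&lt;br /&gt;

```python
# The full stabilizer group is generated from its generators by multiplication.
# Represent an n-qubit Pauli (ignoring phases) as a binary (x|z) vector of
# length 2n; multiplying two Paulis then XORs their vectors.
from itertools import product

def pauli_to_bits(s):
    """Map a Pauli string over {I, X, Y, Z} to its (x|z) binary vector."""
    x = tuple(int(p in "XY") for p in s)
    z = tuple(int(p in "ZY") for p in s)
    return x + z

def xor(a, b):
    return tuple(u ^ v for u, v in zip(a, b))

# Generators Z1 Z2 and Z2 Z3 of the three-qubit bit-flip code stabilizer.
gens = [pauli_to_bits("ZZI"), pauli_to_bits("IZZ")]

group = set()
for picks in product([0, 1], repeat=len(gens)):   # every subset of generators
    elem = (0,) * 6                               # start from the identity
    for bit, g in zip(picks, gens):
        if bit:
            elem = xor(elem, g)
    group.add(elem)

print(len(group))  # -> 4: the elements I, Z1Z2, Z2Z3, Z1Z3
```

Two independent generators yield a group of order &amp;lt;math&amp;gt;2^2=4\,\!&amp;lt;/math&amp;gt;; the product &amp;lt;math&amp;gt; Z_1Z_3\,\!&amp;lt;/math&amp;gt; appears automatically.&lt;br /&gt;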
&lt;br /&gt;
For smaller codes, finding the set of states could be as easy as satisfying the constraints given by the small number of generators.  Larger, more complicated codes may, however, require a lot of work to find the states.  Cleve and Gottesman gave an algorithm for finding the code words using an efficient gate array obtained from the stabilizer formalism ([http://arxiv.org/abs/quant-ph/9607030 arXiv:quant-ph/9607030]).  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that the decoding and error detection and correction steps also require work to find explicit circuits.  However, for many stabilizer codes, decoding is simply encoding in reverse.  (This is not so for every quantum error correcting code.)  &lt;br /&gt;
&lt;br /&gt;
Although these accomplishments are very important, more work is required to ensure circuits are fault-tolerant---that errors do not propagate or grow as the computation progresses.  If errors were allowed to spread unchecked, the computation would eventually fail.&lt;br /&gt;
&lt;br /&gt;
===A Return to Shor's Code===&lt;br /&gt;
&lt;br /&gt;
Let us consider the set of operators in [[#Table7.1|Table 7.1]], where each operator in a row is included, in order, in the tensor product that forms an element of the Pauli group.  These elements form the eight [[Appendix D - Group Theory#Definition 14: Generators of a Group|generators]] &amp;lt;math&amp;gt;S_i\,\!&amp;lt;/math&amp;gt; of the stabilizer.  The stabilizer subgroup itself is much larger than this set of eight generators; products of the generators yield &amp;lt;math&amp;gt;2^8=256\,\!&amp;lt;/math&amp;gt; elements.  The generators are taken as in the table, but the choice is not unique.  This set is chosen to agree with our earlier choice of measurements.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.1: ''The rows give the Pauli matrices which are included in a tensor product, in order, in an element of the Pauli group.  Each column corresponds to the qubit, q1-q9, on which the operator in that column will act.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|q 8&lt;br /&gt;
|q 9&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_7\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_8\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having the generators of the stabilizer of the code, the objective is to construct the code words, the explicit states that are eigenstates of these operators with eigenvalue +1.  From the first row, qubits one and two must be the same, whether zero or one, so that the parity is even.  From the second row, qubits two and three must be the same, and thus the first three qubits must agree.  Similarly, the middle three and the last three must each agree.  The last two generators state that flipping the first six bits at once leaves the state unchanged, as does flipping the last six bits together.  Thinking in blocks of three (as the first six generators suggest) tells us that the states are symmetric under interchanging zeroes and ones within pairs of triplet blocks.  Choosing the symmetric and anti-symmetric combinations of the block states leads to the Shor code words given in [[#eq7.10|Equation (7.10)]].&lt;br /&gt;
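The construction can be checked numerically.  The following sketch builds the two Shor code words as explicit vectors, assuming the familiar form &amp;lt;math&amp;gt;(|000\rangle\pm|111\rangle)^{\otimes 3}/2\sqrt{2}\,\!&amp;lt;/math&amp;gt; from Equation (7.10), and verifies that all eight generators of Table 7.1 leave them fixed:&lt;br /&gt;

```python
# Check that the Shor code words are +1 eigenstates of all eight
# stabilizer generators in Table 7.1 (dense 2^9-dimensional sketch).
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
P = {"I": I, "X": X, "Z": Z}

def op(s):
    """Tensor product from a 9-character string over I, X, Z."""
    out = np.array([[1.0]])
    for c in s:
        out = np.kron(out, P[c])
    return out

generators = ["ZZIIIIIII", "IZZIIIIII", "IIIZZIIII", "IIIIZZIII",
              "IIIIIIZZI", "IIIIIIIZZ", "XXXXXXIII", "IIIXXXXXX"]

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
t0 = np.kron(np.kron(ket0, ket0), ket0)        # |000>
t1 = np.kron(np.kron(ket1, ket1), ket1)        # |111>
plus = (t0 + t1) / np.sqrt(2)                  # (|000> + |111>)/sqrt(2)
minus = (t0 - t1) / np.sqrt(2)                 # (|000> - |111>)/sqrt(2)
zero_L = np.kron(np.kron(plus, plus), plus)    # logical zero
one_L = np.kron(np.kron(minus, minus), minus)  # logical one

ok = all(np.allclose(op(g) @ psi, psi)
         for g in generators for psi in (zero_L, one_L))
print(ok)  # -> True
```

The &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;-type generators see even parity within each block, and the &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;-type generators pick up two equal signs that cancel, so every generator acts as +1 on both code words.&lt;br /&gt;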
&lt;br /&gt;
&lt;br /&gt;
==CSS codes==&lt;br /&gt;
&lt;br /&gt;
There is a class of quantum error correcting codes called the CSS codes after their inventors  [[Bibliography#CalderbankNShor|Calderbank and Shor]], and [[Bibliography#Steane:prsl|Steane]].   These are also stabilizer codes, but their construction is different and is instructive because of its connection to classical error correction.  Since they are stabilizer codes, the stabilizer formalism and its tools can still be used for encoding and decoding.  &lt;br /&gt;
&lt;br /&gt;
The CSS codes are constructed from two classical linear codes, say &amp;lt;math&amp;gt; \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \mathcal{C}_2\,\!&amp;lt;/math&amp;gt;.  This is done by taking advantage of the parity check matrices of classical coding theory.  In this section, this construction is briefly described.  In the next section, the seven-qubit CSS code is described.  &lt;br /&gt;
&lt;br /&gt;
Recall from the discussion of the [[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor code]] that a phase-flip code can be constructed from a bit-flip code by using Hadamard gates in order to change the basis from &amp;lt;math&amp;gt; |0\rangle,|1\rangle\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; |+\rangle,|-\rangle\,\!&amp;lt;/math&amp;gt;.  Thus all of the error detection and correction can be accomplished by translating from one basis to the other.  &lt;br /&gt;
&lt;br /&gt;
Keeping this in mind, a quantum error correcting code can be constructed from a classical error correcting code using the following trick.  (See [[Bibliography#Steane:prl|Steane]] or the [[Bibliography#Gottesman:rev09|review by Gottesman]].)  Take the classical parity check matrix &amp;lt;math&amp;gt; P_1\,\!&amp;lt;/math&amp;gt; for a classical error correcting &amp;lt;math&amp;gt;[n_1,k_1,d_1]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; operator (matrix), and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;.  This will turn the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_1=(d_1-1)/2\,\!&amp;lt;/math&amp;gt; bit-flip errors, just as the classical code did.  Then, given another classical error correcting &amp;lt;math&amp;gt;[n_2,k_2,d_2]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_2\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; operator (matrix), and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;.  This will turn the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_2=(d_2-1)/2\,\!&amp;lt;/math&amp;gt; phase-flip errors.  This would give a stabilizer code with one possible caveat: the operators in the stabilizer all need to commute with each other.  The way to ensure that the &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt; generators and the &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; generators commute is to combine the codes in a particular way.  &lt;br /&gt;
&lt;br /&gt;
The dual of a code (denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;) is also a code, and it is not too difficult to show that the parity check matrix for &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; is the generator matrix for &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  It turns out that if (and only if) &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, then the two codes combine to produce an &amp;lt;math&amp;gt;[[n,k_1+k_2-n,d]]\,\!&amp;lt;/math&amp;gt; stabilizer code, where &amp;lt;math&amp;gt;d\geq \text{min}(d_1,d_2)\,\!&amp;lt;/math&amp;gt;.  That is, the generators for each of the two codes will commute with each other.  &lt;br /&gt;
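The commutation requirement can be phrased in binary terms: a &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;-type row and an &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;-type row commute precisely when they overlap in an even number of positions.  A sketch checking this for the parity check matrix of the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code, which is dual-containing and is used for the Steane code below:&lt;br /&gt;

```python
# CSS commutation check: Z-type and X-type stabilizer rows built from parity
# check matrices P1, P2 all commute iff P1 @ P2.T = 0 (mod 2).  Here both
# codes are the [7,4,3] Hamming code, so we test P @ P.T mod 2.
import numpy as np

P = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

print(np.all((P @ P.T) % 2 == 0))  # -> True: all generators commute
```

Each pair of rows shares an even number of ones, so every &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;-type generator commutes with every &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;-type generator, as required.&lt;br /&gt;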
&lt;br /&gt;
The two codes, one protecting against bit-flips and one against phase-flips, thus combine to correct any error, including &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; errors, which are composed of both a bit-flip and a phase-flip.  The minimum distance is at least the smaller of the distances of the two codes; it can be higher if the code is degenerate.  &lt;br /&gt;
&lt;br /&gt;
===Steane's Seven Qubit Code===&lt;br /&gt;
&lt;br /&gt;
The seven qubit quantum error correcting code, originally described by Steane, is a member of the class of CSS quantum error correcting codes.  In fact it is the smallest such code, and has  &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;.  It is a &amp;lt;math&amp;gt;[[7,1,3]]\,\!&amp;lt;/math&amp;gt; quantum error correcting code, using 7 qubits to encode one logical (or data) qubit such that one arbitrary error can be detected and corrected.  This code has been studied extensively, since it can be made fault-tolerant (explained below).  &lt;br /&gt;
&lt;br /&gt;
This code is actually based on the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code discussed in [[Appendix F - Classical Error Correcting Codes|Appendix F]].  Let us first recall the parity check matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
P =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
\end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.21}}&lt;br /&gt;
and the generator matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
G =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.22}}&lt;br /&gt;
Relating this back to the stabilizer formalism, the generators can be written using the parity check matrix as described above.  They are given in [[#Table7.2|Table 7.2]].  The first three rows each give the elements of the tensor product, in order, for the stabilizer elements of a code that can protect against bit flips.  The next three give stabilizers for the phase-flip code.  From these one may get the code words.  The logical zero and one are given below.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.2: ''The first three rows give the stabilizers for the bit-flip error correcting code.  The next three are for the phase-flip code. (See also [[#Table7.1|Table 7.1]] for further explanation.)''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Steane's 7-qubit code encodes the logical zero using all even-weight classical code vectors, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|0_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{even }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.23}}&lt;br /&gt;
The odd-weight classical code vectors are used to encode the logical one state,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|1_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{odd }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.24}}&lt;br /&gt;
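To make these sums concrete, one can enumerate the 16 Hamming codewords from the generator matrix above and split them by weight parity; each logical state is an equal superposition of 8 codewords.  A sketch (numpy assumed; qubit 1 is taken as the most significant bit, an illustrative convention):&lt;br /&gt;

```python
from itertools import product
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])

# All 16 codewords of the [7,4,3] Hamming code
codewords = [tuple(np.array(m) @ G % 2) for m in product([0, 1], repeat=4)]
even = [v for v in codewords if sum(v) % 2 == 0]
odd = [v for v in codewords if sum(v) % 2 == 1]
assert len(even) == 8 and len(odd) == 8  # 8 terms in each logical state

def basis_ket(bits):
    """Computational-basis state |b1 b2 ... b7> with bit 1 most significant."""
    psi = np.zeros(2 ** 7)
    psi[int(''.join(str(b) for b in bits), 2)] = 1.0
    return psi

# Logical states of Eqs. (7.23)-(7.24)
zero_L = sum(basis_ket(v) for v in even) / np.sqrt(8)
one_L = sum(basis_ket(v) for v in odd) / np.sqrt(8)
assert np.isclose(zero_L @ zero_L, 1.0)   # normalized
assert np.isclose(zero_L @ one_L, 0.0)    # orthogonal
```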
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Continue to '''Chapter 8 - Decoherence-Free/Noiseless Subsystems''']]&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Skip to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1756</id>
		<title>Chapter 7 - Quantum Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1756"/>
		<updated>2011-11-28T14:06:17Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
If information were stored redundantly in some set of quantum states, then it would be possible to use the redundancy to detect and correct errors.  Quantum error correcting codes aim to encode quantum information into states in just such a redundant fashion.  It is worth noting that classical error correcting codes and coding theory have been around for a long time, and many of the ideas and methods of quantum error correction are imported from classical error correction.  However, quantum error correction requires extra care when measuring to detect and correct errors because superpositions of states must be preserved.  In addition, qubits can experience errors that classical bits cannot.  (For example, there is no phase-flip error on a classical bit.)  This chapter contains an introduction to quantum error correction including simple examples of quantum error correcting codes.   &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Classical Code===&lt;br /&gt;
&lt;br /&gt;
Let us first consider a simple example of a classical error correcting code.  Consider a signal which is comprised only of zeroes and ones.  (For most of these notes, these are the only types of signals: bits and their quantum analogue, qubits.)  An error in a sequence of zeroes and ones occurs if the sender sends a 1 and the receiver receives a 0 for one element of the sequence, or the sender sends a 0 and the receiver receives a 1.  In other words, for this type of encoding, an error is a &amp;quot;classical bit-flip error&amp;quot; which turns a 0 into a 1 and a 1 into a 0.  A simple example of a classical error correcting code which protects against such bit-flip errors is the following.  Rather than use the state 0, the state is encoded redundantly: the state 000 is used.  This is called an encoded zero state or a logical zero state.  Likewise, 111 is used as an encoded 1, or logical 1.  Now suppose one bit is flipped when the encoded state 111 is sent, say the first, so that 011 is received.  If one (and only one) of the bits is flipped, the encoded state can be recovered by flipping the outlier so that it agrees with the other two.  &lt;br /&gt;
&lt;br /&gt;
Let us assume that errors on different bits are independent and that each occurs with probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;.  The probability that a given bit is not flipped is then &amp;lt;math&amp;gt;1-p\,\!&amp;lt;/math&amp;gt;.  The probability that exactly two of the three bits are flipped is &amp;lt;math&amp;gt;3(1-p)p^2\,\!&amp;lt;/math&amp;gt;, where the factor of 3 counts which pair is flipped, and the probability that all three are flipped is &amp;lt;math&amp;gt;p^3\,\!&amp;lt;/math&amp;gt;.  Since two or three flips lead majority voting to the wrong answer, the code will help us if &amp;lt;math&amp;gt;p &amp;gt; 3(1-p)p^2 +p^3\,\!&amp;lt;/math&amp;gt;, which happens when &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
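The threshold at one half is easy to check numerically.  In the sketch below (numpy assumed; the sample probabilities are arbitrary illustrative values), the encoded failure probability is compared with the bare probability of a single bit flip:&lt;br /&gt;

```python
import numpy as np

# Sample error probabilities on either side of the threshold p = 1/2
p = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.45, 0.55, 0.6, 0.8, 0.95])

unencoded = p                          # failure probability of a bare bit
encoded = 3 * (1 - p) * p**2 + p**3    # two or three of the three bits flip

# The repetition code lowers the failure probability exactly when p < 1/2
helps = encoded < unencoded
assert np.all(helps == (p < 0.5))
```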
&lt;br /&gt;
This example will be used below to find a simple bit-flip code for a quantum system.&lt;br /&gt;
&lt;br /&gt;
===Further Reading===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Appendix F - Classical Error Correcting Codes|Appendix F]] contains a brief introduction to classical error correction.  Many of the concepts and definitions in that appendix will be helpful for understanding the material in this chapter.  However, the chapter itself is somewhat self-contained.  When more explanation is required or desired, it will likely be helpful to read or reread [[Appendix F - Classical Error Correcting Codes|Appendix F]] and/or consult the references there.&lt;br /&gt;
&lt;br /&gt;
==Shor's Nine-Qubit Quantum Error Correcting Code==&lt;br /&gt;
&lt;br /&gt;
Shor's nine-qubit quantum error correcting code is important for several reasons.  Historically, it is important because it provides the first example of a quantum error correcting code which, in principle, can correct arbitrary single-qubit errors.  Pedagogically, it is important because it is an example which can be understood in terms of the simple classical error correcting code given above.  It also uses many of the standard assumptions of more general quantum error correcting codes.  Therefore, it is presented as our first quantum error correcting code and, as will be seen later, an example of what is called a stabilizer code, which is a very general category.  &lt;br /&gt;
&lt;br /&gt;
The Shor code is introduced in parts, bit-flip and phase-flip, and then in its entirety.  Since  the phase-flip code follows from the bit-flip code (as discussed below), the bit-flip code is discussed in great detail.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Quantum Code===&lt;br /&gt;
&lt;br /&gt;
The quantum bit-flip code uses three physical qubits to encode one logical qubit, just as the classical bit-flip code above uses three bits.  The state &amp;lt;math&amp;gt;  \left\vert 0\right\rangle \otimes \left\vert 0\right\rangle\otimes \left\vert 0\right\rangle = \left\vert 000\right\rangle = \left\vert 0_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is the logical state representing the zero state of the encoded qubit.  (The subscript L indicates that it is a logical state and the b indicates that it is a bit-flip code.  We will see below why this distinction is helpful.)  Similarly, &amp;lt;math&amp;gt;\left\vert 111\right\rangle = \left\vert 1_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is used for the logical one state.  &lt;br /&gt;
&lt;br /&gt;
====Encoding the Logical State====&lt;br /&gt;
&lt;br /&gt;
Note that one cannot just clone a state to produce redundancy due to the [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|No-Cloning Theorem]].  Also, the encoded state needs to preserve superpositions such as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle =  \alpha\left|0\right\rangle + \beta\left|1\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.1}}&lt;br /&gt;
To encode the state redundantly, cloning is not required.  The encoding can be accomplished using the &amp;lt;math&amp;gt; CNOT \,\!&amp;lt;/math&amp;gt; gate twice.  Simply apply &amp;lt;math&amp;gt; CNOT_{13} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; CNOT_{12} \,\!&amp;lt;/math&amp;gt; to the following state of three qubits,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle\left|00\right\rangle =  (\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This will produce  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi_L\right\rangle =  \alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle = \alpha\left|000\right\rangle + \beta\left|111\right\rangle . &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.2}}&lt;br /&gt;
The circuit diagram for this is given in [[#Figure 7.1|Figure 7.1]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccencode.jpg|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.1:  Circuit diagram for encoding a qubit into a 3-qubit bit-flip protected code.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
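The encoding circuit can be simulated directly on state vectors.  In this sketch (numpy assumed; qubit 1 is the most significant bit, and the amplitudes 0.6 and 0.8 are an arbitrary normalized example), the two CNOT gates turn &amp;lt;math&amp;gt;(\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle\,\!&amp;lt;/math&amp;gt; into the encoded state of [[#eq7.2|Equation (7.2)]]:&lt;br /&gt;

```python
import numpy as np

def cnot(n, control, target):
    """CNOT on n qubits (qubit 1 = most significant bit), as a 2^n x 2^n matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - q)) & 1 for q in range(1, n + 1)]
        if bits[control - 1]:
            bits[target - 1] ^= 1
        j = int(''.join(str(b) for b in bits), 2)
        U[j, i] = 1.0
    return U

alpha, beta = 0.6, 0.8                        # arbitrary normalized amplitudes
psi = np.kron([alpha, beta], [1, 0, 0, 0])    # (a|0> + b|1>) |00>

# Apply CNOT_13 and then CNOT_12, both controlled on the data qubit
encoded = cnot(3, 1, 2) @ cnot(3, 1, 3) @ psi

expected = np.zeros(8)
expected[0b000], expected[0b111] = alpha, beta  # a|000> + b|111>, Eq. (7.2)
assert np.allclose(encoded, expected)
```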
&lt;br /&gt;
====Error Syndrome Extraction====&lt;br /&gt;
&lt;br /&gt;
Now a method for measurement and recovery is needed.  &lt;br /&gt;
The problem is that in quantum mechanics one cannot just measure the three states to see if they agree; a quantum state can be in a superposition of the (logical) zero state and the (logical) one state as above, and &lt;br /&gt;
a measurement of the first qubit to see if it is in the state zero or not will immediately produce the state &amp;lt;math&amp;gt; \left| 000 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\alpha|^2 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left| 111 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\beta|^2 \,\!&amp;lt;/math&amp;gt;, thus destroying the superposition of the qubit state.  The state would then be one that can be described as containing only classical information.  (Essentially it is equivalent to the classical 000 or 111 binary state.)  Since we need to preserve arbitrary superpositions, we cannot use this method for determining whether or not an error occurred.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that a bit-flip error occurs on &amp;lt;math&amp;gt;\left|\psi_L\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
The objective is to determine if the state has experienced a bit-flip error or not without ruining the superposition and, if it has an error, to determine which qubit experienced the error. This can be done by checking to see if the first two qubits are the same or not and then checking to see if the last two qubits are the same or not without ever determining whether the state is the logical zero, logical one, or a superposition of the two.  &lt;br /&gt;
&lt;br /&gt;
Let us examine this process in detail.  First, notice the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue 1 and &amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue -1.  Then any logical state is an eigenstate of the operator &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; with eigenvalue of 1 if the first two qubits are the same and -1 if they differ.  For example, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle = (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|000\right\rangle + \beta\left|111\right\rangle) = (1)(\alpha\left|000\right\rangle + \beta\left|111\right\rangle).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.3}}&lt;br /&gt;
Of course the same is also true for the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt;.  However, suppose that a bit-flip error occurs on the first qubit, giving &amp;lt;math&amp;gt; (\sigma_x\otimes I\otimes I) \left\vert\psi_L\right\rangle = (\sigma_x\otimes I\otimes I)(\alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle) = \alpha\left|100\right\rangle + \beta\left|011\right\rangle\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle &amp;amp;= (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|100\right\rangle + \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp;= (-\alpha\left|100\right\rangle - \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp; = (-1)(\alpha\left|100\right\rangle + \beta\left|011\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.4}}&lt;br /&gt;
Notice that, in principle, we need not determine either &amp;lt;math&amp;gt; \alpha\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; \beta\,\!&amp;lt;/math&amp;gt;, yet the error can still be detected.  Measuring the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt; shows that the last two qubits agree, so the error must have occurred on the first qubit.  In fact, it is not difficult to convince yourself that measuring these two operators will determine which of the three qubits experienced a bit-flip.  Just as for the classical bit-flip code, this will not correctly diagnose an error on two qubits; thus the probability of multiple errors must be small, just as in the classical case.&lt;br /&gt;
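The syndrome logic just described can be tabulated numerically.  The sketch below (numpy assumed; the amplitudes 0.6 and 0.8 are an arbitrary example) computes the eigenvalues of &amp;lt;math&amp;gt;\sigma_z\otimes\sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;I\otimes\sigma_z\otimes\sigma_z\,\!&amp;lt;/math&amp;gt; for the uncorrupted state and for a bit flip on each qubit; the four syndrome pairs are distinct, so the flipped qubit is identified without learning the amplitudes:&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

ZZI = kron3(Z, Z, I)
IZZ = kron3(I, Z, Z)

# Encoded state a|000> + b|111> (qubit 1 is the most significant bit)
psi = np.zeros(8)
psi[0b000], psi[0b111] = 0.6, 0.8

errors = {'none': kron3(I, I, I), 'X1': kron3(X, I, I),
          'X2': kron3(I, X, I), 'X3': kron3(I, I, X)}

def eigenvalue(S, v):
    """v is a normalized eigenvector of S; return its eigenvalue, +1 or -1."""
    return round(float(v @ (S @ v)))

syndromes = {name: (eigenvalue(ZZI, E @ psi), eigenvalue(IZZ, E @ psi))
             for name, E in errors.items()}

# Each single-qubit bit flip produces a distinct syndrome pair
assert syndromes == {'none': (1, 1), 'X1': (-1, 1),
                     'X2': (-1, -1), 'X3': (1, -1)}
```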
&lt;br /&gt;
Now, we have the idea that we could determine the parity of the pairs of qubits to determine if they are the same or different.  But how would we determine this in practice?  A method for doing this is shown in [[#Figure 7.2|Figure 7.2]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccSyndrome.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.2: A method for extracting a bit-flip error syndrome from a 3-qubit bit-flip protected code.  The M's are measurements on the ancillary qubits, the results of which are recorded as &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Figure 7.2|Figure 7.2]] gives a circuit for determining the error, also known as a syndrome measurement.  In this example, a bit-flip error occurred on qubit 1 in the 3 qubit QECC.  This is represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate.  After 4 CNOT gates, the two ancillary qubits are measured.  A measurement in the &amp;lt;math&amp;gt;|0\rangle, |1\rangle\,\!&amp;lt;/math&amp;gt; basis gives a result of &amp;lt;math&amp;gt;|1\rangle\,\!&amp;lt;/math&amp;gt; for the top ancillary qubit and &amp;lt;math&amp;gt;|0\rangle\,\!&amp;lt;/math&amp;gt; for the bottom one.  This tells us that the first qubit has had a bit-flip error.  We then feed this information back into the system by implementing an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, thus correcting the error.  &lt;br /&gt;
&lt;br /&gt;
Notice that we have not determined the coefficients of the superposition of the logical zero and logical one states.  We have only determined that there was an error on the first qubit since it does not agree with the other two.  (Assuming that only one bit-flip error could have occurred.)&lt;br /&gt;
&lt;br /&gt;
====Continuous Sets of Errors====&lt;br /&gt;
&lt;br /&gt;
The error, in this case represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate, is not very realistic.  What would be more realistic is that the bit is not flipped completely; it is in a superposition of the zero state and one state.  In other words, we should properly consider the following state, where an error has occurred on the first qubit:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle &amp;amp;=  \alpha(a\left|0\right\rangle + b\left|1\right\rangle) \left|00\right\rangle + \beta(b\left|0\right\rangle + a\left|1\right\rangle)\left|11\right\rangle \\&lt;br /&gt;
 &amp;amp;= \alpha a\left|000\right\rangle + \alpha b\left|100\right\rangle + \beta b\left|011\right\rangle + \beta a\left|111\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.5}}&lt;br /&gt;
This is a rotation about the x-axis by an arbitrary angle with &lt;br /&gt;
&amp;lt;math&amp;gt;a=\cos \theta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b=i\sin\theta\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  Now suppose that two ancillary qubits are attached to the state&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle = \alpha a\left|00000\right\rangle + \alpha b\left|10000\right\rangle + \beta b\left|01100\right\rangle + \beta a\left|11100\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.6}}&lt;br /&gt;
and the resulting state is put into the circuit that gives the error syndrome given in [[#Figure 7.2|Figure 7.2]].  Let &lt;br /&gt;
&amp;lt;math&amp;gt;V = CNOT_{1{a_1}} CNOT_{2{a_1}} CNOT_{2{a_2}} CNOT_{3{a_2}}\,\!&amp;lt;/math&amp;gt;. Then   &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
 V\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle &amp;amp;= (\alpha a\left|00000\right\rangle + \alpha b\left|10010\right\rangle + \beta b\left|01110\right\rangle + \beta a\left|11100\right\rangle) \\&lt;br /&gt;
           &amp;amp;= (\alpha \left|000\right\rangle + \beta\left|111\right\rangle)a\left\vert 00\right\rangle +(\alpha\left|100\right\rangle + \beta \left|011\right\rangle)b\left\vert 10\right\rangle  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.7}},&lt;br /&gt;
where the two ancillary qubits, denoted &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt; (for the first ancillary qubit which is on top in [[#Figure 7.2|Figure 7.2]]) and &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt; (for the second ancillary qubit which is on bottom in [[#Figure 7.2|Figure 7.2]]), will give the error syndrome.  The measurement of the second ancillary qubit always gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt;.  The measurement of the first gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|a|^2\,\!&amp;lt;/math&amp;gt; and, if this occurs, the system will be in its original state and there is no error.  However, if the measurement of the first ancillary qubit gives &amp;lt;math&amp;gt;\left|1\right\rangle\,\!&amp;lt;/math&amp;gt;, which it will with probability &amp;lt;math&amp;gt;|b|^2,\,\!&amp;lt;/math&amp;gt; then the system is left in the state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\alpha\left|100\right\rangle + \beta\left|011\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.8}}&lt;br /&gt;
This indicates that a bit-flip error has occurred on the first qubit.  Such an error is easily corrected with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, which will flip it.  &lt;br /&gt;
&lt;br /&gt;
Therefore any single-qubit bit-flip error can be corrected, since we will project into the basis of one bit-flip error and the syndrome measurement indicates which one.  In other words, we have made the error discrete using a projective measurement of the ancilla.&lt;br /&gt;
&lt;br /&gt;
===Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Phase-flip errors&amp;quot; are errors which change the sign of the &amp;lt;math&amp;gt; \left| 1\right\rangle\,\!&amp;lt;/math&amp;gt; state.  This is not a classical error as it does not occur on a classical bit.  However, it does occur on qubits that are not in the zero state.  Thus these errors must be treated.   &lt;br /&gt;
&lt;br /&gt;
Much of what works for the bit-flip errors also works for phase-flip errors once we are able to encode properly.  Let us consider the following states that we will use to encode our logical qubit: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \pm\right\rangle = \frac{1}{\sqrt{2}}(\left\vert 0 \right\rangle \pm \left\vert 1\right\rangle). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.9}}&lt;br /&gt;
In this case, when a &amp;quot;phase-flip&amp;quot; occurs, the &amp;lt;math&amp;gt; \left\vert + \right\rangle \,\!&amp;lt;/math&amp;gt; becomes a &amp;lt;math&amp;gt; \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt; or vice versa.  Therefore it is similar to the bit-flip error since there are two orthogonal states that are changed into one another by the error.  In this case the error operator is of the form &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt;.  As before, we can encode redundantly by letting &amp;lt;math&amp;gt; \left\vert 0_{pL} \right\rangle = \left\vert +++ \right\rangle \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left\vert 1_{pL} \right\rangle = \left\vert --- \right\rangle\,\!&amp;lt;/math&amp;gt;.  It is easy to see that this code enables the detection and correction of one phase error just as the bit-flip code did for one bit-flip.  We simply exchange each &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; in the bit-flip code for a &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt;, and the process carries through as before.&lt;br /&gt;
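The phase-flip analogue of the parity checks can be verified in the same way as for the bit-flip code.  In this sketch (numpy assumed), &amp;lt;math&amp;gt;\sigma_x\otimes\sigma_x\otimes I\,\!&amp;lt;/math&amp;gt; has eigenvalue +1 on &amp;lt;math&amp;gt;\left\vert +++ \right\rangle\,\!&amp;lt;/math&amp;gt; but eigenvalue -1 once a &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt; error hits the first qubit:&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
plus = np.array([1., 1.]) / np.sqrt(2)   # |+> state

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

XXI = kron3(X, X, I)

psi = np.kron(plus, np.kron(plus, plus))  # |+++> = |0_pL>
err = kron3(Z, I, I) @ psi                # phase flip on qubit 1 gives |-++>

# X|+> = |+> and X|-> = -|->, so the XXI eigenvalue flips from +1 to -1
assert np.isclose(psi @ (XXI @ psi), 1.0)
assert np.isclose(err @ (XXI @ err), -1.0)
```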
&lt;br /&gt;
===Bit-flip and Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
Certainly if a phase-flip error does not have a classical analogue then the combination of bit- and phase-flip errors also does not.  It turns out that by having found a code that will protect against bit-flip errors and another against phase-flip errors, we are able to write down a code that will protect against both.  This was first given by Peter Shor [[Bibliography#Shor:QECC|Shor:1995]], but was also described by Carlton Caves in a very readable paper, [[Bibliography#Caves:QECC|Caves:1999]].  &lt;br /&gt;
&lt;br /&gt;
The way to protect against both is to combine the two codes and take the logical qubits to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle)\\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.10}}&lt;br /&gt;
One may also write this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \left\vert +_{bL}\right\rangle \left\vert +_{bL}\right\rangle  \left\vert +_{bL}\right\rangle \\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \left\vert -_{bL}\right\rangle  \left\vert -_{bL}\right\rangle \left\vert -_{bL}\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.11}}&lt;br /&gt;
&lt;br /&gt;
This shows that there is a code which protects against bit-flip errors and phase-flip errors by using a redundant encoding comprised of the states that protect against bit flips and the states that protect against phase flips.&lt;br /&gt;
&lt;br /&gt;
==Quantum Error Correcting Codes: General Properties==&lt;br /&gt;
&lt;br /&gt;
Now that we have seen some examples of quantum error correcting codes, some natural questions come to mind.  Are there general rules for constructing quantum error correcting codes?  In the case of classical codes, the disjointness condition and the Hamming bound let us know when it is not possible to construct a code with given parameters.  Here, the two analogues for quantum error correcting codes are given, although the disjointness condition is quite different for quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===The Quantum Error Correcting Code Condition===&lt;br /&gt;
&lt;br /&gt;
Let us consider a quantum system undergoing some noisy evolution.  As described in [[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|Section 6.2]] and [[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Section 6.3]], such an open-system evolution can be described by a quantum operation acting on a density operator,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\rho^\prime= \sum_\alpha A_\alpha \rho A_\alpha^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.12}}&lt;br /&gt;
The operator elements &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; can be used to express what is known as the quantum error correcting code condition&lt;br /&gt;
(See [[Bibliography#NielsenChuang:book|Nielsen and Chuang]],  or [[Bibliography#Nielsen/etal|Nielsen, et al:97]] for the original reference), &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
P  A^\dagger_\beta  A_\alpha P = d_{\alpha\beta}P, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.13}}&lt;br /&gt;
where the &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; are the operators from the operator-sum representation, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is a projector onto the code space.   &lt;br /&gt;
An equivalent expression is (see [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]]),&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle = c_{\alpha\beta}\delta_{ij}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.14}}&lt;br /&gt;
This is the quantum analogue of the [[Appendix F - Classical Error Correcting Codes#eqF.5|disjointness condition]] for classical error correcting codes.  To interpret this, consider [[#eq7.14|Equation (7.14)]].  This says that if one error &amp;lt;math&amp;gt;A_\beta\,\!&amp;lt;/math&amp;gt; acts on a logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; and another error (or possibly the same error) &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; acts on a different logical state &amp;lt;math&amp;gt;|j_L\rangle\,\!&amp;lt;/math&amp;gt;, then the two resulting states cannot overlap.  If there were overlap, there would be some probability for a measurement to produce an ambiguous result.  It also tells us that the same pair of errors acting on different logical states &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; produces the same overlap &amp;lt;math&amp;gt;c_{\alpha\beta}\,\!&amp;lt;/math&amp;gt;.  This is allowed by the superposition principle, but not something one finds in classical error correction.  Therefore, the analogy with the classical disjointness condition is very loose.  (See  [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]] for further explanation.)  &lt;br /&gt;
&lt;br /&gt;
One way to understand [[#eq7.13|Equation (7.13)]] is to show [[#eq7.14|Equation (7.14)]] is true if and only if [[#eq7.13|Equation (7.13)]] is true.  However, these results can be seen as part of a broader and more basic property of quantum systems related to the reversibility of a quantum operation as discussed by [[Bibliography#Nielsen/etal|Nielsen, et al:97]].&lt;br /&gt;
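To make these conditions concrete, here is a minimal numerical check of Equation (7.14) for the three-qubit bit-flip code of Section 7.2.1, taking the correctable error set to be no error or a single bit flip.  This sketch is not part of the original text; it assumes Python with numpy, and the helper names (`kron`, `logical`) are ours.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Logical states of the three-qubit bit-flip code: |0_L> = |000>, |1_L> = |111>.
zero_L = np.zeros(8); zero_L[0] = 1.0   # |000>
one_L  = np.zeros(8); one_L[7] = 1.0    # |111>
logical = [zero_L, one_L]

# Correctable error set: no error, or a bit flip on one of the three qubits.
errors = [kron(I, I, I), kron(X, I, I), kron(I, X, I), kron(I, I, X)]

# Check <i_L| A_b^dag A_a |j_L> = c_{ab} delta_{ij} for every pair of errors.
for Ab in errors:
    for Aa in errors:
        M = Ab.conj().T @ Aa
        # Off-diagonal matrix elements must vanish ...
        assert abs(logical[0] @ M @ logical[1]) < 1e-12
        # ... and the diagonal must not depend on the logical state.
        c00 = logical[0] @ M @ logical[0]
        c11 = logical[1] @ M @ logical[1]
        assert abs(c00 - c11) < 1e-12
print("Knill-Laflamme conditions hold for the bit-flip error set")
```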
&lt;br /&gt;
===A Basis for Errors===&lt;br /&gt;
&lt;br /&gt;
Using the Pauli matrices and the identity for the errors, any error can be described as a tensor product of operators.  Each term in the tensor product will involve one of four operators, &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;, where the identity &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt; indicates that no error has occurred.  (See [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.5]].)  For example, suppose a code involves five qubits, and suppose no error occurs on qubit 1, a bit-flip error occurs on qubits 2 and 3, a phase error occurs on qubit 4, and qubit 5 is affected by both types of errors.  The corresponding error operator is &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\otimes X_2\otimes X_3 \otimes Z_4 \otimes Y_5&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.15}}&lt;br /&gt;
or, using a short-hand notation, &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
X_2 X_3 Z_4 Y_5.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.16}}&lt;br /&gt;
This error operator is said to have weight four, since four of the five factors in the tensor product are not the identity.  &lt;br /&gt;
&lt;br /&gt;
====Definition 1: weight of an operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This provides us with a basis for all errors that can occur.  Correcting this discrete set is enough, since arbitrary continuous errors are made discrete by the syndrome measurement process.&lt;br /&gt;
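As a small illustration of Definition 1, the weight of a Pauli operator written as a string of factors can be computed by counting its non-identity entries.  This is an illustrative sketch assuming Python; the function name `pauli_weight` is ours.

```python
def pauli_weight(pauli_string):
    """Weight of a Pauli operator written as a string, e.g. 'IXXZY'.

    The weight is the number of non-identity factors in the tensor product.
    """
    return sum(1 for p in pauli_string if p != 'I')

# The error of Equation (7.16), I (x) X (x) X (x) Z (x) Y, acts
# non-trivially on four of the five qubits:
print(pauli_weight("IXXZY"))  # -> 4
```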
&lt;br /&gt;
====Definition 2: Distance of a Quantum Error Correcting Code====&lt;br /&gt;
&lt;br /&gt;
The distance of a quantum error correcting code is the minimum weight, greater than zero, of an element &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; of the Pauli group such that the quantum error correcting code condition fails (i.e., such that &amp;lt;math&amp;gt;\langle i_L |G|j_L \rangle = c\delta_{ij}\,\!&amp;lt;/math&amp;gt; is not satisfied).&lt;br /&gt;
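Definition 2 can be checked by brute force for small codes.  The sketch below (illustrative, assuming Python with numpy; not from the original text) searches Pauli operators in order of increasing weight on the three-qubit bit-flip code and finds that a weight-one operator, &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; on the first qubit, already violates the condition: that code has distance 1 with respect to arbitrary errors, since it cannot detect phase errors.

```python
import itertools
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def operator(string):
    """Tensor product operator for a Pauli string such as 'ZII'."""
    out = np.array([[1.0 + 0j]])
    for p in string:
        out = np.kron(out, paulis[p])
    return out

zero_L = np.zeros(8); zero_L[0] = 1.0   # |000>
one_L  = np.zeros(8); one_L[7] = 1.0    # |111>

def condition_holds(G):
    """<i_L|G|j_L> = c delta_ij: off-diagonal zero, diagonal independent of i."""
    off = zero_L @ G @ one_L
    diag_diff = zero_L @ G @ zero_L - one_L @ G @ one_L
    return abs(off) < 1e-12 and abs(diag_diff) < 1e-12

# Search Pauli strings by increasing weight for the first violation.
done = False
for weight in range(1, 4):
    for positions in itertools.combinations(range(3), weight):
        for letters in itertools.product('XYZ', repeat=weight):
            s = ['I'] * 3
            for q, p in zip(positions, letters):
                s[q] = p
            if not condition_holds(operator(''.join(s))):
                print("distance =", weight, "witness:", ''.join(s))
                done = True
                break
        if done:
            break
    if done:
        break
# -> distance = 1 witness: ZII
```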
&lt;br /&gt;
===Quantum Error Correction for &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; Errors===&lt;br /&gt;
&lt;br /&gt;
A quantum error correcting code that uses &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; qubits to encode &amp;lt;math&amp;gt;k\;\!&amp;lt;/math&amp;gt; logical qubits and can correct up to &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors is denoted &amp;lt;math&amp;gt;[[n,k,2t+1]]\;\!&amp;lt;/math&amp;gt;.  This is similar to the classical code notation except that double brackets are used to distinguish the quantum code from the corresponding classical code.  Using &amp;lt;math&amp;gt;d=2t+1\;\!&amp;lt;/math&amp;gt;, this is also written &amp;lt;math&amp;gt;[[n,k,d]]\;\!&amp;lt;/math&amp;gt;.  When a code satisfies the more restrictive condition &amp;lt;math&amp;gt;c_{\alpha\beta}=0\;\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\alpha\neq\beta\;\!&amp;lt;/math&amp;gt; in [[#eq7.14|Equ. (7.14)]], the code is called non-degenerate.  Note that in [[#eq7.14|Equ. (7.14)]] the set of errors to be corrected is given by the operator elements of the operator-sum representation.  It turns out that this set of errors can be chosen to be described by an orthogonal basis.  This is done using the unitary degree of freedom in the operator-sum representation from [[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Section 6.4]].  [[Bibliography#NielsenChuang:book|Nielsen and Chuang]] use this to show that the conditions [[#eq7.13|Equ. (7.13)]] are necessary and sufficient for the existence of a quantum error correcting code.  Thus the necessary and sufficient conditions for being able to correct &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors are given by [[#eq7.13|Equ. (7.13)]], or equivalently, [[#eq7.14|Equ. (7.14)]].&lt;br /&gt;
&lt;br /&gt;
===The Quantum Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
Like the classical Hamming bound ([[Appendix F - Classical Error Correcting Codes#The Hamming Bound|Section F.4]]), the quantum Hamming bound is a simple bound on the size of the code for correcting a given number of errors.  In other words, it provides a bound on the rate of the code, &amp;lt;math&amp;gt;k/n\;\!&amp;lt;/math&amp;gt;.  The main difference from the classical case is that each qubit is subject to three types of errors, given by the three Pauli matrices &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;.  The number of possible error operators of weight &amp;lt;math&amp;gt;t \;\!&amp;lt;/math&amp;gt; acting on a code of &amp;lt;math&amp;gt;n \;\!&amp;lt;/math&amp;gt; qubits is therefore &amp;lt;math&amp;gt;3^t C(n,t)\;\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;C(n,t)\;\!&amp;lt;/math&amp;gt; is the binomial coefficient.  Since every logical state, and every logical state with a correctable error acting on it, must be mutually orthogonal, the number of such states can be at most the dimension of the Hilbert space, which is &amp;lt;math&amp;gt;2^n\;\!&amp;lt;/math&amp;gt;.  That is,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
m\sum_{i=0}^t 3^i C(n,i) \leq 2^n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.17}}&lt;br /&gt;
where &amp;lt;math&amp;gt;m\;\!&amp;lt;/math&amp;gt; is the number of code words.&lt;br /&gt;
&lt;br /&gt;
Just as in the classical case, when &amp;lt;math&amp;gt;m= 2^k\;\!&amp;lt;/math&amp;gt;, we may take the logarithm of the equation along with &amp;lt;math&amp;gt;n,t \;\!&amp;lt;/math&amp;gt; large to get&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
k/n\leq 1-(t/n)\log 3-H(t/n),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H(x) = -x\log x -(1-x)\log(1-x)\;\!&amp;lt;/math&amp;gt; and all logarithms are base 2.  &lt;br /&gt;
&lt;br /&gt;
[[#eq7.17|Equation (7.17)]] tells us that the smallest possible code protecting one logical qubit against one arbitrary error uses five physical qubits.  (Here &amp;lt;math&amp;gt;m=2\;\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;k=1\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;t=1\;\!&amp;lt;/math&amp;gt;, so the smallest &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; satisfying the bound is &amp;lt;math&amp;gt;n=5\;\!&amp;lt;/math&amp;gt;.)&lt;br /&gt;
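A short script makes the bound in Equation (7.17) easy to explore.  This is an illustrative sketch assuming Python 3.8+ (for `math.comb`); the function name `hamming_bound_ok` is ours.

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Quantum Hamming bound, Equation (7.17), with m = 2**k code words."""
    return 2**k * sum(3**i * comb(n, i) for i in range(t + 1)) <= 2**n

# Smallest n encoding k=1 logical qubit with protection against t=1 error:
n = 1
while not hamming_bound_ok(n, 1, 1):
    n += 1
print(n)  # -> 5
```

For &amp;lt;math&amp;gt;n=5\;\!&amp;lt;/math&amp;gt; the bound is saturated: &amp;lt;math&amp;gt;2(1+3\cdot 5)=32=2^5\;\!&amp;lt;/math&amp;gt;, which is why the five-qubit code is called perfect.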
&lt;br /&gt;
==Stabilizer Codes==&lt;br /&gt;
&lt;br /&gt;
The mathematical definition of a stabilizer is given in [[Appendix D - Group Theory#Definition 10: Stabilizer|Section D.6.1]].  Loosely speaking, it is a subgroup of transformations that leave a particular point in space fixed.  The theory of stabilizer codes is based on this notion.  &lt;br /&gt;
&lt;br /&gt;
Stabilizer codes are a family of quantum error correcting codes which are described using the stabilizer of a set of states in the Hilbert space.  They are distinguished for several reasons.  First, they form a large class of quantum error correcting codes.  Second, they are conveniently described by their operators rather than their states, a description that carries over to many other quantum error correcting codes.  Other reasons will be discussed later.&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
We will begin by revisiting the three-qubit quantum error correcting code presented in some detail in [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]].  Recall that a bit-flip error on one of the three qubits forming the logical qubit is detectable by measuring the parity of pairs of qubits.  These operators could be chosen to be &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt;, although any two distinct pairs would work.  Note that the basis states  &amp;lt;math&amp;gt; |000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |111\rangle\,\!&amp;lt;/math&amp;gt;, as well as any linear combination of these states, are eigenstates of these operators with eigenvalue +1.  The states with a single correctable error are also eigenstates of these operators, but at least one eigenvalue will be -1.  The operators that give one of the single qubit bit-flip errors are &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;.  This is the idea behind stabilizer quantum error correcting codes.  The stabilizers act as parity checks on the code words.  &lt;br /&gt;
&lt;br /&gt;
The stabilizer is an abelian subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of the [[Appendix D - Group Theory#Definition 12: Pauli Group|Pauli group]] (abelian means all elements commute with each other).  However, the error operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt; each anti-commute with at least one element of the stabilizer.   So saying that states with errors are eigenstates of the stabilizers with eigenvalue -1 is equivalent to saying that at least one stabilizer operator anti-commutes with the error operator. &lt;br /&gt;
&lt;br /&gt;
The elements of the stabilizer stabilize the code words; that is, code words are eigenstates of the stabilizer operators with eigenvalue +1, while states with errors have eigenvalue -1, and for this class of quantum error correcting codes this can always be arranged.  Note that if &amp;lt;math&amp;gt;|\psi\rangle\,\!&amp;lt;/math&amp;gt; is a code word, &amp;lt;math&amp;gt;S\in \mathcal{S}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; is an error operator, then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
SE|\psi\rangle &amp;amp;= S|\psi^\prime\rangle =(-1)|\psi^\prime\rangle \\&lt;br /&gt;
ES|\psi\rangle &amp;amp;= E|\psi\rangle = |\psi^\prime\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.19}}&lt;br /&gt;
or&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(-1)SE|\psi\rangle = ES|\psi\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.20}}&lt;br /&gt;
This says that &amp;lt;math&amp;gt;SE + ES =0\,\!&amp;lt;/math&amp;gt; when acting on the code words.  In other words, &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; anti-commute whenever &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; maps a state stabilized by &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; to a state with eigenvalue -1.&lt;br /&gt;
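The anti-commutation just described can be verified directly for the three-qubit code, with the stabilizer &amp;lt;math&amp;gt;Z_1Z_2\,\!&amp;lt;/math&amp;gt; and the error &amp;lt;math&amp;gt;X_1\,\!&amp;lt;/math&amp;gt;.  A minimal sketch, assuming Python with numpy (not part of the original text):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

S = kron3(Z, Z, I)   # stabilizer Z1 Z2 of the three-qubit code
E = kron3(X, I, I)   # bit-flip error X1

# S and E share one qubit on which X and Z anti-commute, so SE + ES = 0
# as operators, and in particular on every code word:
assert np.allclose(S @ E + E @ S, 0)

psi = np.zeros(8); psi[0] = 1.0                # code word |000>
assert np.allclose(S @ psi, psi)               # S stabilizes the code word ...
assert np.allclose(S @ (E @ psi), -(E @ psi))  # ... and flags the error with -1
print("SE = -ES, and the corrupted state has eigenvalue -1")
```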
&lt;br /&gt;
This is the basic idea of the stabilizer code construction to be discussed in general in the next section.&lt;br /&gt;
&lt;br /&gt;
===General Stabilizer Formalism===&lt;br /&gt;
&lt;br /&gt;
This brief section provides general definitions and theorems for stabilizer quantum error correcting codes.  The next section provides an explicit example.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Stabilizer Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt; \mathcal{S}\subset \mathcal{P}_n \,\!&amp;lt;/math&amp;gt; be an abelian subgroup of the Pauli group that does not contain &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S}) = \{|\psi\rangle \; |\; S|\psi\rangle=|\psi\rangle, \mbox{ for all } S\in \mathcal{S}\}.\,\!&amp;lt;/math&amp;gt;  Then &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S})\,\!&amp;lt;/math&amp;gt; is a stabilizer code and &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is its stabilizer. &lt;br /&gt;
&lt;br /&gt;
This formalizes what was stated earlier, which is that all states of the code space are eigenstates of elements of the stabilizer subgroup with eigenvalue +1.  However, it also says more.  It tells us that any subgroup of the Pauli group that is abelian and does not contain the elements &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; can be used to construct a stabilizer code by simply choosing the set of states that are eigenstates with eigenvalue +1.  Another way of saying this is that the states are fixed, or invariant, under the action of the stabilizer elements.  Let us see why the restriction excluding &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; must be included.  Suppose that &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the set &amp;lt;math&amp;gt; \mathcal{S}.\,\!&amp;lt;/math&amp;gt;  It then follows that &amp;lt;math&amp;gt; -\mathbb{I}|\psi\rangle = |\psi\rangle\,\!&amp;lt;/math&amp;gt;.  Only the zero vector satisfies this equation, so the code would contain no states at all.  (The states must be +1 eigenstates of every stabilizer element.)  Now, suppose &amp;lt;math&amp;gt; i\mathbb{I}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; -i\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the stabilizer subgroup.  Then the element squared is also in the stabilizer, since a subgroup is closed under multiplication.  But the square of either is &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt;, which we have just seen cannot be in the set.  Thus none of the three can be included.&lt;br /&gt;
&lt;br /&gt;
====Encoding/Decoding from Stabilizer Generators====&lt;br /&gt;
&lt;br /&gt;
Once one has obtained the stabilizer subgroup, it remains to find the code words, the states with eigenvalue +1.  To do this, one only needs to ensure that the generators of the stabilizer satisfy this condition, since the generators give all other stabilizer elements through multiplication.  Therefore, if a state has eigenvalue +1 for all generators, it will also have eigenvalue +1 for all stabilizer elements.  &lt;br /&gt;
&lt;br /&gt;
For smaller codes, finding the set of states could be as easy as satisfying the constraints given by a small number of generators.  Larger, more complicated codes, however, may require considerably more work.  Cleve and Gottesman gave an algorithm for finding the code words using an efficient gate array obtained from the stabilizer formalism ([http://arxiv.org/abs/quant-ph/9607030]).  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that the decoding and error detection and correction steps also require work to find explicit circuits.  However, for many stabilizer codes, decoding is simply encoding in reverse.  (This is not so for every quantum error correcting code.)  &lt;br /&gt;
&lt;br /&gt;
Although these accomplishments are very important, more work is required to ensure circuits are fault-tolerant, meaning that errors do not propagate or grow as the computation progresses.  If errors were allowed to spread without these constraints, the computation would eventually fail.&lt;br /&gt;
&lt;br /&gt;
===A Return to Shor's Code===&lt;br /&gt;
&lt;br /&gt;
Let us consider the set of operators in [[#Table7.1|Table 7.1]], where each operator in a row is included, in order, in the tensor product that forms an element of the Pauli group.  These elements form the eight [[Appendix D - Group Theory#Definition 14: Generators of a Group|generators]] &amp;lt;math&amp;gt;S_i\,\!&amp;lt;/math&amp;gt; of the stabilizer.  The stabilizer subgroup itself is much larger than the set of eight generators, since it contains all products of the generators.  The generators are taken as in the table, but the choice is not unique; this set is chosen to agree with our earlier choice of measurements.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.1: ''The rows give the Pauli matrices which are included in a tensor product, in order, in an element of the Pauli group.  Each column corresponds to the qubit, q1-q9, on which the operator in that column will act.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|q 8&lt;br /&gt;
|q 9&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_7\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_8\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having the generators of the stabilizer of the code, the objective is to construct the code words, the explicit states that are eigenstates of these operators with eigenvalue +1.  From the top row, it is clear that the first two qubits must be the same, whether zero or one, so that the parity is even.  Similarly, the second and third qubits must be the same, and thus the first three must all be the same.  Likewise, the middle three and the last three must each be the same.  The last two generators state that flipping the first six bits at once will produce the same state, and flipping the last six bits together will produce the same state.  Thinking in blocks of three (since the first six generators give blocks of three) tells us that the code words are symmetric under the interchange of zeroes and ones in pairs of triplet blocks.  One may then choose the symmetric and anti-symmetric combinations of &amp;lt;math&amp;gt;|000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|111\rangle\,\!&amp;lt;/math&amp;gt; within each block, which leads to the Shor code words given in [[#eq7.10|Equation (7.10)]].&lt;br /&gt;
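As a sanity check that Table 7.1 defines a valid stabilizer, one can verify that the eight generators commute pairwise.  Two Pauli strings (ignoring overall phases) commute exactly when they differ on an even number of positions at which both are non-identity.  An illustrative sketch, assuming Python; the function name `commute` is ours:

```python
def commute(p, q):
    """Two Pauli strings commute iff they anti-commute on an even number
    of qubits (positions where both are non-identity and different)."""
    clashes = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

# The eight generators of Table 7.1 for Shor's nine-qubit code:
generators = [
    "ZZIIIIIII", "IZZIIIIII", "IIIZZIIII", "IIIIZZIII",
    "IIIIIIZZI", "IIIIIIIZZ", "XXXXXXIII", "IIIXXXXXX",
]

assert all(commute(p, q) for p in generators for q in generators)
print("all eight generators commute pairwise: the stabilizer is abelian")
```

Each Z-type generator overlaps each X-type generator on either zero or two qubits, which is why the subgroup is abelian.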
&lt;br /&gt;
&lt;br /&gt;
==CSS codes==&lt;br /&gt;
&lt;br /&gt;
There is a class of quantum error correcting codes called the CSS codes, after their inventors  [[Bibliography#CalderbankNShor|Calderbank and Shor]], and [[Bibliography#Steane:prsl|Steane]].   These are also stabilizer codes, but their construction is different and instructive because of its connection to classical error correction.  Since they are stabilizer codes, the stabilizer formalism and tools can be used for encoding, etc.  &lt;br /&gt;
&lt;br /&gt;
The CSS codes are constructed from two classical linear codes, say &amp;lt;math&amp;gt; \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \mathcal{C}_2\,\!&amp;lt;/math&amp;gt;.  This is done by taking advantage of the parity check matrices from the classical coding theory.  In this section, this construction is briefly described.  In the next section, the seven qubit CSS code is described.  &lt;br /&gt;
&lt;br /&gt;
Recall from the discussion of the [[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor code]] that a phase-flip code can be constructed from a bit-flip code by using Hadamard gates in order to change the basis from &amp;lt;math&amp;gt; |0\rangle,|1\rangle\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; |+\rangle,|-\rangle\,\!&amp;lt;/math&amp;gt;.  Thus all of the error detection and correction can be accomplished by translating from one basis to the other.  &lt;br /&gt;
&lt;br /&gt;
Keeping this in mind, a quantum error correcting code can be constructed from a classical error correcting code using the following trick.  (See [[Bibliography#Steane:prl|Steane]] or the [[Bibliography#Gottesman:rev09|review by Gottesman]].)  Take the classical parity check matrix &amp;lt;math&amp;gt; P_1\,\!&amp;lt;/math&amp;gt; for a classical error correcting &amp;lt;math&amp;gt;[n_1,k_1,d_1]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity operator &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt;, and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;.  This will turn the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_1=(d_1-1)/2\,\!&amp;lt;/math&amp;gt; bit-flip errors, just as the classical code did.  Then, given another classical error correcting &amp;lt;math&amp;gt;[n_2,k_2,d_2]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_2\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity operator &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt;, and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;.  This will turn the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_2=(d_2-1)/2\,\!&amp;lt;/math&amp;gt; phase-flip errors.  This would give a stabilizer code with one possible caveat: the operators in the stabilizer must all commute with each other.  The way to ensure that the &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt; generators and the &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; generators commute is to combine the codes in a particular way.  &lt;br /&gt;
&lt;br /&gt;
The dual of a code (denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;) is also a code, and it is not too difficult to show that the parity check matrix for &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; is the generator matrix for &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  It turns out that if (and only if) &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, then two codes of the same length &amp;lt;math&amp;gt;n_1=n_2=n\,\!&amp;lt;/math&amp;gt; combine to produce an &amp;lt;math&amp;gt;[[n,k_1+k_2-n,d]]\,\!&amp;lt;/math&amp;gt; stabilizer code, where &amp;lt;math&amp;gt;d\geq \text{min}(d_1,d_2)\,\!&amp;lt;/math&amp;gt;.  In particular, the generators obtained from the two codes will then commute with each other.  &lt;br /&gt;
&lt;br /&gt;
The two codes, one protecting against bit-flips and one against phase-flips, thus combine so that the resulting code can correct any error, including &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; errors, which are composed of both a bit-flip and a phase-flip.  The distance is at least the smaller of the distances of the two codes; it could actually be higher if the code is degenerate.  &lt;br /&gt;
&lt;br /&gt;
===Steane's Seven Qubit Code===&lt;br /&gt;
&lt;br /&gt;
The seven qubit quantum error correcting code, originally described by Steane, is a member of the class of CSS quantum error correcting codes.  In fact it is the smallest such code, and has  &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;.  It is a &amp;lt;math&amp;gt;[[7,1,3]]\,\!&amp;lt;/math&amp;gt; quantum error correcting code, using 7 qubits to encode one logical (or data) qubit such that one arbitrary error can be detected and corrected.  This code has been studied extensively, since it can be made fault-tolerant (explained below).  &lt;br /&gt;
&lt;br /&gt;
This code is actually based on the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code discussed in [[Appendix F - Classical Error Correcting Codes|Appendix F]].  Let us first recall the parity check matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
P =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
\end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.21}}&lt;br /&gt;
and the generator matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
G =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.22}}&lt;br /&gt;
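As a quick numerical check of these two matrices (the variable names below are ours, not from the text), one can verify in Python that every codeword generated by &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is annihilated by &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; modulo 2, and that a single bit flip yields a syndrome equal to the corresponding column of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

# Parity check matrix P and generator matrix G of the classical [7,4,3]
# Hamming code, copied from Equations (7.21) and (7.22).
P = np.array([[1,1,1,0,1,0,0],
              [1,1,0,1,0,1,0],
              [0,1,1,1,0,0,1]])
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,1,1],
              [0,0,1,0,1,0,1],
              [0,0,0,1,0,1,1]])

# Every codeword mG lies in the kernel of P (arithmetic mod 2).
assert np.all(P @ G.T % 2 == 0)

# A single bit flip on a codeword produces a syndrome equal to the
# corresponding column of P, identifying the flipped position.
word = G.T @ np.array([1, 0, 1, 1]) % 2   # encode the (arbitrary) message 1011
word[4] ^= 1                              # flip bit 5 (index 4)
syndrome = P @ word % 2
print(syndrome, P[:, 4])
```

The syndrome matches the fifth column of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, so the receiver knows which bit to flip back.&lt;br /&gt;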
Relating this back to the stabilizer formalism, the generators can be written using the parity check matrix as described above.  They are given in [[#Table7.2|Table 7.2]].  The first three rows each give the elements of the tensor product, in order, for the stabilizer elements of a code that can protect against bit flips.  The next three give stabilizers for the phase-flip code.  From these one may get the code words.  The logical zero and one are given below.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.2: ''The first three rows give the stabilizers for the bit-flip error correcting code.  The next three are for the phase-flip code. (See also [[#Table7.1|Table 7.1]] for further explanation.)''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Steane's 7-qubit code encodes the logical zero state using the eight even-weight codewords of the classical Hamming code, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|0_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{even }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.23}}&lt;br /&gt;
The eight odd-weight codewords are used to encode the logical one state,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|1_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{odd }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.24}}&lt;br /&gt;
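The codeword counts behind Equations (7.23) and (7.24) can be checked by enumerating the sixteen Hamming codewords and splitting them by weight parity (a sketch; the names are ours):&lt;br /&gt;

```python
import itertools
import numpy as np

# Generator matrix of the [7,4,3] Hamming code, from Equation (7.22).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,1,1],
              [0,0,1,0,1,0,1],
              [0,0,0,1,0,1,1]])

# All 16 classical codewords, obtained by encoding every 4-bit message.
codewords = {tuple(np.array(m) @ G % 2)
             for m in itertools.product([0, 1], repeat=4)}

# Split by Hamming-weight parity: |0_L> sums the even-weight codewords,
# |1_L> the odd-weight ones (Eqs. 7.23 and 7.24), 8 of each,
# which is why both logical states carry the normalization 1/sqrt(8).
even = [v for v in codewords if sum(v) % 2 == 0]
odd  = [v for v in codewords if sum(v) % 2 == 1]
print(len(even), len(odd))  # 8 8
```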
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Continue to '''Chapter 8 - Decoherence-Free/Noiseless Subsystems''']]&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Skip to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1755</id>
		<title>Chapter 7 - Quantum Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1755"/>
		<updated>2011-11-28T14:05:06Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
If information were stored redundantly in some set of quantum states, then it would be possible to use the redundancy to detect and correct errors.  Quantum error correcting codes aim to encode quantum information into states in just such a redundant fashion.  It is worth noting that classical error correcting codes and coding theory have been around for a long time, and many of the ideas and methods of quantum error correction are imported from classical error correction.  However, quantum error correction requires extra care when measuring to detect and correct errors because superpositions of states must be preserved.  In addition, qubits can experience errors that classical bits cannot.  (For example, there is no phase-flip error on a classical bit.)  This chapter contains an introduction to quantum error correction including simple examples of quantum error correcting codes.   &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Classical Code===&lt;br /&gt;
&lt;br /&gt;
Let us first consider a simple example of a classical error correcting code.  Consider a signal which is composed only of zeroes and ones.  (For most of these notes, these are the only types of signals: bits and their quantum analogue, qubits.)  An error in a sequence of zeroes and ones occurs if the sender sends a 1 and the receiver receives a 0 for one element of the sequence, or the sender sends a 0 and the receiver receives a 1.  In other words, for this type of encoding, an error is a &amp;quot;classical bit-flip error&amp;quot; which turns a 0 into a 1 and a 1 into a 0.  A simple example of a classical error correcting code which protects against such bit-flip errors is the following.  Rather than use the state 0, the state is encoded redundantly: the state 000 is used.  This is called an encoded zero state or a logical zero state.  Likewise, 111 is used as an encoded 1, or logical 1.  Now suppose the encoded state 111 is sent and one bit, say the first, is flipped.  If one (and only one) of the bits is flipped, the encoded state can be fixed by flipping the outlier so that it agrees with the other two.  &lt;br /&gt;
&lt;br /&gt;
Let us assume that each bit flips independently with probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;.  The probability that a given bit is not flipped is then &amp;lt;math&amp;gt;1-p\,\!&amp;lt;/math&amp;gt;.  The probability that exactly two of the three bits are flipped is &amp;lt;math&amp;gt;3(1-p)p^2\,\!&amp;lt;/math&amp;gt;, since there are three ways to choose which two, and the probability that all three are flipped is &amp;lt;math&amp;gt;p^3\,\!&amp;lt;/math&amp;gt;.  Majority voting fails when two or more bits flip, so the code will help us if &amp;lt;math&amp;gt;p &amp;gt; 3(1-p)p^2 +p^3\,\!&amp;lt;/math&amp;gt;, which happens when &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
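The threshold &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt; can be checked numerically with a short sketch (the function name is ours):&lt;br /&gt;

```python
# Failure probability of the 3-bit repetition code under independent
# bit-flip probability p: majority voting fails when 2 or 3 bits flip.
def p_fail(p):
    return 3 * (1 - p) * p**2 + p**3

# Encoding helps exactly when p_fail(p) < p, i.e. when p < 1/2.
for p in [0.01, 0.1, 0.3, 0.49, 0.6]:
    print(p, p_fail(p), p_fail(p) < p)
```

At &amp;lt;math&amp;gt;p=1/2\,\!&amp;lt;/math&amp;gt; the two probabilities are equal, and above it the encoding makes things worse.&lt;br /&gt;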
&lt;br /&gt;
This example will be used below to find a simple bit-flip code for a quantum system.&lt;br /&gt;
&lt;br /&gt;
===Further Reading===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Appendix F - Classical Error Correcting Codes|Appendix F]] contains a brief introduction to classical error correction.  Many of the concepts and definitions in that appendix will be helpful for understanding the material in this chapter.  However, the chapter itself is somewhat self-contained.  When more explanation is required or desired, it will likely be helpful to read or reread [[Appendix F - Classical Error Correcting Codes|Appendix F]] and/or consult the references there.&lt;br /&gt;
&lt;br /&gt;
==Shor's Nine-Qubit Quantum Error Correcting Code==&lt;br /&gt;
&lt;br /&gt;
Shor's nine-qubit quantum error correcting code is important for several reasons.  Historically, it is important because it provides the first example of a quantum error correcting code which, in principle, can correct arbitrary single-qubit errors.  Pedagogically, it is important because it is an example which can be understood in terms of the simple classical error correcting code given above.  It also uses many of the standard assumptions of more general quantum error correcting codes.  Therefore, it is presented as our first quantum error correcting code and, as will be seen later, an example of what is called a stabilizer code, which is a very general category.  &lt;br /&gt;
&lt;br /&gt;
The Shor code is introduced in parts, bit-flip and phase-flip, and then in its entirety.  Since  the phase-flip code follows from the bit-flip code (as discussed below), the bit-flip code is discussed in great detail.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Quantum Code===&lt;br /&gt;
&lt;br /&gt;
The quantum bit-flip code uses three quantum states to encode one as does the classical bit-flip code above.  The state &amp;lt;math&amp;gt;  \left\vert 0\right\rangle \otimes \left\vert 0\right\rangle\otimes \left\vert 0\right\rangle = \left\vert 000\right\rangle = \left\vert 0_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is the logical state representing the zero state of the encoded qubit.  (The subscript L is to indicate that it is a logical state and the b indicates that it is a bit-flip code.  We will see below why this distinction is helpful.)  Similarly, &amp;lt;math&amp;gt;\left\vert 111\right\rangle = \left\vert 1_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is used for the logical one state.  &lt;br /&gt;
&lt;br /&gt;
====Encoding the Logical State====&lt;br /&gt;
&lt;br /&gt;
Note that one cannot just clone a state to produce redundancy due to the [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|No-Cloning Theorem]].  Also, the encoded state needs to preserve superpositions such as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle =  \alpha\left|0\right\rangle + \beta\left|1\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.1}}&lt;br /&gt;
To encode the state redundantly, cloning is not required.  The encoding can be accomplished using the &amp;lt;math&amp;gt; CNOT \,\!&amp;lt;/math&amp;gt; gate twice.  Simply apply &amp;lt;math&amp;gt; CNOT_{13} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; CNOT_{12} \,\!&amp;lt;/math&amp;gt; to the following state of three qubits,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle\left|00\right\rangle =  (\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This will produce  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi_L\right\rangle =  \alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle = \alpha\left|000\right\rangle + \beta\left|111\right\rangle . &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.2}}&lt;br /&gt;
The circuit diagram for this is given in [[#Figure 7.1|Figure 7.1]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccencode.jpg|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.1:  Circuit diagram for encoding a qubit into a 3-qubit bit-flip protected code.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
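The encoding circuit of Figure 7.1 can be simulated directly with state vectors.  The following sketch (assumed 0-indexed qubit labels; the names are ours) builds the two CNOT gates from projectors and reproduces Equation (7.2):&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
P0 = np.diag([1., 0.])   # |0><0|
P1 = np.diag([0., 1.])   # |1><1|

def cnot(control, target, n=3):
    """CNOT on n qubits (0-indexed), built as P0 (x) I + P1 (x) X."""
    ops0 = [I] * n; ops0[control] = P0
    ops1 = [I] * n; ops1[control] = P1; ops1[target] = X
    a, b = ops0[0], ops1[0]
    for k in range(1, n):
        a, b = np.kron(a, ops0[k]), np.kron(b, ops1[k])
    return a + b

alpha, beta = 0.6, 0.8    # arbitrary normalized amplitudes
psi = np.kron([alpha, beta], np.kron([1., 0.], [1., 0.]))  # |psi>|00>

# CNOT_12 and CNOT_13, both controlled on qubit 1 (index 0), as in Fig. 7.1.
encoded = cnot(0, 2) @ cnot(0, 1) @ psi

# Equation (7.2): alpha|000> + beta|111>
print(encoded[0], encoded[7])  # 0.6 0.8
```

All other amplitudes vanish, so no cloning has taken place: the superposition is spread over the entangled states &amp;lt;math&amp;gt;|000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|111\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;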
&lt;br /&gt;
====Error Syndrome Extraction====&lt;br /&gt;
&lt;br /&gt;
Now a method for measurement and recovery is needed.  &lt;br /&gt;
The problem is that in quantum mechanics one cannot just measure the three states to see if they agree; a quantum state can be in a superposition of the (logical) zero state and the (logical) one state as above, and &lt;br /&gt;
a measurement of the first qubit to see if it is in the state zero or not will immediately produce the state &amp;lt;math&amp;gt; \left| 000 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\alpha|^2 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left| 111 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\beta|^2 \,\!&amp;lt;/math&amp;gt;, thus destroying the superposition of the qubit state.  The state would then be one that can be described as containing only classical information.  (Essentially it is equivalent to the classical 000 or 111 binary state.)  Since we need to preserve arbitrary superpositions, we cannot use this method for determining whether or not an error occurred.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that a bit-flip error occurs on &amp;lt;math&amp;gt;\left|\psi_L\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
The objective is to determine if the state has experienced a bit-flip error or not without ruining the superposition and, if it has an error, to determine which qubit experienced the error. This can be done by checking to see if the first two qubits are the same or not and then checking to see if the last two qubits are the same or not without ever determining whether the state is the logical zero, logical one, or a superposition of the two.  &lt;br /&gt;
&lt;br /&gt;
Let us examine this process in detail.  First, notice the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue 1 and &amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue -1.  Then any logical state is an eigenstate of the operator &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; with eigenvalue of 1 if the first two qubits are the same and -1 if they differ.  For example, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle = (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|000\right\rangle + \beta\left|111\right\rangle) = (1)(\alpha\left|000\right\rangle + \beta\left|111\right\rangle).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.3}}&lt;br /&gt;
Of course the same is also true for the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt;.  However, suppose that a bit-flip error occurs on the first qubit, giving &amp;lt;math&amp;gt; (\sigma_x\otimes I\otimes I) \left\vert\psi_L\right\rangle = (\sigma_x\otimes I\otimes I)(\alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle) = \alpha\left|100\right\rangle + \beta\left|011\right\rangle\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle &amp;amp;= (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|100\right\rangle + \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp;= (-\alpha\left|100\right\rangle - \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp; = (-1)(\alpha\left|100\right\rangle + \beta\left|011\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.4}}&lt;br /&gt;
Notice that, in principle, we need not determine either &amp;lt;math&amp;gt; \alpha\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; \beta\,\!&amp;lt;/math&amp;gt;.  Yet the error can still be detected: since determining the value of the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt; shows that the last two qubits agree, we know that the error occurred on the first qubit.  In fact, it is not difficult to convince yourself that measuring these two operators will determine which of the three qubits experienced a bit-flip.  Just like the classical bit-flip code, this will not indicate whether an error occurred on two qubits.  Thus the probability of two or more errors must be small, just as in the classical case.&lt;br /&gt;
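These eigenvalue relations are easy to verify numerically; the sketch below (the names are ours) checks Equations (7.3) and (7.4):&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

# The two syndrome operators of the bit-flip code.
ZZI = np.kron(np.kron(Z, Z), I)
IZZ = np.kron(I, np.kron(Z, Z))

alpha, beta = 0.6, 0.8
psi = np.zeros(8); psi[0], psi[7] = alpha, beta   # alpha|000> + beta|111>

# No error: both operators give eigenvalue +1, as in Equation (7.3).
assert np.allclose(ZZI @ psi, psi) and np.allclose(IZZ @ psi, psi)

# Bit flip on the first qubit: ZZI gives -1, IZZ gives +1 (Equation 7.4),
# so the syndrome (-1, +1) identifies qubit 1 without revealing alpha, beta.
err = np.kron(np.kron(X, I), I) @ psi
print(np.allclose(ZZI @ err, -err), np.allclose(IZZ @ err, err))  # True True
```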
&lt;br /&gt;
Now, we have the idea that we could determine the parity of the pairs of qubits to determine if they are the same or different.  But how would we determine this in practice?  A method for doing this is shown in [[#Figure 7.2|Figure 7.2]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccSyndrome.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.2: A method for extracting a bit-flip error syndrome from a 3-qubit bit-flip protected code.  The M's are measurements on the ancillary qubits, the results of which are recorded as &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Figure 7.2|Figure 7.2]] gives a circuit for determining the error, also known as a syndrome measurement.  In this example, a bit-flip error occurred on qubit 1 in the 3 qubit QECC.  This is represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate.  After 4 CNOT gates, the two ancillary qubits are measured.  A measurement in the &amp;lt;math&amp;gt;|0\rangle, |1\rangle\,\!&amp;lt;/math&amp;gt; basis gives a result of &amp;lt;math&amp;gt;|1\rangle\,\!&amp;lt;/math&amp;gt; for the top ancillary qubit and &amp;lt;math&amp;gt;|0\rangle\,\!&amp;lt;/math&amp;gt; for the bottom one.  This tells us that the first qubit has had a bit-flip error.  We then feed this information back into the system by implementing an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, thus correcting the error.  &lt;br /&gt;
&lt;br /&gt;
Notice that we have not determined the coefficients of the superposition of the logical zero and logical one states.  We have only determined that there was an error on the first qubit since it does not agree with the other two (assuming that only one bit-flip error could have occurred).&lt;br /&gt;
&lt;br /&gt;
====Continuous Sets of Errors====&lt;br /&gt;
&lt;br /&gt;
The error, in this case represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate, is not very realistic.  What would be more realistic is that the bit is not flipped completely; it is in a superposition of the zero state and one state.  In other words, we should properly consider the following state, where an error has occurred on the first qubit:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle &amp;amp;=  \alpha(a\left|0\right\rangle + b\left|1\right\rangle) \left|00\right\rangle + \beta(b\left|0\right\rangle + a\left|1\right\rangle)\left|11\right\rangle \\&lt;br /&gt;
 &amp;amp;= \alpha a\left|000\right\rangle + \alpha b\left|100\right\rangle + \beta b\left|011\right\rangle + \beta a\left|111\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.5}}&lt;br /&gt;
This is a rotation about the x-axis by an arbitrary angle with &lt;br /&gt;
&amp;lt;math&amp;gt;a=\cos \theta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b=i\sin\theta\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  Now suppose that two ancillary qubits are attached to the state&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle = \alpha a\left|00000\right\rangle + \alpha b\left|10000\right\rangle + \beta b\left|01100\right\rangle + \beta a\left|11100\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.6}}&lt;br /&gt;
and the resulting state is put into the circuit that gives the error syndrome given in [[#Figure 7.2|Figure 7.2]].  Let &lt;br /&gt;
&amp;lt;math&amp;gt;V = CNOT_{1{a_1}} CNOT_{2{a_1}} CNOT_{2{a_2}} CNOT_{3{a_2}}\,\!&amp;lt;/math&amp;gt;. Then   &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
 V\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle &amp;amp;= (\alpha a\left|00000\right\rangle + \alpha b\left|10010\right\rangle + \beta b\left|01110\right\rangle + \beta a\left|11100\right\rangle) \\&lt;br /&gt;
           &amp;amp;= (\alpha \left|000\right\rangle + \beta\left|111\right\rangle)a\left\vert 00\right\rangle +(\alpha\left|100\right\rangle + \beta \left|011\right\rangle)b\left\vert 10\right\rangle  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.7}},&lt;br /&gt;
where the two ancillary qubits, denoted &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt; (for the first ancillary qubit which is on top in [[#Figure 7.2|Figure 7.2]]) and &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt; (for the second ancillary qubit which is on bottom in [[#Figure 7.2|Figure 7.2]]), will give the error syndrome.  The measurement of the second ancillary qubit always gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt;.  The measurement of the first gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|a|^2\,\!&amp;lt;/math&amp;gt; and, if this occurs, the system will be in its original state and there is no error.  However, if the measurement of the first ancillary qubit gives &amp;lt;math&amp;gt;\left|1\right\rangle\,\!&amp;lt;/math&amp;gt;, which it will with probability &amp;lt;math&amp;gt;|b|^2,\,\!&amp;lt;/math&amp;gt; then the system is left in the state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\alpha\left|100\right\rangle + \beta\left|011\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.8}}&lt;br /&gt;
This indicates that a bit-flip error has occurred on the first qubit.  Such an error is easily corrected with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, which will flip it.  &lt;br /&gt;
&lt;br /&gt;
Therefore any single-qubit bit-flip error can be corrected, since we will project into the basis of one bit-flip error and the syndrome measurement indicates which one.  In other words, we have made the error discrete using a projective measurement of the ancilla.&lt;br /&gt;
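The discretization of a continuous error by the syndrome measurement can be simulated directly.  The sketch below (assumed 0-indexed layout with data qubits 0&amp;ndash;2 and ancillas 3&amp;ndash;4; the names are ours) applies the operator &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; of Equation (7.7) and checks that the ancilla outcome &amp;quot;10&amp;quot; occurs with probability &amp;lt;math&amp;gt;|b|^2\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

def basis(bits):
    """Computational basis state for the given bit string."""
    v = np.zeros(2**len(bits), dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

def apply_cnot(state, c, t, n=5):
    """Apply CNOT with control c and target t to an n-qubit state vector."""
    out = np.zeros_like(state)
    for idx, amp in enumerate(state):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c]:
            bits[t] ^= 1
        out[int(''.join(map(str, bits)), 2)] += amp
    return out

alpha, beta = 0.6, 0.8
theta = 0.3                               # arbitrary rotation angle
a, b = np.cos(theta), 1j * np.sin(theta)

# |psi_L^{e_1}>|00> from Equation (7.6).
state = (alpha*a*basis('00000') + alpha*b*basis('10000')
         + beta*b*basis('01100') + beta*a*basis('11100'))

# V = CNOT_{1 a1} CNOT_{2 a1} CNOT_{2 a2} CNOT_{3 a2}, as in Figure 7.2.
for c, t in [(0, 3), (1, 3), (1, 4), (2, 4)]:
    state = apply_cnot(state, c, t)

# Probability that the ancillas read "10" (bit flip on qubit 1): |b|^2.
p_flip = abs(state[int('10010', 2)])**2 + abs(state[int('01110', 2)])**2
print(np.isclose(p_flip, np.sin(theta)**2))  # True
```

After the outcome &amp;quot;10&amp;quot;, the data qubits are left exactly in the state of Equation (7.8), and an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit completes the recovery.&lt;br /&gt;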
&lt;br /&gt;
===Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Phase-flip errors&amp;quot; are errors which change the sign of the &amp;lt;math&amp;gt; \left| 1\right\rangle\,\!&amp;lt;/math&amp;gt; state.  This is not a classical error as it does not occur on a classical bit.  However, it does occur on qubits that are not in the zero state.  Thus these errors must be treated.   &lt;br /&gt;
&lt;br /&gt;
Much of what works for bit-flip errors also works for phase-flip errors once we are able to encode properly.  Let us consider the following states that we will use to encode our logical qubit: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \pm\right\rangle = \frac{1}{\sqrt{2}}(\left\vert 0 \right\rangle \pm \left\vert 1\right\rangle). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.9}}&lt;br /&gt;
In this case, when a &amp;quot;phase-flip&amp;quot; occurs, the &amp;lt;math&amp;gt; \left\vert + \right\rangle \,\!&amp;lt;/math&amp;gt; becomes a &amp;lt;math&amp;gt; \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt; or vice versa.  Therefore it is similar to the bit-flip error since there are two orthogonal states that are changed into one another by the error.  In this case the error operator is of the form &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt;.  As before, we can encode redundantly by letting &amp;lt;math&amp;gt; \left\vert 0_{pL} \right\rangle = \left\vert +++ \right\rangle \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left\vert 1_{pL} \right\rangle = \left\vert --- \right\rangle\,\!&amp;lt;/math&amp;gt;.  It is easy to see that this code enables the detection and correction of one phase-flip error just as the bit-flip code did for one bit-flip.  We simply exchange each &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; in the bit-flip code's syndrome operators for a &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt;, and the process carries through as before.&lt;br /&gt;
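The exchange of &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt; is just conjugation by the Hadamard gate, which can be verified in a few lines (a sketch; the names are ours):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

# Conjugation by the Hadamard exchanges bit flips and phase flips.
assert np.allclose(H @ Z @ H, X)
assert np.allclose(H @ X @ H, Z)

# A phase flip (sigma_z) swaps |+> and |->, acting like a bit flip
# in the {|+>, |->} basis.
plus  = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
print(np.allclose(Z @ plus, minus), np.allclose(Z @ minus, plus))  # True True
```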
&lt;br /&gt;
===Bit-flip and Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
Certainly if a phase-flip error does not have a classical analogue then the combination of bit- and phase-flip errors also does not.  It turns out that by having found a code that will protect against bit-flip errors and another against phase-flip errors, we are able to write down a code that will protect against both.  This was first given by Peter Shor [[Bibliography#Shor:QECC|Shor:1995]], but was also described by Carlton Caves in a very readable paper, [[Bibliography#Caves:QECC|Caves:1999]].  &lt;br /&gt;
&lt;br /&gt;
The way to protect against both is to combine the two codes and take the logical qubits to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \frac{1}{2\sqrt{2}}(\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle)\\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \frac{1}{2\sqrt{2}}(\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.10}}&lt;br /&gt;
One may also write this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \left\vert +_{bL}\right\rangle \left\vert +_{bL}\right\rangle  \left\vert +_{bL}\right\rangle \\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \left\vert -_{bL}\right\rangle  \left\vert -_{bL}\right\rangle \left\vert -_{bL}\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.11}}&lt;br /&gt;
&lt;br /&gt;
This shows that there is a code which protects against bit-flip errors and phase-flip errors by using a redundant encoding comprised of the states that protect against bit flips and the states that protect against phase flips.&lt;br /&gt;
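As a concreteness check, the nine-qubit logical states can be constructed as tensor products of three GHZ-type blocks (a sketch with our own names; each block carries a &amp;lt;math&amp;gt;1/\sqrt{2}\,\!&amp;lt;/math&amp;gt; normalization, giving &amp;lt;math&amp;gt;1/(2\sqrt{2})\,\!&amp;lt;/math&amp;gt; overall):&lt;br /&gt;

```python
import numpy as np

# The 3-qubit bit-flip logical states |0_bL> +/- |1_bL>, normalized.
ghz_plus  = np.zeros(8); ghz_plus[0] = ghz_plus[7] = 1 / np.sqrt(2)
ghz_minus = np.zeros(8); ghz_minus[0], ghz_minus[7] = 1/np.sqrt(2), -1/np.sqrt(2)

# Shor's nine-qubit logical states: three such blocks in tensor product.
zero_L = np.kron(ghz_plus, np.kron(ghz_plus, ghz_plus))
one_L  = np.kron(ghz_minus, np.kron(ghz_minus, ghz_minus))

# Both are unit vectors in the 2^9-dimensional space, and orthogonal,
# so they form a valid logical qubit basis.
assert np.isclose(zero_L @ zero_L, 1.0) and np.isclose(one_L @ one_L, 1.0)
print(len(zero_L), abs(zero_L @ one_L) < 1e-12)  # 512 True
```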
&lt;br /&gt;
==Quantum Error Correcting Codes: General Properties==&lt;br /&gt;
&lt;br /&gt;
Now that we have seen some examples of quantum error correcting codes, some natural questions come to mind.  Are there general rules for constructing quantum error correcting codes?  In the case of classical codes, there is a disjointness condition and a Hamming bound.  These let us know when it is not possible to construct a quantum error correcting code.  Here, the two analogues for quantum error correcting codes are given, although the disjointness condition is quite different for quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===The Quantum Error Correcting Code Condition===&lt;br /&gt;
&lt;br /&gt;
Let us consider a quantum system undergoing some noisy evolution.  As described in [[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|Section 6.2]] and [[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Section 6.3]], such an open-system evolution can be described by a quantum operation acting on a density operator,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\rho^\prime= \sum_\alpha A_\alpha \rho A_\alpha^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.12}}&lt;br /&gt;
The operator elements &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; can be used to express what is known as the quantum error correcting code condition&lt;br /&gt;
(See [[Bibliography#NielsenChuang:book|Nielsen and Chuang]],  or [[Bibliography#Nielsen/etal|Nielsen, et al:97]] for the original reference), &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
P  A^\dagger_\beta  A_\alpha P = d_{\alpha\beta}P, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.13}}&lt;br /&gt;
where the &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; are the operators from the operator-sum representation, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is a projector onto the code space.   &lt;br /&gt;
An equivalent expression is (see [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]]),&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle = c_{\alpha\beta}\delta_{ij}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.14}}&lt;br /&gt;
This is the quantum analogue of the [[Appendix F - Classical Error Correcting Codes#eqF.5|disjointness condition]] for classical error correcting codes.  To interpret this, consider [[#eq7.14|Equation (7.14)]].  Suppose one error &amp;lt;math&amp;gt;A_\beta\,\!&amp;lt;/math&amp;gt; acts on a logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; and another error (possibly the same error) &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; acts on a different logical state &amp;lt;math&amp;gt;|j_L\rangle\,\!&amp;lt;/math&amp;gt;.  The condition says more than that the two resulting states cannot be equal: it says they must have no overlap at all.  If there were overlap, there would be some probability for a measurement to produce an ambiguous result.  The condition also says that the diagonal quantities &amp;lt;math&amp;gt;\langle i_L| A^\dagger_\beta  A_\alpha |i_L\rangle\,\!&amp;lt;/math&amp;gt; are the same for every logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt;, so the errors reveal no information about the encoded state.  This is allowed by the superposition principle, but is not something one finds in classical error correction.  Therefore, the analogy with the classical disjointness condition is very loose.  (See  [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]] for further explanation.)  &lt;br /&gt;
&lt;br /&gt;
One way to understand [[#eq7.13|Equation (7.13)]] is to show [[#eq7.14|Equation (7.14)]] is true if and only if [[#eq7.13|Equation (7.13)]] is true.  However, these results can be seen as part of a broader and more basic property of quantum systems related to the reversibility of a quantum operation as discussed by [[Bibliography#Nielsen/etal|Nielsen, et al:97]].&lt;br /&gt;
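As a concrete check (not from the text), the condition in [[#eq7.14|Equation (7.14)]] can be verified numerically for the three-qubit bit-flip code of [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]], taking the correctable error set to be the identity and the three single-qubit bit flips.  The helper names below are our own.&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    """Tensor product of single-qubit operators, left to right."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Logical code words of the three-qubit bit-flip code.
zero_L = np.zeros(8); zero_L[0] = 1.0   # |000>
one_L = np.zeros(8); one_L[7] = 1.0     # |111>
logical = [zero_L, one_L]

# Correctable errors: no error, or a bit flip on a single qubit.
errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# Verify <i_L| E_b^dag E_a |j_L> = c[a, b] * delta_ij for all pairs of errors.
c = np.zeros((4, 4))
condition_holds = True
for a, Ea in enumerate(errors):
    for b, Eb in enumerate(errors):
        M = np.array([[iL @ Eb.conj().T @ Ea @ jL for jL in logical]
                      for iL in logical])
        c[a, b] = M[0, 0].real
        if not np.allclose(M, M[0, 0] * np.eye(2)):
            condition_holds = False
```

For this error set the matrix &amp;lt;math&amp;gt;c_{\alpha\beta}\,\!&amp;lt;/math&amp;gt; comes out as the identity, so the condition holds with the more restrictive non-degenerate form.&lt;br /&gt;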
&lt;br /&gt;
===A Basis for Errors===&lt;br /&gt;
&lt;br /&gt;
Using the Pauli matrices and the identity, any error can be described as a tensor product of operators.  Each factor in the tensor product is one of four operators, &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;, where the identity &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt; indicates that no error has occurred.  (See [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.5]].)  For example, suppose a code involves five qubits, and that no error occurs on qubit 1, a bit-flip error occurs on qubits 2 and 3, a phase error occurs on qubit 4, and qubit 5 is affected by both types of errors.  This error operator would be &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\otimes X_2\otimes X_3 \otimes Z_4 \otimes Y_5&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.15}}&lt;br /&gt;
or, using a short-hand notation, &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
X_2 X_3 Z_4 Y_5.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.16}}&lt;br /&gt;
This error operator is said to have weight four, since four of the factors differ from the identity.  &lt;br /&gt;
&lt;br /&gt;
====Definition 1: weight of an operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
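For instance, representing a Pauli tensor product as a string over I, X, Y, Z (a notational choice of ours), the weight is just a count of the non-identity letters:&lt;br /&gt;

```python
def weight(pauli_string):
    """Weight of a Pauli operator written as a string over 'I', 'X', 'Y', 'Z'."""
    return sum(1 for p in pauli_string if p != 'I')

# The five-qubit example of Equation (7.15): I (x) X (x) X (x) Z (x) Y.
w = weight('IXXZY')  # -> 4
```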
&lt;br /&gt;
This provides us with a basis for all errors that can occur.  This basis is enough, since the syndrome measurement process discretizes arbitrary errors onto it.&lt;br /&gt;
&lt;br /&gt;
====Definition 2: Distance of a Quantum Error Correcting Code====&lt;br /&gt;
&lt;br /&gt;
The distance of a quantum error correcting code is the minimum weight, greater than zero, of an element &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; of the Pauli group such that the quantum error correcting code condition fails (i.e., such that &amp;lt;math&amp;gt;\langle i_L |G|j_L \rangle = c\delta_{ij}\,\!&amp;lt;/math&amp;gt; is not satisfied).&lt;br /&gt;
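For small codes, Definition 2 can be evaluated by brute force.  The sketch below (helper names ours) searches all Pauli strings on three qubits and shows that the bit-flip code of [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]] has distance 1: a single &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; already violates the condition, since it is a phase error that code cannot detect.&lt;br /&gt;

```python
import numpy as np
from itertools import product

PAULIS = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]]),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def operator(label):
    """Tensor product of single-qubit Paulis given by a string such as 'ZII'."""
    out = np.array([[1]], dtype=complex)
    for p in label:
        out = np.kron(out, PAULIS[p])
    return out

def distance(logical):
    """Minimum weight > 0 of a Pauli G with <i_L|G|j_L> != c * delta_ij."""
    n = int(np.log2(len(logical[0])))
    k = len(logical)
    failing_weights = []
    for label in product('IXYZ', repeat=n):
        w = sum(p != 'I' for p in label)
        if w == 0:
            continue
        G = operator(label)
        M = np.array([[iL.conj() @ G @ jL for jL in logical] for iL in logical])
        if not np.allclose(M, M[0, 0] * np.eye(k)):
            failing_weights.append(w)
    return min(failing_weights)

zero_L = np.zeros(8); zero_L[0] = 1.0   # |000>
one_L = np.zeros(8); one_L[7] = 1.0     # |111>
d = distance([zero_L, one_L])           # Z_1 alone fails the condition
```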
&lt;br /&gt;
===Quantum Error Correction for &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; Errors===&lt;br /&gt;
&lt;br /&gt;
A quantum error correcting code that uses &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; qubits to encode &amp;lt;math&amp;gt;k\;\!&amp;lt;/math&amp;gt; logical qubits and can correct up to &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors is denoted &amp;lt;math&amp;gt;[[n,k,2t+1]]\;\!&amp;lt;/math&amp;gt;.  This is similar to the classical code notation except that double brackets are used to distinguish the quantum code from the corresponding classical code.  Using &amp;lt;math&amp;gt;d=2t+1\;\!&amp;lt;/math&amp;gt;, this is also written &amp;lt;math&amp;gt;[[n,k,d]]\;\!&amp;lt;/math&amp;gt;.  When a code satisfies the more restrictive condition &amp;lt;math&amp;gt;c_{\alpha\beta}=0\;\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\alpha\neq\beta\;\!&amp;lt;/math&amp;gt; in [[#eq7.14|Equ. (7.14)]], the code is called non-degenerate.  Note that [[#eq7.14|Equ. (7.14)]] indicates that the set of errors to be corrected is given by the operator elements of the operator-sum representation.  It turns out that one can choose the set of errors to be described by an orthogonal basis.  This is done using the unitary degree of freedom in the operator-sum representation from [[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Section 6.4]].  [[Bibliography#NielsenChuang:book|Nielsen and Chuang]] use this to show that the conditions [[#eq7.13|Equ. (7.13)]] are necessary and sufficient for the existence of a quantum error correcting code.  Thus the necessary and sufficient conditions for being able to correct &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors are given by [[#eq7.13|Equ. (7.13)]], or equivalently, [[#eq7.14|Equ. (7.14)]].&lt;br /&gt;
&lt;br /&gt;
===The Quantum Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
Like the classical Hamming bound ([[Appendix F - Classical Error Correcting Codes#The Hamming Bound|Section F.4]]), the quantum Hamming bound is a simple bound on the size of the code for correcting a given number of errors.  In other words, it provides a bound on the rate of the code, &amp;lt;math&amp;gt;k/n\;\!&amp;lt;/math&amp;gt;.  The main difference from the classical case is that three types of errors can occur on each qubit, corresponding to the three Pauli matrices &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;.  The number of possible error operators of weight &amp;lt;math&amp;gt;t \;\!&amp;lt;/math&amp;gt; acting on a code of &amp;lt;math&amp;gt;n \;\!&amp;lt;/math&amp;gt; qubits is therefore &amp;lt;math&amp;gt;3^t C(n,t)\;\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;C(n,t)\;\!&amp;lt;/math&amp;gt; is the binomial coefficient.  Since every logical state, and every state obtained from a logical state by a correctable error, must be mutually orthogonal, the number of such states can be at most the dimension of the Hilbert space, which is &amp;lt;math&amp;gt;2^n\;\!&amp;lt;/math&amp;gt;.  That is,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
m\sum_{i=0}^t 3^i C(n,i) \leq 2^n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.17}}&lt;br /&gt;
where &amp;lt;math&amp;gt;m\;\!&amp;lt;/math&amp;gt; is the number of code words.&lt;br /&gt;
&lt;br /&gt;
Just as in the classical case, when &amp;lt;math&amp;gt;m= 2^k\;\!&amp;lt;/math&amp;gt;, we may take the logarithm of this inequality and consider &amp;lt;math&amp;gt;n,t \;\!&amp;lt;/math&amp;gt; large to get&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
k/n\leq 1-(t/n)\log 3-H(t/n),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H(x) = -x\log x -(1-x)\log(1-x)\;\!&amp;lt;/math&amp;gt; and all logarithms are base 2.  &lt;br /&gt;
&lt;br /&gt;
[[#eq7.17|Equation (7.17)]] tells us that the smallest possible code encoding one qubit such that it can be protected against one arbitrary error has 5 physical qubits encoding one logical one.  (Here &amp;lt;math&amp;gt;m=2 (k=1), t=1 \;\!&amp;lt;/math&amp;gt; so &amp;lt;math&amp;gt; n=5 \;\!&amp;lt;/math&amp;gt; .)&lt;br /&gt;
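The counting in [[#eq7.17|Equation (7.17)]] is easy to reproduce; the sketch below (function name ours) finds the smallest &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;k=1, t=1\;\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
from math import comb

def quantum_hamming_ok(n, k, t):
    """Quantum Hamming bound (7.17) with m = 2**k code words."""
    return 2**k * sum(3**i * comb(n, i) for i in range(t + 1)) <= 2**n

# Smallest n that encodes k = 1 logical qubit and corrects t = 1 error.
n = 1
while not quantum_hamming_ok(n, 1, 1):
    n += 1
# n is now 5, and the bound is saturated: 2 * (1 + 3*5) = 32 = 2**5.
```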
&lt;br /&gt;
==Stabilizer Codes==&lt;br /&gt;
&lt;br /&gt;
The mathematical definition of a stabilizer is given in [[Appendix D - Group Theory#Definition 10: Stabilizer|Section D.6.1]].  Loosely speaking, it is a subgroup of transformations that leave a particular point in space fixed.  The theory of stabilizer codes is based on this notion.  &lt;br /&gt;
&lt;br /&gt;
Stabilizer codes are a family of quantum error correcting codes which are described using the stabilizer of a set of states in the Hilbert space.  They are distinguished for several reasons.  First, they form a large class of quantum error correcting codes.  Second, they are conveniently described by operators rather than states, an approach that carries over to many other quantum error correcting codes.  Other reasons will be discussed later.&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
We will begin by revisiting the three-qubit quantum error correcting code presented in some detail in [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]].  Recall that a bit-flip error on one of the three qubits forming the logical qubit can be detected by measuring the parity of pairs of qubits.  These operators could be chosen to be &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt;, although any two distinct pairs would work.  Note that the basis states  &amp;lt;math&amp;gt; |000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |111\rangle\,\!&amp;lt;/math&amp;gt;, as well as any linear combination of these states, are eigenstates of these operators with eigenvalue +1.  The states with a single correctable error are also eigenstates of these operators, but with eigenvalue -1 for at least one of them.  The single-qubit bit-flip errors are &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;.  This is the idea behind stabilizer quantum error correcting codes.  The stabilizers act as parity checks on the code words.  &lt;br /&gt;
&lt;br /&gt;
The stabilizer is an abelian subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of the [[Appendix D - Group Theory#Definition 12: Pauli Group|Pauli group]] (abelian means all elements commute with each other).  The error operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;, however, each anti-commute with at least one element of the stabilizer.   So describing the parity check by saying that states with errors are eigenstates of the stabilizers with eigenvalue -1 is equivalent to saying that some stabilizer operator anti-commutes with the error operator. &lt;br /&gt;
&lt;br /&gt;
The elements of the stabilizer stabilize code words; that is, code words are eigenstates of the stabilizer operators with eigenvalue +1, while states with errors have eigenvalue -1.  For this class of quantum error correcting codes this can always be arranged. Note that if &amp;lt;math&amp;gt;|\psi\rangle\,\!&amp;lt;/math&amp;gt; is a code word, &amp;lt;math&amp;gt;S\in \mathcal{S}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; is an error operator, then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
SE|\psi\rangle &amp;amp;= S|\psi^\prime\rangle =(-1)|\psi^\prime\rangle \\&lt;br /&gt;
ES|\psi\rangle &amp;amp;= E|\psi\rangle = |\psi^\prime\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.19}}&lt;br /&gt;
or&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(-1)SE|\psi\rangle = ES|\psi\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.20}}&lt;br /&gt;
This says that &amp;lt;math&amp;gt;SE + ES =0\,\!&amp;lt;/math&amp;gt; when acting on the code words.  In other words, the operators anti-commute on any state that &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; stabilizes and that &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; maps to an eigenstate of &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; with eigenvalue -1.&lt;br /&gt;
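This anti-commutation is easy to confirm with explicit matrices.  The sketch below (names ours) shows that the stabilizer &amp;lt;math&amp;gt;Z_1Z_2\,\!&amp;lt;/math&amp;gt; anti-commutes with the error &amp;lt;math&amp;gt;X_1\,\!&amp;lt;/math&amp;gt; it detects, while it commutes with &amp;lt;math&amp;gt;X_3\,\!&amp;lt;/math&amp;gt;, which it does not detect:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ops):
    """Tensor product of single-qubit operators, left to right."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

S = kron(Z, Z, I2)    # stabilizer Z_1 Z_2 on three qubits
E1 = kron(X, I2, I2)  # detectable error X_1
E3 = kron(I2, I2, X)  # error X_3, not detected by this stabilizer

anticommutator = S @ E1 + E1 @ S  # vanishes: S and E1 anti-commute
commutator = S @ E3 - E3 @ S      # vanishes: S and E3 commute
```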
&lt;br /&gt;
This is the basic idea of the stabilizer code construction to be discussed in general in the next section.&lt;br /&gt;
&lt;br /&gt;
===General Stabilizer Formalism===&lt;br /&gt;
&lt;br /&gt;
This brief section provides general definitions and theorems for stabilizer quantum error correcting codes.  The next section provides an explicit example.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Stabilizer Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt; \mathcal{S}\subset \mathcal{P}_n \,\!&amp;lt;/math&amp;gt; be an abelian subgroup of the Pauli group that does not contain &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S}) = \{|\psi\rangle \; |\; S|\psi\rangle=|\psi\rangle, \mbox{ for all } S\in \mathcal{S}\}.\,\!&amp;lt;/math&amp;gt;  Then &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S})\,\!&amp;lt;/math&amp;gt; is a stabilizer code and &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is its stabilizer. &lt;br /&gt;
&lt;br /&gt;
This formalizes what was stated earlier, which is that all states of the code space are eigenstates of elements of the stabilizer subgroup with eigenvalue +1.  However, it also says more.  It tells us that any subgroup of the Pauli group that is abelian and does not contain the elements &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; can be used to construct a stabilizer code by simply choosing the set of states that are eigenstates with eigenvalue +1.  Another way of saying this is that the states are fixed, or invariant, under the action of the stabilizer elements.  Let us see why the restriction excluding &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; must be included.  Suppose that &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the set &amp;lt;math&amp;gt; \mathcal{S}.\,\!&amp;lt;/math&amp;gt;  It then follows that &amp;lt;math&amp;gt; -\mathbb{I}|\psi\rangle = |\psi\rangle\,\!&amp;lt;/math&amp;gt;.  Only the zero vector satisfies this equation, so the code space would contain no states at all.  (The states must be +1 eigenstates of every stabilizer element.)  Now, suppose &amp;lt;math&amp;gt; i\mathbb{I}\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; -i\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the stabilizer subgroup. Its square would also be in the stabilizer, since a subgroup must be closed under multiplication.  But the square of either is &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt;, which we have just seen cannot be in the set.  Thus none of these elements can be included.&lt;br /&gt;
&lt;br /&gt;
====Encoding/Decoding from Stabilizer Generators====&lt;br /&gt;
&lt;br /&gt;
Once one has obtained the stabilizer subgroup, it remains to find the code words, the states with eigenvalue +1.  To do this, one only needs to ensure that the generators of the stabilizer satisfy this condition, since every other stabilizer element is a product of generators. Therefore, if a state has eigenvalue +1 for all generators, it will also have eigenvalue +1 for all stabilizer elements.  &lt;br /&gt;
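As a small illustration (construction and names ours), the code space can be computed as the joint +1 eigenspace of the generators by multiplying the projectors &amp;lt;math&amp;gt;(\mathbb{I}+S_i)/2\,\!&amp;lt;/math&amp;gt;.  For the three-qubit bit-flip code with generators &amp;lt;math&amp;gt;Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z_2Z_3\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ops):
    """Tensor product of single-qubit operators, left to right."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

generators = [kron(Z, Z, I2), kron(I2, Z, Z)]

# Projector onto the joint +1 eigenspace of all generators.
P = np.eye(8)
for S in generators:
    P = P @ (np.eye(8) + S) / 2

code_dimension = int(round(np.trace(P)))  # trace of a projector = its rank
```

The two-dimensional code space recovered here is spanned by &amp;lt;math&amp;gt;|000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|111\rangle\,\!&amp;lt;/math&amp;gt;, as expected.&lt;br /&gt;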
&lt;br /&gt;
For smaller codes, finding the set of states can be as easy as satisfying the constraints given by the small number of generators.  Larger, more complicated codes may, however, require considerably more work.  Cleve and Gottesman gave an algorithm for finding the code words using an efficient gate array obtained from the stabilizer formalism (http://arxiv.org/abs/quant-ph/9607030).  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that the decoding and error detection and correction steps also require work to find explicit circuits.  However, for many stabilizer codes, decoding is simply encoding in reverse.  (This is not so for every quantum error correcting code.)  &lt;br /&gt;
&lt;br /&gt;
Although these accomplishments are very important, more work is required to ensure circuits are fault-tolerant---that errors do not propagate or grow as the computation progresses.  If they were to develop without these constraints, then the computation would eventually fail.&lt;br /&gt;
&lt;br /&gt;
===A Return to Shor's Code===&lt;br /&gt;
&lt;br /&gt;
Let us consider the set of operators in [[#Table7.1|Table 7.1]], where each operator in the row is included, in order, in the tensor product that forms an element of the Pauli group.  These elements form the eight [[Appendix D - Group Theory#Definition 14: Generators of a Group|generators]] &amp;lt;math&amp;gt;S_i\,\!&amp;lt;/math&amp;gt; of the stabilizer.  The order of the stabilizer subgroup, &amp;lt;math&amp;gt;2^8 = 256\,\!&amp;lt;/math&amp;gt;, is much larger than the number of generators.  The generators are taken as in the table, but this set is not unique; it is chosen here to agree with our earlier choice of measurements.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.1: ''The rows give the Pauli matrices which are included in a tensor product, in order, in an element of the Pauli group.  Each column corresponds to the qubit, q1-q9, on which the operator in that column will act.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q1&lt;br /&gt;
|q2&lt;br /&gt;
|q3&lt;br /&gt;
|q4&lt;br /&gt;
|q5&lt;br /&gt;
|q6&lt;br /&gt;
|q7&lt;br /&gt;
|q8&lt;br /&gt;
|q9&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_7\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_8\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given the generators of the stabilizer of the code, the objective is to construct the code words: the explicit states that are eigenstates of these operators with eigenvalue +1.  From the top row, it is clear that the first two qubits must be the same, whether zero or one, so that the parity is even.  Similarly, qubits two and three must be the same, and thus the first three must all be the same.  Likewise, the middle three and the last three must each be the same.  The last two generators state that flipping the first six bits at once produces the same state, and flipping the last six bits together produces the same state.  Thinking in blocks of three (since the first six generators give blocks of three), the code words are symmetric under the interchange of zeroes and ones in pairs of triplet blocks.  Choosing the symmetric and anti-symmetric combinations of the block states leads to the Shor code words given in [[#eq7.10|Equation (7.10)]].&lt;br /&gt;
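The same projector construction used for small codes applies here as well.  In the sketch below (string notation and helper names ours), the generators of Table 7.1 are multiplied as &amp;lt;math&amp;gt;(\mathbb{I}+S_i)/2\,\!&amp;lt;/math&amp;gt; to project onto the code space, which comes out two-dimensional, consistent with one encoded qubit:&lt;br /&gt;

```python
import numpy as np

PAULIS = {
    'I': np.eye(2),
    'X': np.array([[0.0, 1.0], [1.0, 0.0]]),
    'Z': np.array([[1.0, 0.0], [0.0, -1.0]]),
}

def pauli_string(label):
    """Tensor product for a nine-character string over I, X, Z."""
    out = np.array([[1.0]])
    for ch in label:
        out = np.kron(out, PAULIS[ch])
    return out

# The eight stabilizer generators of Table 7.1, one string per row.
generators = [
    'ZZIIIIIII', 'IZZIIIIII', 'IIIZZIIII', 'IIIIZZIII',
    'IIIIIIZZI', 'IIIIIIIZZ', 'XXXXXXIII', 'IIIXXXXXX',
]

# Projector onto the joint +1 eigenspace of all eight generators.
P = np.eye(512)
for g in generators:
    P = P @ (np.eye(512) + pauli_string(g)) / 2

code_dimension = int(round(np.trace(P)))  # 2**(9 - 8) = 2
```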
&lt;br /&gt;
&lt;br /&gt;
==CSS codes==&lt;br /&gt;
&lt;br /&gt;
There is a class of quantum error correcting codes called the CSS codes after their inventors  [[Bibliography#CalderbankNShor|Calderbank and Shor]], and [[Bibliography#Steane:prsl|Steane]].   These are also stabilizer codes, but their construction is different and somewhat informative due to the connection to classical error correction.  However, given that they are stabilizer codes, the stabilizer formalism and tools can be used for encoding, etc.  &lt;br /&gt;
&lt;br /&gt;
The CSS codes are constructed from two classical linear codes, say &amp;lt;math&amp;gt; \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \mathcal{C}_2\,\!&amp;lt;/math&amp;gt;.  This is done by taking advantage of the parity check matrices from the classical coding theory.  In this section, this construction is briefly described.  In the next section, the seven qubit CSS code is described.  &lt;br /&gt;
&lt;br /&gt;
Recall from the discussion of the [[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor code]] that a phase-flip code can be constructed from a bit-flip code by using Hadamard gates in order to change the basis from &amp;lt;math&amp;gt; |0\rangle,|1\rangle\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; |+\rangle,|-\rangle\,\!&amp;lt;/math&amp;gt;.  Thus all of the error detection and correction can be accomplished by translating from one basis to the other.  &lt;br /&gt;
&lt;br /&gt;
Keeping this in mind, a quantum error correcting code can be constructed from a classical error correcting code using the following trick.  (See [[Bibliography#Steane:prl|Steane]] or the [[Bibliography#Gottesman:rev09|review by Gottesman]].)  Take the classical parity check matrix &amp;lt;math&amp;gt; P_1\,\!&amp;lt;/math&amp;gt; for a classical &amp;lt;math&amp;gt;[n_1,k_1,d_1]\,\!&amp;lt;/math&amp;gt; error correcting code &amp;lt;math&amp;gt;\mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity operator &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt;, and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;.  This turns the rows into a set of stabilizer elements that detect and correct &amp;lt;math&amp;gt;t_1=(d_1-1)/2\,\!&amp;lt;/math&amp;gt; bit-flip errors, just as the classical code did.  Then, given another classical &amp;lt;math&amp;gt;[n_2,k_2,d_2]\,\!&amp;lt;/math&amp;gt; error correcting code &amp;lt;math&amp;gt;\mathcal{C}_2\,\!&amp;lt;/math&amp;gt;, replace all zero entries of its parity check matrix with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; and all one entries with the Pauli matrix &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;.  This turns the rows into a set of stabilizer elements that detect and correct &amp;lt;math&amp;gt;t_2=(d_2-1)/2\,\!&amp;lt;/math&amp;gt; phase-flip errors.  This would give a stabilizer code with one possible caveat: the operators in the stabilizer all need to commute with each other.  The way to ensure that the &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt; generators and  &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; generators commute is to combine the codes in a particular way.  &lt;br /&gt;
&lt;br /&gt;
The dual of a code (denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;) is also a code, and it is not too difficult to show that the parity check matrix for &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; is the generator matrix for &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  It turns out that if (and only if) &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, then the two codes combine to produce an &amp;lt;math&amp;gt;[[n,k_1+k_2-n,d]]\,\!&amp;lt;/math&amp;gt; stabilizer code, where &amp;lt;math&amp;gt;d\geq \text{min}(d_1,d_2)\,\!&amp;lt;/math&amp;gt;.  That is, the generators for each of the two codes will commute with each other.  &lt;br /&gt;
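The commutation requirement can be checked directly at the level of the classical matrices: a &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;-type generator built from a row of the first parity check matrix commutes with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;-type generator built from a row of the second exactly when the two rows share an even number of ones, i.e. when the product of the parity check matrices vanishes mod 2.  A minimal sketch (matrix name ours) for the case &amp;lt;math&amp;gt;\mathcal{C}_1 = \mathcal{C}_2\,\!&amp;lt;/math&amp;gt; equal to the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code:&lt;br /&gt;

```python
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code.
H = np.array([
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

# A Z-type generator from row a and an X-type generator from row b commute
# iff rows a and b share an even number of ones, i.e. (H @ H.T) mod 2 = 0.
overlaps = (H @ H.T) % 2
css_generators_commute = not overlaps.any()
```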
&lt;br /&gt;
The two codes, one protecting against bit flips and one against phase flips, thus combine so that the resulting code can correct any error, including &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; errors, which are composed of both a bit flip and a phase flip.  The minimum distance is at least the smaller of the distances of the two codes; it could actually be higher if the code is degenerate.  &lt;br /&gt;
&lt;br /&gt;
===Steane's Seven Qubit Code===&lt;br /&gt;
&lt;br /&gt;
The seven-qubit quantum error correcting code, originally described by Steane, is a member of the class of CSS quantum error correcting codes.  In fact it is the smallest such code, and has  &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;.  It is a &amp;lt;math&amp;gt;[[7,1,3]]\,\!&amp;lt;/math&amp;gt; quantum error correcting code, using 7 qubits to encode one logical (or data) qubit such that one arbitrary error can be detected and corrected.  This code has been studied extensively, since it can be made fault tolerant (explained below).  &lt;br /&gt;
&lt;br /&gt;
This code is actually based on the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code discussed in [[Appendix F - Classical Error Correcting Codes|Appendix F]].  Let us first recall the parity check matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
P =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
\end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.21}}&lt;br /&gt;
and the generator matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
G =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.22}}&lt;br /&gt;
Relating this back to the stabilizer formalism, the generators can be written using the parity check matrix as described above.  They are given in [[#Table7.2|Table 7.2]].  The first three rows each give the elements of the tensor product, in order, for the stabilizer elements of a code that can protect against bit flips.  The next three give stabilizers for the phase-flip code.  From these one may get the code words.  The logical zero and one are given below.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.2: ''The first three rows give the stabilizers for the bit-flip error correcting code.  The next three are for the phase-flip code. (See also [[#Table7.1|Table 7.1]] for further explanation.)''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Steane's 7-qubit code encodes the logical zero using all even weight classical code vectors, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|0_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{even }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.23}}&lt;br /&gt;
The odd weight classical code vectors are used to encode the logical one state,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|1_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{odd }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.24}}&lt;br /&gt;
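As a sketch (Python with NumPy; illustrative only), one can enumerate the sixteen Hamming codewords from the generator matrix of Equation 7.22 and confirm that exactly eight have even weight (entering the logical zero) and eight have odd weight (entering the logical one):

```python
import numpy as np
from itertools import product

# The 16 codewords of the [7,4,3] Hamming code, generated from the
# generator matrix G of Equation 7.22.  The 8 even-weight codewords
# enter |0_L> (Equation 7.23) and the 8 odd-weight ones enter |1_L>.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])

codewords = [tuple(int(x) for x in np.mod(np.array(m) @ G, 2))
             for m in product([0, 1], repeat=4)]
even = [v for v in codewords if sum(v) % 2 == 0]
odd = [v for v in codewords if sum(v) % 2 == 1]
print(len(even), len(odd))  # 8 8
```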
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Continue to '''Chapter 8 - Decoherence-Free/Noiseless Subsystems''']]&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Skip to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1754</id>
		<title>Chapter 7 - Quantum Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_7_-_Quantum_Error_Correcting_Codes&amp;diff=1754"/>
		<updated>2011-11-28T14:04:42Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
If information were stored redundantly in some set of quantum states, then it would be possible to use the redundancy to detect and correct errors.  Quantum error correcting codes aim to encode quantum information into states in just such a redundant fashion.  It is worth noting that classical error correcting codes and coding theory have been around for a long time, and many of the ideas and methods of quantum error correction are imported from classical error correction.  However, quantum error correction requires extra care when measuring to detect and correct errors because superpositions of states must be preserved.  In addition, qubits can experience errors that classical bits cannot.  (For example, there is no phase-flip error on a classical bit.)  This chapter contains an introduction to quantum error correction including simple examples of quantum error correcting codes.   &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Classical Code===&lt;br /&gt;
&lt;br /&gt;
Let us first consider a simple example of a classical error correcting code.  Consider a signal which is composed only of zeroes and ones.  (For most of these notes, these are the only types of signals: bits and their quantum analogue, qubits.)  An error occurs in such a sequence if the sender sends a 1 and the receiver receives a 0 for one element of the sequence, or the sender sends a 0 and the receiver receives a 1.  In other words, for this type of encoding, an error is a &amp;quot;classical bit-flip error&amp;quot; which turns a 0 into a 1 and a 1 into a 0.  A simple classical error correcting code which protects against such bit-flip errors is the following.  Rather than use the state 0, the state is encoded redundantly: the state 000 is used.  This is called an encoded zero state or a logical zero state.  Likewise, 111 is used as an encoded 1, or logical 1.  Now suppose one bit, say the first, is flipped when the encoded state 111 is sent.  If one (and only one) of the bits is flipped, the encoded state can be fixed by flipping the outlier so that it agrees with the other two.  &lt;br /&gt;
&lt;br /&gt;
Let us assume that each bit is flipped independently with probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, so the probability that a given bit is not flipped is &amp;lt;math&amp;gt;1-p\,\!&amp;lt;/math&amp;gt;.  The probability that exactly two of the three bits are flipped is then &amp;lt;math&amp;gt;3(1-p)p^2\,\!&amp;lt;/math&amp;gt; (there are three choices for the bit left unflipped), and the probability that all three are flipped is &amp;lt;math&amp;gt;p^3\,\!&amp;lt;/math&amp;gt;.  Since majority voting fails only in these two cases, the code will help us if &amp;lt;math&amp;gt;p &amp;gt; 3(1-p)p^2 +p^3\,\!&amp;lt;/math&amp;gt;, which happens when &amp;lt;math&amp;gt;p&amp;lt;1/2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
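The threshold arithmetic above can be checked with a short sketch (Python; the helper name is ours, not from the text):

```python
# Majority-vote decoding of the 3-bit repetition code fails only when
# two or three bits flip.  Comparing that failure probability with the
# bare single-bit error probability p reproduces the p < 1/2 threshold.
def logical_error_prob(p):
    """Probability that majority voting decodes the wrong bit."""
    return 3 * (1 - p) * p**2 + p**3

# Encoding helps for p below 1/2 and hurts above it.
for p in (0.01, 0.1, 0.4, 0.6):
    print(p, logical_error_prob(p) < p)
```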
&lt;br /&gt;
This example will be used below to find a simple bit-flip code for a quantum system.&lt;br /&gt;
&lt;br /&gt;
===Further Reading===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Appendix F - Classical Error Correcting Codes|Appendix F]] contains a brief introduction to classical error correction.  Many of the concepts and definitions in that appendix will be helpful for understanding the material in this chapter.  However, the chapter itself is somewhat self-contained.  When more explanation is required or desired, it will likely be helpful to read or reread [[Appendix F - Classical Error Correcting Codes|Appendix F]] and/or consult the references there.&lt;br /&gt;
&lt;br /&gt;
==Shor's Nine-Qubit Quantum Error Correcting Code==&lt;br /&gt;
&lt;br /&gt;
Shor's nine-qubit quantum error correcting code is important for several reasons.  Historically, it is important because it provides the first example of a quantum error correcting code which, in principle, can correct arbitrary single-qubit errors.  Pedagogically, it is important because it is an example which can be understood in terms of the simple classical error correcting code given above.  It also uses many of the standard assumptions of more general quantum error correcting codes.  Therefore, it is presented as our first quantum error correcting code and, as will be seen later, an example of what is called a stabilizer code, which is a very general category.  &lt;br /&gt;
&lt;br /&gt;
The Shor code is introduced in parts, bit-flip and phase-flip, and then in its entirety.  Since  the phase-flip code follows from the bit-flip code (as discussed below), the bit-flip code is discussed in great detail.  &lt;br /&gt;
&lt;br /&gt;
===Bit-flip Errors: A Quantum Code===&lt;br /&gt;
&lt;br /&gt;
The quantum bit-flip code uses three qubits to encode one, just as the classical bit-flip code above uses three bits.  The state &amp;lt;math&amp;gt;  \left\vert 0\right\rangle \otimes \left\vert 0\right\rangle\otimes \left\vert 0\right\rangle = \left\vert 000\right\rangle = \left\vert 0_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is the logical state representing the zero state of the encoded qubit.  (The subscript L indicates that it is a logical state and the b indicates that it is a bit-flip code.  We will see below why this distinction is helpful.)  Similarly, &amp;lt;math&amp;gt;\left\vert 111\right\rangle = \left\vert 1_{bL}\right\rangle\,\!&amp;lt;/math&amp;gt; is used for the logical one state.  &lt;br /&gt;
&lt;br /&gt;
====Encoding the Logical State====&lt;br /&gt;
&lt;br /&gt;
Note that one cannot just clone a state to produce redundancy due to the [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|No-Cloning Theorem]].  Also, the encoded state needs to preserve superpositions such as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle =  \alpha\left|0\right\rangle + \beta\left|1\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.1}}&lt;br /&gt;
To encode the state redundantly, cloning is not required.  The encoding can be accomplished using the &amp;lt;math&amp;gt; CNOT \,\!&amp;lt;/math&amp;gt; gate twice.  Simply apply &amp;lt;math&amp;gt; CNOT_{13} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; CNOT_{12} \,\!&amp;lt;/math&amp;gt; to the following state of three qubits,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi\right\rangle\left|00\right\rangle =  (\alpha\left|0\right\rangle + \beta\left|1\right\rangle)\left|00\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This will produce  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left|\psi_L\right\rangle =  \alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle = \alpha\left|000\right\rangle + \beta\left|111\right\rangle . &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.2}}&lt;br /&gt;
The circuit diagram for this is given in [[#Figure 7.1|Figure 7.1]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccencode.jpg|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.1:  Circuit diagram for encoding a qubit into a 3-qubit bit-flip protected code.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
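A minimal statevector sketch of this encoding (Python with NumPy; the `cnot` helper is ours, not a library routine): two CNOTs controlled on the first qubit take (a|0> + b|1>)|00> to a|000> + b|111>.

```python
import numpy as np

# Sketch of the encoding in Figure 7.1.  Basis order is |q1 q2 q3>,
# with q1 the most significant bit of the statevector index.
def cnot(state, control, target, n=3):
    """Apply a CNOT (1-indexed control/target) to an n-qubit statevector."""
    out = np.zeros_like(state)
    for i, amp in enumerate(state):
        bits = [(i >> (n - k)) & 1 for k in range(1, n + 1)]
        if bits[control - 1]:
            bits[target - 1] ^= 1
        j = sum(b << (n - k) for k, b in zip(range(1, n + 1), bits))
        out[j] += amp
    return out

a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0b000], psi[0b100] = a, b          # (a|0> + b|1>)|00>
psi = cnot(cnot(psi, 1, 3), 1, 2)      # CNOT_13 then CNOT_12
print(psi[0b000], psi[0b111])          # 0.6 0.8
```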
&lt;br /&gt;
====Error Syndrome Extraction====&lt;br /&gt;
&lt;br /&gt;
Now a method for measurement and recovery is needed.  &lt;br /&gt;
The problem is that in quantum mechanics one cannot just measure the three states to see if they agree; a quantum state can be in a superposition of the (logical) zero state and the (logical) one state as above, and &lt;br /&gt;
a measurement of the first qubit to see if it is in the state zero or not will immediately produce the state &amp;lt;math&amp;gt; \left| 000 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\alpha|^2 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left| 111 \right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt; |\beta|^2 \,\!&amp;lt;/math&amp;gt;, thus destroying the superposition of the qubit state.  The state would then be one that can be described as containing only classical information.  (Essentially it is equivalent to the classical 000 or 111 binary state.)  Since we need to preserve arbitrary superpositions, we cannot use this method for determining whether or not an error occurred.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that a bit-flip error occurs on &amp;lt;math&amp;gt;\left|\psi_L\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
The objective is to determine if the state has experienced a bit-flip error or not without ruining the superposition and, if it has an error, to determine which qubit experienced the error. This can be done by checking to see if the first two qubits are the same or not and then checking to see if the last two qubits are the same or not without ever determining whether the state is the logical zero, logical one, or a superposition of the two.  &lt;br /&gt;
&lt;br /&gt;
Let us examine this process in detail.  First, notice the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue 1 and &amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; is an eigenvector of &amp;lt;math&amp;gt;\sigma_z \,\!&amp;lt;/math&amp;gt; with eigenvalue -1.  Then any logical state is an eigenstate of the operator &amp;lt;math&amp;gt; \sigma_z\otimes \sigma_z\otimes I\,\!&amp;lt;/math&amp;gt; with eigenvalue of 1 if the first two qubits are the same and -1 if they differ.  For example, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle = (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|000\right\rangle + \beta\left|111\right\rangle) = (1)(\alpha\left|000\right\rangle + \beta\left|111\right\rangle).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.3}}&lt;br /&gt;
Of course the same is also true for the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt;.  However, suppose that a bit-flip error occurs on the first qubit, giving &amp;lt;math&amp;gt; (\sigma_x\otimes I\otimes I) \left\vert\psi_L\right\rangle = (\sigma_x\otimes I\otimes I)(\alpha\left|0_{bL}\right\rangle + \beta\left|1_{bL}\right\rangle) = \alpha\left|100\right\rangle + \beta\left|011\right\rangle\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
(\sigma_z\otimes \sigma_z\otimes I) \left\vert\psi_L\right\rangle &amp;amp;= (\sigma_z\otimes \sigma_z\otimes I) (\alpha\left|100\right\rangle + \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp;= (-\alpha\left|100\right\rangle - \beta\left|011\right\rangle) \\&lt;br /&gt;
&amp;amp; = (-1)(\alpha\left|100\right\rangle + \beta\left|011\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.4}}&lt;br /&gt;
Notice that, in principle, we need not determine either &amp;lt;math&amp;gt; \alpha\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; \beta\,\!&amp;lt;/math&amp;gt;, yet the error can still be detected.  Since measuring the operator &amp;lt;math&amp;gt; I\otimes\sigma_z\otimes \sigma_z \,\!&amp;lt;/math&amp;gt; shows that the last two qubits agree, we know that the error occurred on the first qubit.  In fact, it is not difficult to convince yourself that measuring these two operators will determine which of the three qubits experienced a bit-flip.  Just like the classical bit-flip code, this will not indicate whether an error occurred on two qubits.  Thus the probability of two bit-flip errors must be small, just as in the case of the classical code.&lt;br /&gt;
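A sketch of the resulting syndrome lookup (Python; the names are illustrative): recasting the two eigenvalues as bits (+1 as 0, -1 as 1) gives a pair of parities that single out the flipped qubit without distinguishing the logical states.

```python
# The pair of parities (q1 xor q2, q2 xor q3) -- the eigenvalues of
# Z.Z.I and I.Z.Z written as bits -- identifies which qubit, if any,
# was flipped, without distinguishing |000> from |111>.
SYNDROME = {(0, 0): None, (1, 0): 1, (1, 1): 2, (0, 1): 3}

def flipped_qubit(bits):
    """Return the flipped qubit (1-3) or None, from one basis string."""
    return SYNDROME[(bits[0] ^ bits[1], bits[1] ^ bits[2])]

print(flipped_qubit((1, 0, 0)))  # 1  (same answer for (0, 1, 1))
print(flipped_qubit((1, 1, 1)))  # None
```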
&lt;br /&gt;
Now, we have the idea that we could determine the parity of the pairs of qubits to determine if they are the same or different.  But how would we determine this in practice?  A method for doing this is shown in [[#Figure 7.2|Figure 7.2]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:3qeccSyndrome.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.2: A method for extracting a bit-flip error syndrome from a 3-qubit bit-flip protected code.  The M's are measurements on the ancillary qubits, the results of which are recorded as &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Figure 7.2|Figure 7.2]] gives a circuit for determining the error, also known as a syndrome measurement.  In this example, a bit-flip error occurred on qubit 1 in the 3 qubit QECC.  This is represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate.  After 4 CNOT gates, the two ancillary qubits are measured.  A measurement in the &amp;lt;math&amp;gt;|0\rangle, |1\rangle\,\!&amp;lt;/math&amp;gt; basis gives a result of &amp;lt;math&amp;gt;|1\rangle\,\!&amp;lt;/math&amp;gt; for the top ancillary qubit and &amp;lt;math&amp;gt;|0\rangle\,\!&amp;lt;/math&amp;gt; for the bottom one.  This tells us that the first qubit has had a bit-flip error.  We then feed this information back into the system by implementing an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, thus correcting the error.  &lt;br /&gt;
&lt;br /&gt;
Notice that we have not determined the coefficients of the superposition of the logical zero and logical one states.  We have only determined that there was an error on the first qubit since it does not agree with the other two.  (Assuming that only one bit-flip error could have occurred.)&lt;br /&gt;
&lt;br /&gt;
====Continuous Sets of Errors====&lt;br /&gt;
&lt;br /&gt;
The error, in this case represented by an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate, is not very realistic.  What would be more realistic is that the bit is not flipped completely; it is in a superposition of the zero state and one state.  In other words, we should properly consider the following state, where an error has occurred on the first qubit:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle &amp;amp;=  \alpha(a\left|0\right\rangle + b\left|1\right\rangle) \left|00\right\rangle + \beta(b\left|0\right\rangle + a\left|1\right\rangle)\left|11\right\rangle \\&lt;br /&gt;
 &amp;amp;= \alpha a\left|000\right\rangle + \alpha b\left|100\right\rangle + \beta b\left|011\right\rangle + \beta a\left|111\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.5}}&lt;br /&gt;
This is a rotation about the x-axis by an arbitrary angle with &lt;br /&gt;
&amp;lt;math&amp;gt;a=\cos \theta\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b=i\sin\theta\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  Now suppose that two ancillary qubits are attached to the state&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle = \alpha a\left|00000\right\rangle + \alpha b\left|10000\right\rangle + \beta b\left|01100\right\rangle + \beta a\left|11100\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.6}}&lt;br /&gt;
and the resulting state is put into the circuit that gives the error syndrome given in [[#Figure 7.2|Figure 7.2]].  Let &lt;br /&gt;
&amp;lt;math&amp;gt;V = CNOT_{1{a_1}} CNOT_{2{a_1}} CNOT_{2{a_2}} CNOT_{3{a_2}}\,\!&amp;lt;/math&amp;gt;. Then   &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
 V\left\vert\psi_L^{e_1}\right\rangle\left\vert 00\right\rangle &amp;amp;= (\alpha a\left|00000\right\rangle + \alpha b\left|10010\right\rangle + \beta b\left|01110\right\rangle + \beta a\left|11100\right\rangle) \\&lt;br /&gt;
           &amp;amp;= (\alpha \left|000\right\rangle + \beta\left|111\right\rangle)a\left\vert 00\right\rangle +(\alpha\left|100\right\rangle + \beta \left|011\right\rangle)b\left\vert 10\right\rangle  &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.7}},&lt;br /&gt;
where the two ancillary qubits, denoted &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt; (for the first ancillary qubit which is on top in [[#Figure 7.2|Figure 7.2]]) and &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt; (for the second ancillary qubit which is on bottom in [[#Figure 7.2|Figure 7.2]]), will give the error syndrome.  The measurement of the second ancillary qubit always gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt;.  The measurement of the first gives &amp;lt;math&amp;gt;\left|0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|a|^2\,\!&amp;lt;/math&amp;gt; and, if this occurs, the system will be in its original state and there is no error.  However, if the measurement of the first ancillary qubit gives &amp;lt;math&amp;gt;\left|1\right\rangle\,\!&amp;lt;/math&amp;gt;, which it will with probability &amp;lt;math&amp;gt;|b|^2,\,\!&amp;lt;/math&amp;gt; then the system is left in the state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\alpha\left|100\right\rangle + \beta\left|011\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.8}}&lt;br /&gt;
This indicates that a bit-flip error has occurred on the first qubit.  Such an error is easily corrected with an &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; gate on the first qubit, which will flip it.  &lt;br /&gt;
&lt;br /&gt;
Therefore any single-qubit bit-flip error can be corrected, since we will project into the basis of one bit-flip error and the syndrome measurement indicates which one.  In other words, we have made the error discrete using a projective measurement of the ancilla.&lt;br /&gt;
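A short numerical sketch of this discretization (Python with NumPy; the amplitudes are taken directly from Equation 7.7): the ancilla measurement returns "no error" with probability |a|^2 and "full bit flip" with probability |b|^2, independent of the encoded amplitudes.

```python
import numpy as np

# Error discretization: after the syndrome circuit, the continuously
# rotated state of Equation 7.5 collapses onto either the original
# state or a definite single bit flip.
theta = 0.7                       # arbitrary rotation angle
a, b = np.cos(theta), 1j * np.sin(theta)
alpha, beta = 0.6, 0.8

# Amplitudes from Equation 7.7, grouped by ancilla outcome.
no_error = np.array([alpha * a, beta * a])   # on |000>, |111>, ancilla |00>
flipped = np.array([alpha * b, beta * b])    # on |100>, |011>, ancilla |10>

p0 = np.sum(np.abs(no_error) ** 2)           # = |a|^2, independent of alpha, beta
p1 = np.sum(np.abs(flipped) ** 2)            # = |b|^2
print(np.isclose(p0, np.cos(theta) ** 2), np.isclose(p0 + p1, 1.0))
```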
&lt;br /&gt;
===Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Phase-flip errors&amp;quot; are errors which change the sign of the &amp;lt;math&amp;gt; \left| 1\right\rangle\,\!&amp;lt;/math&amp;gt; state.  This is not a classical error as it does not occur on a classical bit.  However, it does occur on qubits that are not in the zero state.  Thus these errors must be treated.   &lt;br /&gt;
&lt;br /&gt;
Much of what works for the bit-flip errors also works for phase-flip errors once we are able to encode properly.  Let us consider the following states that we will use to encode our logical qubit: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \pm\right\rangle = \frac{1}{\sqrt{2}}(\left\vert 0 \right\rangle \pm \left\vert 1\right\rangle). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.9}}&lt;br /&gt;
In this case, when a &amp;quot;phase-flip&amp;quot; occurs, the &amp;lt;math&amp;gt; \left\vert + \right\rangle \,\!&amp;lt;/math&amp;gt; becomes a &amp;lt;math&amp;gt; \left\vert - \right\rangle \,\!&amp;lt;/math&amp;gt; or vice versa.  Therefore it is similar to the bit-flip error since there are two orthogonal states that are changed into one another by the error.  In this case the error operator is of the form &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt;.  As before, we can encode redundantly by letting &amp;lt;math&amp;gt; \left\vert 0_{pL} \right\rangle = \left\vert +++ \right\rangle \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \left\vert 1_{pL} \right\rangle = \left\vert --- \right\rangle\,\!&amp;lt;/math&amp;gt;.  It is easy to see that this code will enable the detection and correction of one phase error, just as the bit-flip code did for one bit-flip.  In this case we exchange the &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; in the bit-flip code with a &amp;lt;math&amp;gt; \sigma_x \,\!&amp;lt;/math&amp;gt; for the phase-flip code and the process carries through as before.&lt;br /&gt;
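A quick check (Python with NumPy, using the standard Hadamard, Z, and X matrices) of the basis-change argument: conjugating a phase flip by the Hadamard gives a bit flip, which is why the phase-flip code is just the bit-flip code run in the |+>, |-> basis.

```python
import numpy as np

# In the |+>, |-> basis a phase flip looks exactly like a bit flip:
# H Z H = X, and Z turns |+> into |->.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

print(np.allclose(H @ Z @ H, X))  # True
plus = H @ np.array([1, 0])       # |+>
minus = H @ np.array([0, 1])      # |->
print(np.allclose(Z @ plus, minus))  # True: Z|+> = |->
```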
&lt;br /&gt;
===Bit-flip and Phase-flip Errors===&lt;br /&gt;
&lt;br /&gt;
Certainly if a phase-flip error does not have a classical analogue then the combination of bit- and phase-flip errors also does not.  It turns out that by having found a code that will protect against bit-flip errors and another against phase-flip errors, we are able to write down a code that will protect against both.  This was first given by Peter Shor [[Bibliography#Shor:QECC|Shor:1995]], but was also described by Carlton Caves in a very readable paper, [[Bibliography#Caves:QECC|Caves:1999]].  &lt;br /&gt;
&lt;br /&gt;
The way to protect against both is to combine the two codes and take the logical qubits to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle + \left\vert 1_{bL}\right\rangle)\\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle) (\left\vert 0_{bL}\right\rangle - \left\vert 1_{bL}\right\rangle).&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.10}}&lt;br /&gt;
One may also write this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \begin{align}&lt;br /&gt;
\left\vert 0_L\right\rangle &amp;amp;= \left\vert +_{bL}\right\rangle \left\vert +_{bL}\right\rangle  \left\vert +_{bL}\right\rangle \\&lt;br /&gt;
\left\vert 1_L \right\rangle &amp;amp; = \left\vert -_{bL}\right\rangle  \left\vert -_{bL}\right\rangle \left\vert -_{bL}\right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.11}}&lt;br /&gt;
&lt;br /&gt;
This shows that there is a code which protects against bit-flip errors and phase-flip errors by using a redundant encoding comprised of the states that protect against bit flips and the states that protect against phase flips.&lt;br /&gt;
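A sketch (Python with NumPy; normalization factors, omitted in Equation 7.10, are included here) that builds the two nine-qubit code words from three-qubit blocks and confirms they are orthonormal:

```python
import numpy as np

# The Shor code words of Equations 7.10-7.11, built from normalized
# three-qubit blocks (|000> +/- |111>)/sqrt(2).
ket000 = np.zeros(8); ket000[0] = 1
ket111 = np.zeros(8); ket111[7] = 1
plus_b = (ket000 + ket111) / np.sqrt(2)   # |+_bL>
minus_b = (ket000 - ket111) / np.sqrt(2)  # |-_bL>

def kron3(v):
    """Tensor three copies of a block state together."""
    return np.kron(np.kron(v, v), v)

zero_L = kron3(plus_b)    # |0_L>, a 2^9 = 512 component vector
one_L = kron3(minus_b)    # |1_L>
print(np.isclose(zero_L @ one_L, 0.0), np.isclose(zero_L @ zero_L, 1.0))
```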
&lt;br /&gt;
==Quantum Error Correcting Codes: General Properties==&lt;br /&gt;
&lt;br /&gt;
Now that we have seen some examples of quantum error correcting codes, some natural questions come to mind.  Are there general rules for constructing quantum error correcting codes?  In the case of classical codes, there is a disjointness condition and a Hamming bound.  These let us know when it is not possible to construct a quantum error correcting code.  Here, the two analogues for quantum error correcting codes are given, although the disjointness condition is quite different for quantum error correcting codes.  &lt;br /&gt;
&lt;br /&gt;
===The Quantum Error Correcting Code Condition===&lt;br /&gt;
&lt;br /&gt;
Let us consider a quantum system undergoing some noisy evolution.  As described in [[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|Section 6.2]] and [[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Section 6.3]], such an open-system evolution can be described by a quantum operation acting on a density operator,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\rho^\prime= \sum_\alpha A_\alpha \rho A_\alpha^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.12}}&lt;br /&gt;
The operator elements &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; can be used to express what is known as the quantum error correcting code condition&lt;br /&gt;
(See [[Bibliography#NielsenChuang:book|Nielsen and Chuang]],  or [[Bibliography#Nielsen/etal|Nielsen, et al:97]] for the original reference), &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
P  A^\dagger_\beta  A_\alpha P = c_{\alpha\beta}P, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.13}}&lt;br /&gt;
where the &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; are the operators from the operator-sum representation, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is a projector onto the code space.   &lt;br /&gt;
An equivalent expression is (see [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]]),&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\langle i_L| A^\dagger_\beta  A_\alpha |j_L\rangle = c_{\alpha\beta}\delta_{ij}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.14}}&lt;br /&gt;
This is the quantum analogue of the [[Appendix F - Classical Error Correcting Codes#eqF.5|disjointness condition]] for classical error correcting codes.  To interpret it, consider [[#eq7.14|Equation (7.14)]].  It says that if one error &amp;lt;math&amp;gt;A_\beta\,\!&amp;lt;/math&amp;gt; acts on a logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt; and another error (or possibly the same error) &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt; acts on a different logical state &amp;lt;math&amp;gt;|j_L\rangle\,\!&amp;lt;/math&amp;gt;, then the resulting states cannot be equal.  In fact, the statement is stronger: the two resulting states cannot even overlap.  If they did, some measurement would have a nonzero probability of producing an ambiguous result.  The condition also tells us that the constant &amp;lt;math&amp;gt;c_{\alpha\beta}\,\!&amp;lt;/math&amp;gt; is the same for every logical state &amp;lt;math&amp;gt;|i_L\rangle\,\!&amp;lt;/math&amp;gt;, so a given pair of errors acts in the same way on all code words.  This is allowed by the superposition principle, but is not something one finds in classical error correction.  Therefore, the analogy with the classical disjointness condition is very loose.  (See  [[Bibliography#KnillLaflamme:QECC|Knill and Laflamme]] for further explanation.)  &lt;br /&gt;
&lt;br /&gt;
One way to understand [[#eq7.13|Equation (7.13)]] is to show [[#eq7.14|Equation (7.14)]] is true if and only if [[#eq7.13|Equation (7.13)]] is true.  However, these results can be seen as part of a broader and more basic property of quantum systems related to the reversibility of a quantum operation as discussed by [[Bibliography#Nielsen/etal|Nielsen, et al:97]].&lt;br /&gt;
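As a concrete illustration (a minimal numerical sketch, not from the text), one can verify [[#eq7.14|Equation (7.14)]] directly for the three-qubit bit-flip code, whose logical states are |000&gt; and |111&gt; and whose correctable errors are the identity and the three single-qubit bit flips:&lt;br /&gt;

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def kron_all(ops):
    """Tensor product of a list of 2x2 operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Logical states of the three-qubit bit-flip code
zero_L = np.zeros(8); zero_L[0] = 1.0   # |000>
one_L  = np.zeros(8); one_L[7] = 1.0    # |111>
logical = [zero_L, one_L]

# Correctable error set: no error, or a bit flip on one qubit
errors = [kron_all([I2, I2, I2]),
          kron_all([X, I2, I2]),
          kron_all([I2, X, I2]),
          kron_all([I2, I2, X])]

# Check <i_L| E_b^dag E_a |j_L> = c_ab * delta_ij for every pair of errors
for Eb in errors:
    for Ea in errors:
        M = Eb.conj().T @ Ea
        assert np.isclose(logical[0] @ M @ logical[1], 0.0)  # i != j vanishes
        c00 = logical[0] @ M @ logical[0]
        c11 = logical[1] @ M @ logical[1]
        assert np.isclose(c00, c11)                          # same c_ab for all i
print("QEC condition holds for the 3-qubit bit-flip code")
```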
&lt;br /&gt;
===A Basis for Errors===&lt;br /&gt;
&lt;br /&gt;
Using the Pauli matrices and the identity for the errors, any error can be described as a tensor product of operators.  Each term in the tensor product will involve one of four operators, &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;, where the identity &amp;lt;math&amp;gt;\mathbb{I} \;\!&amp;lt;/math&amp;gt; indicates that no error has occurred.  (See [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.5]].)  For example, suppose a code involves five qubits.  Suppose no error occurs on qubit 1, a bit-flip error occurs on qubits 2 and 3, a phase error occurs on qubit 4, and qubit 5 is affected by both types of errors.  This error operator would be &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\otimes X_2\otimes X_3 \otimes Z_4 \otimes Y_5&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.15}}&lt;br /&gt;
or, using a short-hand notation, &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
X_2 X_3 Z_4 Y_5.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.16}}&lt;br /&gt;
This error operator is said to have weight four, since four of the five factors in the tensor product differ from the identity.  &lt;br /&gt;
&lt;br /&gt;
====Definition 1: weight of an operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This provides us with a basis for all errors that can occur.  This is enough, since continuous errors are discretized by the syndrome measurement process.&lt;br /&gt;
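The weight count can be captured in a couple of lines of code (the helper name is ours, written purely for illustration), counting the non-identity factors in a Pauli string:&lt;br /&gt;

```python
# Weight of a Pauli string: the number of non-identity tensor factors.
# 'I' marks "no error" on that qubit.
def weight(pauli_string):
    return sum(1 for p in pauli_string if p != 'I')

# The five-qubit error operator of Eq. (7.15)/(7.16): I (x) X (x) X (x) Z (x) Y
print(weight("IXXZY"))  # -> 4
```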
&lt;br /&gt;
====Definition 2: Distance of a Quantum Error Correcting Code====&lt;br /&gt;
&lt;br /&gt;
The distance of a quantum error correcting code is the minimum weight, greater than zero, of an element &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; of the Pauli group such that the quantum error correcting code condition fails (i.e., such that &amp;lt;math&amp;gt;\langle i_L |G|j_L \rangle = c\delta_{ij}\,\!&amp;lt;/math&amp;gt; is not satisfied).&lt;br /&gt;
&lt;br /&gt;
===Quantum Error Correction for &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; Errors===&lt;br /&gt;
&lt;br /&gt;
A quantum error correcting code that uses &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; qubits to encode &amp;lt;math&amp;gt;k\;\!&amp;lt;/math&amp;gt; logical qubits and can correct up to &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors is denoted &amp;lt;math&amp;gt;[[n,k,2t+1]]\;\!&amp;lt;/math&amp;gt;.  This is similar to the classical code notation except that double brackets are used to distinguish the quantum code from the corresponding classical code.  Using &amp;lt;math&amp;gt;d=2t+1\;\!&amp;lt;/math&amp;gt;, this is also written &amp;lt;math&amp;gt;[[n,k,d]]\;\!&amp;lt;/math&amp;gt;.  When a code satisfies the more restrictive condition &amp;lt;math&amp;gt;c_{\alpha\beta}=0\;\!&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\alpha\neq\beta\;\!&amp;lt;/math&amp;gt; in [[#eq7.14|Equ. (7.14)]], the code is called non-degenerate.  Note that in [[#eq7.14|Equ. (7.14)]] the set of errors to be corrected is given by the operator elements of the operator-sum representation.  It turns out that the set of errors can instead be described by an orthogonal basis, using the unitary degree of freedom in the operator-sum representation from [[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Section 6.4]].  [[Bibliography#NielsenChuang:book|Nielsen and Chuang]] use this to show that the conditions in [[#eq7.13|Equ. (7.13)]] are necessary and sufficient for the existence of a quantum error correcting code.  Thus the necessary and sufficient conditions for being able to correct &amp;lt;math&amp;gt;t\;\!&amp;lt;/math&amp;gt; errors are given by [[#eq7.13|Equ. (7.13)]], or equivalently, [[#eq7.14|Equ. (7.14)]].&lt;br /&gt;
&lt;br /&gt;
===The Quantum Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
Like the classical Hamming bound ([[Appendix F - Classical Error Correcting Codes#The Hamming Bound|Section F.4]]), the quantum Hamming bound is a simple bound on the size of the code for correcting a given number of errors.  In other words, it provides a bound on the rate of the code, &amp;lt;math&amp;gt;k/n\;\!&amp;lt;/math&amp;gt;.  The main difference is that three types of errors can occur to a qubit, corresponding to the three Pauli matrices &amp;lt;math&amp;gt;X \;\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;Z\;\!&amp;lt;/math&amp;gt;.  The number of possible error operators of weight &amp;lt;math&amp;gt;t \;\!&amp;lt;/math&amp;gt; acting on a code of &amp;lt;math&amp;gt;n \;\!&amp;lt;/math&amp;gt; qubits is therefore &amp;lt;math&amp;gt;3^t C(n,t)\;\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;C(n,t)\;\!&amp;lt;/math&amp;gt; is the binomial coefficient.  Since every logical state, and every logical state with a correctable error acting on it, must be mutually orthogonal, the number of such states can be no larger than the dimension of the Hilbert space, which is &amp;lt;math&amp;gt;2^n\;\!&amp;lt;/math&amp;gt;.  That is,&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
m\sum_{i=0}^t 3^i C(n,i) \leq 2^n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.17}}&lt;br /&gt;
where &amp;lt;math&amp;gt;m\;\!&amp;lt;/math&amp;gt; is the number of code words.&lt;br /&gt;
&lt;br /&gt;
Just as in the classical case, when &amp;lt;math&amp;gt;m= 2^k\;\!&amp;lt;/math&amp;gt;, we may take the logarithm of [[#eq7.17|Equation (7.17)]] and, for large &amp;lt;math&amp;gt;n,t \;\!&amp;lt;/math&amp;gt;, keep the dominant &amp;lt;math&amp;gt;i=t\;\!&amp;lt;/math&amp;gt; term and use &amp;lt;math&amp;gt;\log C(n,t)\approx n H(t/n)\;\!&amp;lt;/math&amp;gt; to get&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;&lt;br /&gt;
k/n\leq 1-(t/n)\log 3-H(t/n),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H(x) = -x\log x -(1-x)\log(1-x)\;\!&amp;lt;/math&amp;gt; and all logarithms are base 2.  &lt;br /&gt;
&lt;br /&gt;
[[#eq7.17|Equation (7.17)]] tells us that the smallest possible code protecting one logical qubit against one arbitrary error uses five physical qubits.  (Here &amp;lt;math&amp;gt;m=2\;\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;k=1\;\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;t=1\;\!&amp;lt;/math&amp;gt;, so the smallest &amp;lt;math&amp;gt;n\;\!&amp;lt;/math&amp;gt; satisfying the bound is &amp;lt;math&amp;gt;n=5\;\!&amp;lt;/math&amp;gt;.)&lt;br /&gt;
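This count can be checked with a short search (a sketch assuming only the bound in [[#eq7.17|Equation (7.17)]]):&lt;br /&gt;

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Quantum Hamming bound, Eq. (7.17): 2^k * sum_{i<=t} 3^i C(n,i) <= 2^n."""
    return 2**k * sum(3**i * comb(n, i) for i in range(t + 1)) <= 2**n

# Smallest n encoding k=1 logical qubit against t=1 arbitrary error
n = 1
while not hamming_bound_ok(n, 1, 1):
    n += 1
print(n)  # -> 5
```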
&lt;br /&gt;
==Stabilizer Codes==&lt;br /&gt;
&lt;br /&gt;
The mathematical definition of a stabilizer is given in [[Appendix D - Group Theory#Definition 10: Stabilizer|Section D.6.1]].  Loosely speaking, it is a subgroup of transformations that leave a particular point in space fixed.  The theory of stabilizer codes is based on this notion.  &lt;br /&gt;
&lt;br /&gt;
Stabilizer codes are a family of quantum error correcting codes described by the stabilizer of a state (really a set of states) in the Hilbert space.  They are distinguished for several reasons.  First, they form a large class of quantum error correcting codes.  Second, they are conveniently described by operators rather than states, which turns out to be possible for many quantum error correcting codes.  Other reasons will be discussed later.&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
We will begin by revisiting the three-qubit quantum error correcting code presented in some detail in [[Chapter 7 - Quantum Error Correcting Codes#Bit-flip Errors: A Quantum Code|Section 7.2.1]].  Recall that a bit-flip error on one of the three qubits forming the logical qubit is detectable if we can measure the parity of pairs of qubits.  These operators could be chosen to be &amp;lt;math&amp;gt; Z_1Z_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; Z_2Z_3\,\!&amp;lt;/math&amp;gt;, although any two distinct pairs would work.  Note that the basis states  &amp;lt;math&amp;gt; |000\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |111\rangle\,\!&amp;lt;/math&amp;gt;, as well as any linear combination of these states, are eigenstates of these operators with eigenvalue +1.  The states with a single correctable error are also eigenstates of these operators, but at least one of the eigenvalues will be -1.  A single-qubit bit-flip error is one of the operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt;.  This is the idea behind stabilizer quantum error correcting codes.  The stabilizers act as parity checks on the code words.  &lt;br /&gt;
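A small numerical check (illustrative, not from the text) makes the parity-check picture explicit: each single bit flip on the code word |000&gt; produces a distinct pair of eigenvalues for the two checks, so the error can be identified and undone:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# The two parity-check operators of the three-qubit bit-flip code
Z1Z2 = kron3(Z, Z, I2)
Z2Z3 = kron3(I2, Z, Z)

psi = np.zeros(8); psi[0] = 1.0   # code word |000>

flips = {"none": kron3(I2, I2, I2),
         "X1": kron3(X, I2, I2),
         "X2": kron3(I2, X, I2),
         "X3": kron3(I2, I2, X)}

# Eigenvalues of the two checks on the (possibly) corrupted state
syndromes = {}
for name, E in flips.items():
    v = E @ psi
    syndromes[name] = (int(v @ (Z1Z2 @ v)), int(v @ (Z2Z3 @ v)))

print(syndromes)
```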
&lt;br /&gt;
The stabilizer is an abelian subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of the [[Appendix D - Group Theory#Definition 12: Pauli Group|Pauli group]] (abelian means all of its elements commute with each other).  However, each of the error operators &amp;lt;math&amp;gt; X_1\,\!&amp;lt;/math&amp;gt;,  &amp;lt;math&amp;gt;X_2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; X_3\,\!&amp;lt;/math&amp;gt; anti-commutes with at least one element of the stabilizer.   So the parity check, described by saying that states with errors are eigenstates of some stabilizer element with eigenvalue -1, is equivalent to saying that some stabilizer operator anti-commutes with the error operator. &lt;br /&gt;
&lt;br /&gt;
The elements of the stabilizer stabilize the code words; that is, code words are eigenstates of the stabilizer operators with eigenvalue +1, while states with errors are eigenstates of some stabilizer element with eigenvalue -1.  For this class of quantum error correcting codes this can always be arranged. Note that if &amp;lt;math&amp;gt;|\psi\rangle\,\!&amp;lt;/math&amp;gt; is a code word, &amp;lt;math&amp;gt;S\in \mathcal{S}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; is an error operator, then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
SE|\psi\rangle &amp;amp;= S|\psi^\prime\rangle =(-1)|\psi^\prime\rangle \\&lt;br /&gt;
ES|\psi\rangle &amp;amp;= E|\psi\rangle = |\psi^\prime\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|7.19}}&lt;br /&gt;
or&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(-1)SE|\psi\rangle = ES|\psi\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.20}}&lt;br /&gt;
This says that &amp;lt;math&amp;gt;SE + ES =0\,\!&amp;lt;/math&amp;gt; when acting on the code words.  In other words, &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; anti-commute on any state that &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; stabilizes and that &amp;lt;math&amp;gt;E\,\!&amp;lt;/math&amp;gt; maps to an eigenstate of &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; with eigenvalue -1.&lt;br /&gt;
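For Pauli errors the anti-commutation in fact holds as a full operator identity, not just on the code words, which is easy to confirm numerically (an illustrative sketch using the three-qubit bit-flip code):&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

S = np.kron(np.kron(Z, Z), I2)   # stabilizer element Z1Z2
E = np.kron(np.kron(X, I2), I2)  # error X1

# S and E anti-commute as operators: SE + ES = 0
assert np.allclose(S @ E + E @ S, 0)

# ...so the code word |000> (a +1 eigenstate of S) is mapped by E
# to a -1 eigenstate of S, which the syndrome measurement detects.
psi = np.zeros(8); psi[0] = 1.0
v = E @ psi
assert np.isclose(v @ (S @ v), -1.0)
print("Z1Z2 anti-commutes with X1")
```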
&lt;br /&gt;
This is the basic idea of the stabilizer code construction to be discussed in general in the next section.&lt;br /&gt;
&lt;br /&gt;
===General Stabilizer Formalism===&lt;br /&gt;
&lt;br /&gt;
This brief section provides general definitions and theorems for stabilizer quantum error correcting codes.  The next section provides an explicit example.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Stabilizer Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt; \mathcal{S}\subset \mathcal{P}_n \,\!&amp;lt;/math&amp;gt; be an abelian subgroup of the Pauli group that does not contain &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S}) = \{|\psi\rangle \; |\; S|\psi\rangle=|\psi\rangle \mbox{ for all } S\in \mathcal{S}\}.\,\!&amp;lt;/math&amp;gt;  Then &amp;lt;math&amp;gt;\mathcal{C}(\mathcal{S})\,\!&amp;lt;/math&amp;gt; is a stabilizer code and &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is its stabilizer. &lt;br /&gt;
&lt;br /&gt;
This formalizes what was stated earlier, which is that all states of the code space are eigenstates of elements of the stabilizer subgroup with eigenvalue +1.  However, it also says more.  It tells us that any subgroup of the Pauli group that is abelian and does not contain the elements &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; can be used to construct a stabilizer code by simply choosing the set of states that are eigenstates with eigenvalue +1.  Another way of saying this is that the states are fixed, or invariant, under the action of the stabilizer elements.  Let us see why the restriction not allowing &amp;lt;math&amp;gt; -\mathbb{I},\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; must be included.  Suppose that &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the set &amp;lt;math&amp;gt; \mathcal{S}.\,\!&amp;lt;/math&amp;gt;  It then follows that &amp;lt;math&amp;gt; -\mathbb{I}|\psi\rangle = |\psi\rangle\,\!&amp;lt;/math&amp;gt;.  Only the zero vector satisfies this equation, so the code would contain no states at all.  (The states must be +1 eigenstates of every stabilizer element.)  Now, suppose one of &amp;lt;math&amp;gt;\pm i\mathbb{I}\,\!&amp;lt;/math&amp;gt; were in the stabilizer subgroup.  Then its square is also in the stabilizer, since a subgroup must be closed under multiplication.  But the square of either of these is &amp;lt;math&amp;gt; -\mathbb{I}\,\!&amp;lt;/math&amp;gt;, which cannot be in the set.  Thus none of these three elements can be included.&lt;br /&gt;
&lt;br /&gt;
====Encoding/Decoding from Stabilizer Generators====&lt;br /&gt;
&lt;br /&gt;
Once one has obtained the stabilizer subgroup, it remains to find the code words, that is, the states with eigenvalue +1.  To do this, one only needs to ensure that the generators of the stabilizer satisfy this condition, since the generators give all other stabilizer elements through multiplication.  Therefore, if a state has eigenvalue +1 for all generators, it will also have eigenvalue +1 for all stabilizer elements.  &lt;br /&gt;
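One way to carry this out numerically for a small code (a sketch using the three-qubit bit-flip code, not a general algorithm) is to multiply together the projectors onto the +1 eigenspace of each generator; the product projects exactly onto the code space:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Generators of the three-qubit bit-flip code stabilizer
generators = [kron3(Z, Z, I2), kron3(I2, Z, Z)]

# Projector onto the simultaneous +1 eigenspace: product of (I + g)/2
P = np.eye(8)
for g in generators:
    P = P @ (np.eye(8) + g) / 2

# The code space has dimension 2: one logical qubit
print(int(round(np.trace(P).real)))  # -> 2

# |000> and |111> span it
for idx in (0, 7):
    e = np.zeros(8); e[idx] = 1.0
    assert np.allclose(P @ e, e)
```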
&lt;br /&gt;
For smaller codes, finding the set of states could be as easy as satisfying the constraints given by the small number of generators.  Larger, more complicated codes may, however, require a lot of work to find the states.  Cleve and Gottesman gave an algorithm for finding the code words using an efficient gate array obtained from the stabilizer formalism (http://arxiv.org/abs/quant-ph/9607030).  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that the decoding and error detection and correction steps also require work to find explicit circuits.  However, for many stabilizer codes, decoding is simply encoding in reverse.  (This is not so for every quantum error correcting code.)  &lt;br /&gt;
&lt;br /&gt;
Although these accomplishments are very important, more work is required to ensure circuits are fault-tolerant---that errors do not propagate or grow as the computation progresses.  If errors were allowed to spread without these constraints, the computation would eventually fail.&lt;br /&gt;
&lt;br /&gt;
===A Return to Shor's Code===&lt;br /&gt;
&lt;br /&gt;
Let us consider the set of operators in [[#Table7.1|Table 7.1]], where each operator in a row is included, in order, in the tensor product that forms an element of the Pauli group.  These elements form the eight [[Appendix D - Group Theory#Definition 14: Generators of a Group|generators]] &amp;lt;math&amp;gt;S_i\,\!&amp;lt;/math&amp;gt; of the stabilizer.  The order of the stabilizer subgroup (&amp;lt;math&amp;gt;2^8=256\,\!&amp;lt;/math&amp;gt;) is much larger than the number of generators.  The generators are taken as in the table, but this set is not unique; it is chosen to agree with our earlier choice of measurements.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.1: ''The rows give the Pauli matrices which are included in a tensor product, in order, in an element of the Pauli group.  Each column corresponds to the qubit, q1-q9, on which the operator in that column will act.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|q 8&lt;br /&gt;
|q 9&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_7\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_8\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having the generators of the stabilizer of the code, the objective is to construct the code words, the explicit states that are eigenstates of these operators with eigenvalue +1.  From the top row, it is clear that the first two qubits must be the same, whether zero or one, so that the parity is even.  Similarly, qubits two and three must be the same, and thus the first three must all be the same.  Likewise, the middle three and the last three must each be the same.  The last two generators state that flipping the first six bits at once produces the same state, and that flipping the last six bits together produces the same state.  Thinking in blocks (since the first six generators give blocks of three) tells us that the states are symmetric under the interchange of zeroes and ones in pairs of triplet blocks.  One may then choose the symmetric and anti-symmetric combinations of states, which leads to the Shor code words given in [[#eq7.10|Equation (7.10)]].&lt;br /&gt;
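These constraints can be verified directly (an illustrative numerical check against Table 7.1 and Equation (7.10), not part of the original text): build |0_L&gt; as three blocks of (|000&gt;+|111&gt;)/sqrt(2) and apply each generator:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def pauli(string):
    """Build a 9-qubit operator from a string such as 'ZZIIIIIII'."""
    table = {'I': I2, 'X': X, 'Z': Z}
    return kron_all([table[c] for c in string])

# The eight generators from Table 7.1
gens = [pauli("ZZIIIIIII"), pauli("IZZIIIIII"),
        pauli("IIIZZIIII"), pauli("IIIIZZIII"),
        pauli("IIIIIIZZI"), pauli("IIIIIIIZZ"),
        pauli("XXXXXXIII"), pauli("IIIXXXXXX")]

# Shor code word |0_L>: three blocks of (|000> + |111>)/sqrt(2), Eq. (7.10)
block = np.zeros(8); block[0] = block[7] = 1.0
block /= np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)

for g in gens:
    assert np.allclose(g @ zero_L, zero_L)   # eigenvalue +1 for every generator
print("all 8 generators stabilize |0_L>")
```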
&lt;br /&gt;
&lt;br /&gt;
==CSS codes==&lt;br /&gt;
&lt;br /&gt;
There is a class of quantum error correcting codes called the CSS codes after their inventors  [[Bibliography#CalderbankNShor|Calderbank and Shor]], and [[Bibliography#Steane:prsl|Steane]].   These are also stabilizer codes, but their construction is different and instructive because of its connection to classical error correction.  Since they are stabilizer codes, the stabilizer formalism and tools can be used for encoding and decoding.  &lt;br /&gt;
&lt;br /&gt;
The CSS codes are constructed from two classical linear codes, say &amp;lt;math&amp;gt; \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \mathcal{C}_2\,\!&amp;lt;/math&amp;gt;.  This is done by taking advantage of the parity check matrices from the classical coding theory.  In this section, this construction is briefly described.  In the next section, the seven qubit CSS code is described.  &lt;br /&gt;
&lt;br /&gt;
Recall from the discussion of the [[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor code]] that a phase-flip code can be constructed from a bit-flip code by using Hadamard gates in order to change the basis from &amp;lt;math&amp;gt; |0\rangle,|1\rangle\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; |+\rangle,|-\rangle\,\!&amp;lt;/math&amp;gt;.  Thus all of the error detection and correction can be accomplished by translating from one basis to the other.  &lt;br /&gt;
&lt;br /&gt;
Keeping this in mind, a quantum error correcting code can be constructed from a classical error correcting code using the following trick.  (See [[Bibliography#Steane:prl|Steane]] or the [[Bibliography#Gottesman:rev09|review by Gottesman]].)  Take the classical parity check matrix &amp;lt;math&amp;gt; P_1\,\!&amp;lt;/math&amp;gt; for a classical error correcting &amp;lt;math&amp;gt;[n_1,k_1,d_1]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; operator (matrix), and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt;.  This turns the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_1=(d_1-1)/2\,\!&amp;lt;/math&amp;gt; bit-flip errors, just as the classical code did.  Then, given another classical error correcting &amp;lt;math&amp;gt;[n_2,k_2,d_2]\,\!&amp;lt;/math&amp;gt; code &amp;lt;math&amp;gt;\mathcal{C}_2\,\!&amp;lt;/math&amp;gt;, replace all zero entries with the identity &amp;lt;math&amp;gt;\mathbb{I} \,\!&amp;lt;/math&amp;gt; operator (matrix), and replace all one entries with the Pauli matrix &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt;.  This turns the rows into a set of stabilizer elements that will detect and correct &amp;lt;math&amp;gt;t_2=(d_2-1)/2\,\!&amp;lt;/math&amp;gt; phase-flip errors.  This would give a stabilizer code with one possible caveat: the operators in the stabilizer all need to commute with each other.  The way to ensure that the &amp;lt;math&amp;gt;X \,\!&amp;lt;/math&amp;gt; generators and  &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; generators commute is to combine the codes in a particular way.  &lt;br /&gt;
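The substitution rule is mechanical; here is a two-line sketch (the helper name is ours, written for illustration):&lt;br /&gt;

```python
# Sketch of the substitution trick: map each row of a classical parity
# check matrix to a stabilizer generator, 0 -> I and 1 -> Z (for bit-flip
# checks) or 1 -> X (for phase-flip checks).
def row_to_pauli(row, letter):
    return ''.join(letter if bit else 'I' for bit in row)

# First row of the [7,4,3] Hamming parity check matrix of Eq. (7.21)
print(row_to_pauli([1, 1, 1, 0, 1, 0, 0], 'Z'))  # -> ZZZIZII
print(row_to_pauli([1, 1, 1, 0, 1, 0, 0], 'X'))  # -> XXXIXII
```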
&lt;br /&gt;
The dual of a code (denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;) is also a code, and it is not too difficult to show that the parity check matrix for &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; is the generator matrix for &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  It turns out that if (and only if) &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;, then the two codes combine to produce an &amp;lt;math&amp;gt;[[n,k_1+k_2-n,d]]\,\!&amp;lt;/math&amp;gt; stabilizer code, where &amp;lt;math&amp;gt;d\geq \text{min}(d_1,d_2)\,\!&amp;lt;/math&amp;gt;.  That is, the generators for each of the two codes will commute with each other.  &lt;br /&gt;
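In the special case &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; used by the Steane code below, the condition &amp;lt;math&amp;gt;\mathcal{C}_2^\perp \subseteq \mathcal{C}_1\,\!&amp;lt;/math&amp;gt; amounts to every row of the parity check matrix being a code word itself, i.e. &amp;lt;math&amp;gt;P P^T = 0\,\!&amp;lt;/math&amp;gt; over GF(2).  This is a quick check for the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming parity check matrix of [[#eq7.21|Equation (7.21)]] (an illustrative sketch):&lt;br /&gt;

```python
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code, Eq. (7.21)
P = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# The dual code is generated by the rows of P; dual containment requires
# each row to be annihilated by P over GF(2):
print((P @ P.T) % 2)   # the all-zeros matrix, so the condition holds
```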
&lt;br /&gt;
Now the two codes, one to protect against bit-flips and one to protect against phase-flips, combine so that they can correct any error, including &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; errors, which are composed of both a bit-flip and a phase-flip.  Therefore the code can protect against both, and the minimum distance is the smaller of the distances of the two codes.  It could actually be higher if the code is degenerate.  &lt;br /&gt;
&lt;br /&gt;
===Steane's Seven Qubit Code===&lt;br /&gt;
&lt;br /&gt;
The seven qubit quantum error correcting code, originally described by Steane, is a member of the class of CSS quantum error correcting codes.  In fact it is the smallest such code, and has  &amp;lt;math&amp;gt;\mathcal{C}_2 = \mathcal{C}_1\,\!&amp;lt;/math&amp;gt;.  It is a &amp;lt;math&amp;gt;[[7,1,3]]\,\!&amp;lt;/math&amp;gt; quantum error correcting code, using 7 qubits to encode one logical (or data) qubit such that one arbitrary error can be detected and corrected.  This code has been studied extensively, since it can be made fault tolerant (explained below).  &lt;br /&gt;
&lt;br /&gt;
This code is actually based on the &amp;lt;math&amp;gt;[7,4,3]\,\!&amp;lt;/math&amp;gt; Hamming code discussed in [[Appendix F - Classical Error Correcting Codes|Appendix F]].  Let us first recall the parity check matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
P =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
\end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.21}}&lt;br /&gt;
and the generator matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
G =  \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.22}}&lt;br /&gt;
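As a quick numerical sanity check (a sketch added here, not part of the original text), one can verify the two properties these matrices must satisfy: every row of G is annihilated by P (the rows of G are codewords), and the rows of P are themselves codewords, which is the dual-containment condition that makes the X-type and Z-type stabilizers commute.

```python
import numpy as np

# Parity check matrix P and generator matrix G of the [7,4,3] Hamming code,
# copied from Eqs. (7.21) and (7.22).
P = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])

# Every codeword (row of G) must satisfy all parity checks (mod 2).
print((P @ G.T % 2 == 0).all())   # True

# Dual containment: the rows of P are themselves codewords, so the
# Z-type and X-type stabilizers built from P will commute.
print((P @ P.T % 2 == 0).all())   # True
```

The second check is exactly the condition that the dual code is contained in the code itself, the requirement for the CSS construction with C2 = C1.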
Relating this back to the stabilizer formalism, the generators can be written using the parity check matrix as described above.  They are given in [[#Table7.2|Table 7.2]].  The first three rows each give the elements of the tensor product, in order, for the stabilizer elements of a code that can protect against bit flips.  The next three give stabilizers for the phase-flip code.  From these one may get the code words.  The logical zero and one are given below.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table7.2&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 7.2'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot;|Table 7.2: ''The first three rows give the stabilizers for the bit-flip error correcting code.  The next three are for the phase-flip code. (See also [[#Table7.1|Table 7.1]] for further explanation.)''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; S_i\in \mathcal{S}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|q 1&lt;br /&gt;
|q 2&lt;br /&gt;
|q 3&lt;br /&gt;
|q 4&lt;br /&gt;
|q 5&lt;br /&gt;
|q 6&lt;br /&gt;
|q 7&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_4\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_5\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;  &lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;S_6\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Steane's 7-qubit code encodes the logical zero using all even weight classical code vectors, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|0_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{even }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.23}}&lt;br /&gt;
The odd weight classical code vectors are used to encode the logical one state,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|1_L\rangle = \frac{1}{\sqrt{8}} \sum_{\text{odd }v} |v\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|7.24}}&lt;br /&gt;
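These codewords can be constructed explicitly and checked against the stabilizers of Table 7.2. The numpy sketch below (an illustration added here, built from the matrices of Eqs. (7.21) and (7.22)) generates all sixteen Hamming codewords, builds the logical states from the even- and odd-weight ones, and verifies that every Z-type and X-type generator fixes both.

```python
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

P = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])

# All 16 Hamming codewords, obtained from the 4-bit messages.
codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)]

def ket(v):
    """Computational-basis state |v> of 7 qubits as a length-128 vector."""
    e = np.zeros(128)
    e[int(''.join(map(str, v)), 2)] = 1.0
    return e

# Logical states: equal superpositions of even- and odd-weight codewords.
zero_L = sum(ket(v) for v in codewords if v.sum() % 2 == 0) / np.sqrt(8)
one_L  = sum(ket(v) for v in codewords if v.sum() % 2 == 1) / np.sqrt(8)

def pauli_string(op, row):
    """Tensor product placing `op` wherever the parity-check row has a 1."""
    out = np.array([[1.0]])
    for bit in row:
        out = np.kron(out, op if bit else I2)
    return out

# Every stabilizer generator (Z-type and X-type) fixes both codewords.
for row in P:
    for op in (Z, X):
        S = pauli_string(op, row)
        assert np.allclose(S @ zero_L, zero_L)
        assert np.allclose(S @ one_L, one_L)
```

The two logical states are normalized and orthogonal, as one can confirm by taking their inner products.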
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Continue to '''Chapter 8 - Decoherence-Free/Noiseless Subsystems''']]&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Skip to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_9_-_Dynamical_Decoupling_Controls&amp;diff=1751</id>
		<title>Chapter 9 - Dynamical Decoupling Controls</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_9_-_Dynamical_Decoupling_Controls&amp;diff=1751"/>
		<updated>2011-11-21T16:54:17Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
In the last chapter, it was shown that a symmetry in the system-bath Hamiltonian, if present, could be used to construct states immune to noise.  In this chapter we will see that under certain conditions it is possible to reduce errors, create a symmetry, or even remove errors in the evolution of a quantum system.  This is done through repeated use of external controls which act on the system.  These controls are often called &amp;quot;dynamical decoupling controls&amp;quot; due to their original objective of decoupling the system from the bath.  They are quite generally useful controls to consider for the elimination and/or reduction of errors.  In this chapter, a simple introduction to dynamical decoupling controls is given and some important concepts discussed.&lt;br /&gt;
&lt;br /&gt;
==General Conditions==&lt;br /&gt;
&lt;br /&gt;
As stated in [[Chapter 8 - Decoherence-Free/Noiseless Subsystems|Chapter 8]] the Hamiltonian describing the evolution of a system and bath which are coupled together can always be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
H = H_S\otimes I_B + I_S\otimes H_B + H_I, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.1}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H_S \,\!&amp;lt;/math&amp;gt; acts only on the system, &amp;lt;math&amp;gt;H_B\,\!&amp;lt;/math&amp;gt; acts only on the bath, and &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; H_I = \sum_\alpha S_\alpha\otimes B_\alpha, \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
is the interaction Hamiltonian with the &amp;lt;math&amp;gt;S_\alpha\,\!&amp;lt;/math&amp;gt; acting only on the system and the &amp;lt;math&amp;gt;B_\alpha\,\!&amp;lt;/math&amp;gt; acting only on the bath. &lt;br /&gt;
&lt;br /&gt;
The idea is to modify the evolution of the system and bath using external control Hamiltonians such that the errors are reduced or eliminated.  These controls are called dynamical decoupling controls since they are used to decouple (at least approximately) the system from the bath.  Since it can be difficult to change the states of a bath, and indeed one often does not know the details of the bath, the controls which are to be used for reducing errors should act on the system.  As discussed previously, the errors arise from the system-bath interaction Hamiltonian &amp;lt;math&amp;gt; H_I\,\!&amp;lt;/math&amp;gt; and, in particular, the system operators &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt; are the operators which describe the effect of the coupling on the system.  In general the interaction Hamiltonian is time-dependent since the bath operators will change in time.  However, for short times we may assume the interaction Hamiltonian is unchanged, or at least approximately constant.  This is sometimes called the short-time assumption in dynamical decoupling.&lt;br /&gt;
&lt;br /&gt;
==The Magnus Expansion==&lt;br /&gt;
&lt;br /&gt;
A fairly good starting point to see how this is done is the so-called Magnus expansion. (See [[Bibliography#Blanes/etal:08|Blanes, et al.]] and references therein.)  The general problem is that a time-dependent operation applied to the Hamiltonian makes the Hamiltonian itself time-dependent, and one would like to solve the time-dependent Schrodinger equation:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
i\frac{\partial}{\partial t}\left\vert \Psi(t)\right\rangle = H(t) \left\vert \Psi(t) \right\rangle,\,\!&lt;br /&gt;
&amp;lt;/math&amp;gt;|9.2}}&lt;br /&gt;
which is sometimes written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U^\prime(t) = -iH(t)U(t).\,\!&lt;br /&gt;
&amp;lt;/math&amp;gt;|9.3}}&lt;br /&gt;
The question is, what &amp;lt;math&amp;gt; U(t)\,\!&amp;lt;/math&amp;gt; will solve this equation?  If &amp;lt;math&amp;gt;U(t) \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; H(t)\,\!&amp;lt;/math&amp;gt; are just numbers, the solution would be  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U(t) = \exp\left(-i\int_0^t H(t^\prime)dt^\prime\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.4}}&lt;br /&gt;
However, when the Schrodinger equation is the equation to be solved, &amp;lt;math&amp;gt;U(t) \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; H(t)\,\!&amp;lt;/math&amp;gt; are matrices.  To be specific, &amp;lt;math&amp;gt;U(t) \,\!&amp;lt;/math&amp;gt; is a unitary matrix and &amp;lt;math&amp;gt; H(t)\,\!&amp;lt;/math&amp;gt; is a Hermitian matrix.  The solution is often written in the form &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U(t) = {\mathcal{T}}\left[\exp\left(-i\int_0^t H(t^\prime)dt^\prime\right)\right],&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.5}}&lt;br /&gt;
where &amp;lt;math&amp;gt; \mathcal{T}\,\!&amp;lt;/math&amp;gt; denotes the time-ordered exponential.  In this case the matrices do not commute, so the exponential must be handled with care.  Operators must be ordered according to the time at which they appear in the evolution, and the simple exponential of [[#eq9.4|Eq.(9.4)]] is not the solution to the problem unless &amp;lt;math&amp;gt; H(t)\,\!&amp;lt;/math&amp;gt; is a constant matrix.  &lt;br /&gt;
&lt;br /&gt;
The solution to this problem is the following,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U(T) = \exp\left(-i\Omega(T)T\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.6}}&lt;br /&gt;
where&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\Omega(T) = \sum_{k=1}^\infty\Omega_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.7}}&lt;br /&gt;
and &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Omega_1 &amp;amp;= \frac{1}{T}\int_0^T H(t_1)dt_1, \\&lt;br /&gt;
\Omega_2 &amp;amp;= -\frac{1}{T^2}\frac{i}{2}\int_0^T dt_1\int_0^{t_1}dt_2 [H(t_1),H(t_2)], \\&lt;br /&gt;
\Omega_3 &amp;amp;= -\frac{1}{T^3}\frac{1}{6} \int_0^T dt_1\int_0^{t_1} dt_2 \int_0^{t_2} dt_3 ([H(t_1),[H(t_2),H(t_3)]] + [H(t_3),[H(t_2),H(t_1)]]) \\&lt;br /&gt;
 &amp;amp; \mbox{etc.},&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|9.8}}&lt;br /&gt;
where &amp;lt;math&amp;gt;T \,\!&amp;lt;/math&amp;gt; is some characteristic time scale.&lt;br /&gt;
&lt;br /&gt;
This expansion can be used to find approximations to the time-dependent evolution of a quantum system to any desired order and is thus worth noting.  However, due to the introductory nature of this material, we will primarily discuss first-order theory and the reader should assume the calculations are for the first-order theory unless otherwise noted.&lt;br /&gt;
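As a concrete illustration (an added sketch, using an arbitrarily chosen Hamiltonian H(t) = σ_z + t σ_x), the first-order term Ω_1 alone already reproduces the exact time-ordered evolution for short times, up to corrections of order T² coming from the commutator term Ω_2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expmih(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H = lambda t: sz + t * sx          # a simple time-dependent Hamiltonian
T, N = 0.1, 2000
dt = T / N

# "Exact" time-ordered evolution: product of short-time propagators,
# with later times applied to the left.
U_exact = np.eye(2, dtype=complex)
for k in range(N):
    U_exact = expmih(H((k + 0.5) * dt), dt) @ U_exact

# First-order Magnus approximation: exp(-i Omega_1 T), where
# Omega_1 = (1/T) * integral of H(t) dt = sz + (T/2) sx.
U_magnus = expmih(sz + (T / 2) * sx, T)

print(np.abs(U_exact - U_magnus).max())  # small, of order T^2
```

Repeating the comparison with a larger T shows the discrepancy growing, which is exactly the role of the higher-order commutator terms in the expansion.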
&lt;br /&gt;
==A First-Order Theory==&lt;br /&gt;
&lt;br /&gt;
To show how this theory of dynamical decoupling controls could work in an ideal case, let us consider a simple example.  Suppose that the external controls (decoupling controls) are so strong that the Hamiltonian evolution can be neglected during the time the external controls are turned on.  Due to their strength, we will also assume that they can be implemented in a very short time and that there are &amp;lt;math&amp;gt;N \,\!&amp;lt;/math&amp;gt; different controls to be used.  We will first use a given control &amp;lt;math&amp;gt; U_n \,\!&amp;lt;/math&amp;gt; and then its inverse.  Between the controls the system evolves for a short time &amp;lt;math&amp;gt;\Delta t \,\!&amp;lt;/math&amp;gt;.  After all control pulses have been implemented, the effective evolution of the system will be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U_{eff}(T) = \prod_{n=0}^{N-1} U_n^{-1} U(\Delta t) U_n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.9}}&lt;br /&gt;
where&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
U(\Delta t) = \exp(-i H \Delta t),&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|9.10}}&lt;br /&gt;
&amp;lt;math&amp;gt; T=N\Delta t,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; H\,\!&amp;lt;/math&amp;gt; is free evolution given by [[#eq9.1|Eq.(9.1)]] above.  &lt;br /&gt;
Furthermore, suppose that  the time &amp;lt;math&amp;gt; \Delta t\,\!&amp;lt;/math&amp;gt; is small so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
U(\Delta t) \approx I-i H \Delta t.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.11}}&lt;br /&gt;
Now suppose that we let &amp;lt;math&amp;gt; U_{eff}=\exp(-iH_{eff}T) \,\!&amp;lt;/math&amp;gt;.  Inserting [[#eq9.11|Eq.(9.11)]] into [[#eq9.9|Eq.(9.9)]] and keeping only first order terms in the product gives&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
H_{eff} = \frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.12}}&lt;br /&gt;
This is a simple expression for the effective Hamiltonian evolution of a system undergoing a series of dynamical decoupling controls.  Note that &lt;br /&gt;
the assumptions are that the operations are strong (since the free evolution is neglected during the control pulses) and fast (since we assume that the Hamiltonian &amp;lt;math&amp;gt; H\,\!&amp;lt;/math&amp;gt; of [[#eq9.1|Eq.(9.1)]] is constant during the entire time &amp;lt;math&amp;gt; T\,\!&amp;lt;/math&amp;gt; of this cycle of control pulses).  Due to these strong and fast assumptions, these are often referred to as &amp;quot;bang-bang&amp;quot; controls.  &lt;br /&gt;
&lt;br /&gt;
It is important to note that these controls are rather unrealistic.  That is, these criteria are never met completely.  However, they are met approximately in some systems, most notably in nuclear magnetic resonance experiments where the so-called average Hamiltonian theory originated.  More realistic pulses can be, and have been, explored for use in actual physical systems where they have been shown to reduce noise very effectively.  This has been done by generalizing the theory beyond the first-order limit and without the assumption that the pulses are extremely strong.  &lt;br /&gt;
&lt;br /&gt;
==The Single Qubit Case==&lt;br /&gt;
&lt;br /&gt;
The simplest case involves the elimination of an error on a single qubit.  There are several types of errors that can degrade a qubit state, as discussed in [[Chapter 6 - Noise in Quantum Systems#Examples|Section 6.4]]: a bit-flip, a phase-flip, or both.  In this section the first-order approximation is used to show how to eliminate first phase errors and then arbitrary errors on an arbitrary qubit state.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Phase Errors===&lt;br /&gt;
&lt;br /&gt;
Let us suppose that the Hamiltonian for the free evolution contains only an interaction part which induces a phase error, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
H_i = \sigma_z\otimes B,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.13}}&lt;br /&gt;
where &amp;lt;math&amp;gt; B\,\!&amp;lt;/math&amp;gt; is the bath operator.  This interaction Hamiltonian will couple the system to the bath and thus cause errors.  The factor &amp;lt;math&amp;gt; \sigma_z\,\!&amp;lt;/math&amp;gt; indicates that it is a phase error.  Using the first-order theory, the objective is to find a series of pulses which will effectively decouple the system from the bath.  In this case it can be done with only one decoupling pulse, &amp;lt;math&amp;gt; U_X(\pi)\,\!&amp;lt;/math&amp;gt;.  This will be denoted  &amp;lt;math&amp;gt; U_1 = U_X(\pi)\,\!&amp;lt;/math&amp;gt; and the identity (doing nothing) will be denoted &amp;lt;math&amp;gt; U_0=I\,\!&amp;lt;/math&amp;gt;.  The effective Hamiltonian is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
H_{eff} = \frac{1}{2}\sum_{n=0}^{1} U_n^{-1} H_i U_n = \frac{1}{2}(U_0^\dagger H_i U_0 + U_1^\dagger H_i U_1).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.14}}&lt;br /&gt;
A rotation about the x-axis by an angle &amp;lt;math&amp;gt; \pi\,\!&amp;lt;/math&amp;gt; will rotate the Pauli matrix &amp;lt;math&amp;gt; \sigma_z\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; -\sigma_z\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Transformations|Section C.5]], in particular [[Appendix C - Vectors and Linear Algebra#Transformations of a Qubit|Section C.5.1]].)  This is because &amp;lt;math&amp;gt; U_1 = U_X(\pi) = \sigma_x\,\!&amp;lt;/math&amp;gt;.  After the pulse sequence &amp;lt;math&amp;gt; U_0, U_1\,\!&amp;lt;/math&amp;gt;, the system is decoupled from the bath because the effective Hamiltonian is zero!  There is no more interaction between the system and bath!  That is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
H_{eff} = \frac{1}{2}(\sigma_z\otimes B + \sigma_x\sigma_z\sigma_x\otimes B) = \frac{1}{2}(\sigma_z\otimes B -\sigma_z\otimes B) =0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.15}}&lt;br /&gt;
Thus the noise has been removed from the system.&lt;br /&gt;
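The cancellation in Eq.(9.15) is easy to verify numerically. The sketch below (added for illustration) checks only the system factor, since the bath operator B is common to both terms and factors out.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

# First-order average over the pulse set {I, sigma_x}: the sigma_x pulse
# flips the sign of sigma_z, so the two terms of Eq. (9.14) cancel.
H_eff_system = 0.5 * (sz + sx @ sz @ sx)
print(H_eff_system)  # the zero matrix
```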
&lt;br /&gt;
This may be considered an averaging method.  (As mentioned before, it is sometimes called average Hamiltonian theory in the NMR literature.)  In this case, it is also sometimes called a parity kick since the sign of the interaction Hamiltonian is reversed giving just two terms which cancel.&lt;br /&gt;
&lt;br /&gt;
===Arbitrary Single Qubit Errors===&lt;br /&gt;
&lt;br /&gt;
Now let us consider arbitrary single qubit errors in an interaction Hamiltonian.  The interaction will have the form&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
H_i = \sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.16}}&lt;br /&gt;
The objective here is to eliminate all terms in the interaction Hamiltonian.  It turns out that this may be accomplished in several different ways.  Let us first consider the obvious choice of bang-bang pulses, &amp;lt;math&amp;gt;\{ I, \sigma_x, \sigma_y, \sigma_z\}\,\!&amp;lt;/math&amp;gt;.  First, recall that the Pauli matrices have the property that &amp;lt;math&amp;gt;\sigma_i\sigma_j\sigma_i = -\sigma_j\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i\neq j\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_i\sigma_j\sigma_i = \sigma_i\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i= j\,\!&amp;lt;/math&amp;gt;. Then the effective Hamiltonian is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
H_{eff} &amp;amp;= \;\;\frac{1}{4}(\sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z) \\&lt;br /&gt;
        &amp;amp; \;\;\; + \frac{1}{4}\sigma_x(\sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z)\sigma_x \\&lt;br /&gt;
        &amp;amp; \;\;\; + \frac{1}{4}\sigma_y(\sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z)\sigma_y \\&lt;br /&gt;
        &amp;amp; \;\;\; + \frac{1}{4}\sigma_z(\sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z)\sigma_z \\&lt;br /&gt;
        &amp;amp;= \;\;\frac{1}{4}(\;\sigma_x\otimes B_x + \sigma_y\otimes B_y + \sigma_z\otimes B_z) \\&lt;br /&gt;
        &amp;amp; \;\;\; + \frac{1}{4}(\;\sigma_x\otimes B_x - \sigma_y\otimes B_y - \sigma_z\otimes B_z) \\&lt;br /&gt;
        &amp;amp; \;\;\; + \frac{1}{4}(-\sigma_x\otimes B_x + \sigma_y\otimes B_y - \sigma_z\otimes B_z) \\&lt;br /&gt;
        &amp;amp; \;\;\; + \frac{1}{4}(-\sigma_x\otimes B_x - \sigma_y\otimes B_y + \sigma_z\otimes B_z) \\&lt;br /&gt;
        &amp;amp; = \;\;0.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|9.17}}&lt;br /&gt;
So again we see that the interaction Hamiltonian has been eliminated, so the errors will be removed.  &lt;br /&gt;
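This averaging can also be checked numerically. In the sketch below (added for illustration), the bath operators B_x, B_y, B_z are replaced by arbitrary scalar coupling strengths, which is enough to exhibit the cancellation of the system factors in Eq.(9.17).

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# System part of an arbitrary single-qubit interaction, Eq. (9.16), with
# random real coupling strengths standing in for the bath operators.
rng = np.random.default_rng(0)
bx, by, bz = rng.normal(size=3)
H = bx * sx + by * sy + bz * sz

# Average over the pulse group {I, X, Y, Z}, as in Eq. (9.17).
H_eff = sum(P.conj().T @ H @ P for P in (I2, sx, sy, sz)) / 4
print(np.allclose(H_eff, 0))  # True
```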
&lt;br /&gt;
Notice that if [[#eq9.9|Eq.(9.9)]] is used, then the number of pulses required is actually two.  To see this, consider the sequence of pulses above which include &amp;lt;math&amp;gt;\{ I, \sigma_x, \sigma_y, \sigma_z\}\,\!&amp;lt;/math&amp;gt;.  [[#eq9.9|Eq.(9.9)]] indicates that the sequence is &amp;lt;math&amp;gt; \sigma_x U\sigma_x\sigma_y U \sigma_y \sigma_z U\sigma_z U\,\!&amp;lt;/math&amp;gt;.  However, &lt;br /&gt;
&amp;lt;math&amp;gt;\sigma_x\sigma_y = i\sigma_z\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_y\sigma_z = i\sigma_x\,\!&amp;lt;/math&amp;gt;.  So this sequence is equivalent to &amp;lt;math&amp;gt; \sigma_x U\sigma_z U \sigma_x U\sigma_z U\,\!&amp;lt;/math&amp;gt; which involves only two different pulses &amp;lt;math&amp;gt; \sigma_x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \sigma_z \,\!&amp;lt;/math&amp;gt; whereas the sum would seem to indicate three are required.  &lt;br /&gt;
&lt;br /&gt;
==Extensions==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, the theory of dynamical decoupling controls has been extended beyond the first-order limit and without the assumption that the pulses are extremely strong.  However, even within the first-order theory there are several ways one can extend these results in order to be able to find a complete set of pulses to eliminate a particular error.  Here we describe two of these that are also useful beyond first order.  However, beginning with the first order theory aids in our understanding of extensions.  &lt;br /&gt;
&lt;br /&gt;
===Groups of Transformations===&lt;br /&gt;
&lt;br /&gt;
One method for finding a set of pulses to achieve a particular decoupling goal is to choose the set of pulses to belong to a discrete subgroup of the unitary group.  (See [[Appendix D - Group Theory|Appendix D]] for definitions and examples of groups.)  Let us suppose that the set of &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; pulses &amp;lt;math&amp;gt;\{U_i\}\,\!&amp;lt;/math&amp;gt; forms a group.  Then our effective Hamiltonian after implementing a complete set of pulses is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
H_{eff} = \frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.18}}&lt;br /&gt;
where, by convention, &amp;lt;math&amp;gt;U_0 = \mathbb{I}\,\!&amp;lt;/math&amp;gt; is the identity and the sum is over every element of the group.  This effective Hamiltonian commutes with any element of the group.  To see this, let some particular element of the group be denoted &amp;lt;math&amp;gt;U_k\,\!&amp;lt;/math&amp;gt;, and let &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
[\frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n, U_k] = C,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.19}}&lt;br /&gt;
where &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is some operator (as yet unknown).  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
\frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n U_k - U_k \frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n = C.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.20}}&lt;br /&gt;
We may rewrite this as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
 U_k^{-1}\frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n U_k -  \frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n = U_k^{-1}C,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.21}}&lt;br /&gt;
or equivalently,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
\frac{1}{N}\sum_{n=0}^{N-1} (U_nU_k)^{-1} H U_n U_k -  \frac{1}{N}\sum_{n=0}^{N-1} U_n^{-1} H U_n = U_k^{-1}C.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.22}}&lt;br /&gt;
Now since &amp;lt;math&amp;gt;U_n\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;U_k\,\!&amp;lt;/math&amp;gt; are both group elements, &amp;lt;math&amp;gt;U_nU_k\,\!&amp;lt;/math&amp;gt; is also a group element.  This is the closure property ([[Appendix D - Group Theory#Definition 1:Group|Section D.2.1]]).  So, since the sum is over all group elements, the two sums on the left-hand side are equal to each other.  Therefore the right-hand side must be zero, and therefore the commutator &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; must be zero.  &lt;br /&gt;
&lt;br /&gt;
This is called the group symmetrization of the Hamiltonian and is very useful: if we choose our pulses from a set of group elements, the Hamiltonian resulting from the symmetrization procedure will be invariant under, and thus immune to, noise operators which are elements of the group.  This provides a criterion for choosing a set of pulses.  &lt;br /&gt;
&lt;br /&gt;
An example of this was already given, since the group of matrices formed from &amp;lt;math&amp;gt;\{\mathbb{I}, X, Y, Z\}\,\!&amp;lt;/math&amp;gt; gives us a set of noises to which the effective Hamiltonian of [[#Arbitrary Single Qubit Errors|Section 9.5.2]] will be immune.&lt;br /&gt;
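The commutation property derived above can be checked for the smallest nontrivial case. The sketch below (added for illustration) symmetrizes a random Hermitian matrix over the two-element group {I, σ_z} and confirms that the result commutes with every group element.

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A generic single-qubit Hamiltonian (random Hermitian matrix).
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2

# Symmetrize over the two-element group {I, sigma_z}, as in Eq. (9.18).
H_sym = (H + sz @ H @ sz) / 2

# The symmetrized Hamiltonian commutes with every group element.
print(np.allclose(H_sym @ sz - sz @ H_sym, 0))  # True
```

Here the symmetrization simply projects out the off-diagonal part of H, which is why the result commutes with σ_z.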
&lt;br /&gt;
===Geometric Conditions===&lt;br /&gt;
&lt;br /&gt;
The geometric conditions are quite appealing for two reasons.  The first is that they provide a picture for the effect of dynamical decoupling operations which gives a sufficient criterion for the removal of noise.  The second is that they are more general than the group-theoretical criterion from the last subsection.  These will both be valuable in the next chapter where combinations of error prevention methods are given.  &lt;br /&gt;
&lt;br /&gt;
The set of geometric conditions becomes clear after two observations.  The first is that the Hamiltonian can be described by a complete set of Hermitian matrices &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
H = a_0\mathbb{I} + \sum_ia_i\lambda_i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.23}}&lt;br /&gt;
The second is that a unitary transformation acting on the Hamiltonian (as in [[#eq9.14|Eq.(9.14)]]) can be viewed as a rotation&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
U\lambda_i U^\dagger = \sum_j R_{ij}\lambda_j.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.24}}&lt;br /&gt;
Both of these will be discussed briefly before the geometric conditions are given.  Any Hermitian matrix can be expanded in terms of a complete set of Hermitian matrices.  (See [[Appendix C - Vectors and Linear Algebra#Hermitian Matrices|Section C.3.9]].)  The (adjoint) action of the unitary transformation on a matrix then acts as a rotation.  The way to see this is to compare to the case of the two-state system which is described in [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|Section 3.5.4]] and [[Chapter 3 - Physics of Quantum Information#Rotations of Bloch Vectors|Section 3.5.5]].  So just like the Bloch vector, one can take the set of coefficients &amp;lt;math&amp;gt; a_i\,\!&amp;lt;/math&amp;gt; to be the components of a vector.  Then the unitary transformation acts as a rotation of this vector, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
U\vec{a}\cdot \vec{\lambda} U^\dagger = U\sum_{i} a_i\lambda_i U^\dagger = \sum_{i,j} a_i (R_{ij}\lambda_j) = \sum_j \Big(\sum_i R_{ij}a_i\Big) \lambda_j.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.25}}&lt;br /&gt;
In the third expression the rotation can be viewed as passive, since it acts on the basis matrices, and in the fourth it can be viewed as active, since it acts on the vector of coefficients.  These are two different views of the same rotation, depending on the frame of reference.  Note that the matrices &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt; can also be decomposed using a complete set of Hermitian matrices.  (In fact, any matrix can be decomposed using a ''complex'' combination of a complete set of Hermitian matrices.)  &lt;br /&gt;
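As a concrete check of Eq.(9.24), the rotation matrix induced by a unitary can be computed numerically.  This is a minimal numpy sketch, not from the text: it uses the single-qubit Pauli matrices as the complete Hermitian set and a z-axis rotation as an example unitary (both choices are illustrative assumptions), and verifies that the resulting &amp;lt;math&amp;gt;R\,\!&amp;lt;/math&amp;gt; is a proper rotation.

```python
import numpy as np

# Pauli matrices: a complete set of traceless Hermitian matrices for one qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

# Example unitary (an assumption for illustration): rotation about z by theta.
theta = 0.7
U = np.array([[np.exp(-1j * theta / 2), 0],
              [0, np.exp(1j * theta / 2)]])

# R_ij = (1/2) Tr(lambda_j U lambda_i U^dagger),
# using the orthogonality Tr(lambda_i lambda_j) = 2 delta_ij.
R = np.array([[0.5 * np.trace(paulis[j] @ U @ paulis[i] @ U.conj().T).real
               for j in range(3)] for i in range(3)])

# The adjoint action of SU(2) gives a proper rotation in SO(3):
print(np.allclose(R @ R.T, np.eye(3)))   # True
print(np.isclose(np.linalg.det(R), 1.0)) # True
```

The same construction works for any complete Hermitian set, with the factor 1/2 replaced by the appropriate normalization.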
&lt;br /&gt;
Now, returning to [[#eq9.12|Eq.(9.12)]], the geometric picture becomes clear.  Each transformation &amp;lt;math&amp;gt; U_i\,\!&amp;lt;/math&amp;gt; acts as a rotation on the vector &amp;lt;math&amp;gt; \vec{a}\,\!&amp;lt;/math&amp;gt;, and the errors are completely removed when the results of these rotations (each being another vector) add up to zero.  So if we denote by &amp;lt;math&amp;gt; \vec{a}_i\,\!&amp;lt;/math&amp;gt; the vector obtained by rotating &amp;lt;math&amp;gt; \vec{a}\,\!&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; U_i\,\!&amp;lt;/math&amp;gt;, then the condition for the error Hamiltonian to vanish is that these vectors sum to zero,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;  &lt;br /&gt;
\sum_i \vec{a}_i = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|9.26}}&lt;br /&gt;
This is the geometric condition for the elimination of errors given a set of dynamical decoupling pulses.  This will be quite useful in the next chapter.&lt;br /&gt;
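The vanishing-sum condition can be demonstrated directly.  The following is an illustrative numpy sketch (the Hamiltonian coefficients are hypothetical, and the choice of the full Pauli group as the pulse set is one standard example, not the only one): each pulse rotates the coefficient vector of the error Hamiltonian, and the rotated vectors sum to zero.

```python
import numpy as np

# Pauli basis for a single qubit.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

def coeff_vector(H):
    """Components a_i = (1/2) Tr(lambda_i H) of H in the Pauli basis."""
    return np.array([0.5 * np.trace(p @ H).real for p in paulis])

# A hypothetical single-qubit error Hamiltonian (coefficients chosen arbitrarily).
H = 0.3 * X + 1.1 * Y - 0.6 * Z

# Decoupling pulses: the Pauli group {I, X, Y, Z}.
vectors = [coeff_vector(U @ H @ U.conj().T) for U in [I2, X, Y, Z]]

# Geometric condition: the rotated coefficient vectors sum to zero,
# so the averaged Hamiltonian is proportional to the identity.
print(np.allclose(sum(vectors), 0))  # True
```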
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 10 - Fault-Tolerant Quantum Computing#Introduction|Continue to '''Chapter 10 - Fault-Tolerant Quantum Computing''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1750</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1750"/>
		<updated>2011-11-21T16:45:58Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average - [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT)&lt;br /&gt;
:Code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]],[[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:dual code [[Appendix F - Classical Error Correcting Codes#Definition_11:_Dual_Code|'''F.8.1''']]&lt;br /&gt;
:dual matrix [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#Binary_Operations|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 3|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
:syndrome measurement [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1749</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1749"/>
		<updated>2011-11-21T16:43:11Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Example 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article by Steane in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]], [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  The way this is achieved is by deciding that we will only use these two numbers in our language and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication and these two operations follow the distributive law, the set becomes a '''field''' (a Galois Field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to consider the string of bits to be a vector and to use vector addition (which is component-wise addition) and vector multiplication (which is the inner product).  For example, the addition of the vector &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
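The mod-2 vector operations above can be reproduced in a few lines.  This is a small numpy sketch using the two vectors from the text's own example:

```python
import numpy as np

# Bit strings as vectors over GF(2): addition and inner product are mod 2.
v = np.array([0, 0, 1])
w = np.array([0, 1, 1])

vector_sum = (v + w) % 2       # component-wise addition mod 2
inner = int(np.dot(v, w)) % 2  # inner product mod 2 (a parity check)

print(vector_sum)  # [0 1 0]
print(inner)       # 1
```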
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''', since it indicates whether one vector has an even or odd number of 1's at the positions specified by the 1's of the other vector.  When the inner product is zero, we may say that the first vector satisfies the parity check of the other vector, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
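Definitions 2 and 3 translate directly into code.  A minimal Python sketch (function names are our own), reusing the vectors from the earlier example and checking the identity &amp;lt;math&amp;gt;d_H(v,w) = \mbox{wt}(v+w)\,\!&amp;lt;/math&amp;gt;:

```python
def wt(v):
    """Hamming weight: the number of non-zero components of v."""
    return sum(1 for bit in v if bit != 0)

def d_H(v, w):
    """Hamming distance: the number of places where v and w differ,
    which equals wt(v + w) with addition mod 2."""
    return wt([(a + b) % 2 for a, b in zip(v, w)])

v = [0, 0, 1]
w = [0, 1, 1]
print(wt(v), wt(w))  # 1 2
print(d_H(v, w))     # 1
```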
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single-bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we do not know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;; we only know that an error has occurred, provided at most one error has occurred.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
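The minimum distances quoted in Examples 1 and 2 can be checked directly from Definition 5.  A short Python sketch (function names are our own):

```python
from itertools import combinations

def hamming_distance(v, w):
    """Number of places where two equal-length strings differ."""
    return sum(a != b for a, b in zip(v, w))

def minimum_distance(code):
    """Smallest Hamming distance between distinct codewords."""
    return min(hamming_distance(v, w) for v, w in combinations(code, 2))

# Example 1: two-bit error-detecting code; Example 2: three-bit repetition code.
print(minimum_distance(["00", "11"]))    # 2
print(minimum_distance(["000", "111"]))  # 3
```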
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the total number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because they allow errors to be identified efficiently, along with the corresponding correct codewords.  This ability is due to the added structure these codes have, which is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the column vectors form a basis that spans the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if two columns are swapped, or one column is added to another to produce a new column that replaces it, the resulting matrix still generates the same code, since its columns remain linearly independent.&lt;br /&gt;
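To make the relation &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt; concrete, here is a minimal Python sketch (Python and the function name are illustrative, not from the text), using the generator matrix of the [3,1] repetition code of Example 2:

```python
# Generator matrix of the [3,1] repetition code: a single basis
# codeword (1, 1, 1) stored as the one column of a 3x1 matrix.
G = [[1], [1], [1]]

def encode(G, v):
    # Matrix-vector product w = Gv, with all arithmetic modulo 2.
    return tuple(sum(g * x for g, x in zip(row, v)) % 2 for row in G)

assert encode(G, (0,)) == (0, 0, 0)   # logical 0 -> 000
assert encode(G, (1,)) == (1, 1, 1)   # logical 1 -> 111
```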
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates every code word.  To see this, recall any code word can be written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Moreover, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt;, its null space is exactly the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional code space, so &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  Notice that the result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, for the syndrome to identify the error uniquely, two distinct correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must produce different syndromes, &amp;lt;math&amp;gt;Pe_1 \neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
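A sketch of the syndrome calculation &amp;lt;math&amp;gt;P(w+e) = Pe\,\!&amp;lt;/math&amp;gt; in Python, for the [3,1] repetition code with a parity check matrix that compares bits (1,2) and (2,3) (an illustrative choice, not taken from the text):

```python
# Parity check matrix for the [3,1] repetition code: row 1 checks that
# bits 1 and 2 agree, row 2 checks that bits 2 and 3 agree.
P = [[1, 1, 0], [0, 1, 1]]

def syndrome(P, word):
    # Compute P * word over GF(2).
    return tuple(sum(p * b for p, b in zip(row, word)) % 2 for row in P)

w = (1, 1, 1)                  # a codeword, so its syndrome is zero
assert syndrome(P, w) == (0, 0)

e = (0, 1, 0)                  # flip the middle bit
w_err = tuple((a + b) % 2 for a, b in zip(w, e))
# The syndrome of the corrupted word equals the syndrome of the error
# alone: it depends only on e, not on which codeword was sent.
assert syndrome(P, w_err) == syndrome(P, e) == (1, 1)
```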
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is called the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  It also does it in such a way that one error can be detected and corrected since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this, the parity check matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; linearly independent vectors that are orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form of an augmented matrix &amp;lt;math&amp;gt;(I_k|A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T|I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix.  Only the codewords are annihilated by the parity check matrix.&lt;br /&gt;
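The matrices in Eqs. (F.6) and (F.7) can be checked numerically.  The following Python sketch (illustrative, not part of the text) verifies that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; annihilates the rows of &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; and that the seven single-bit errors have distinct nonzero syndromes, so each can be identified and corrected:

```python
# The [7,4,3] Hamming code matrices of Eqs. (F.6) and (F.7).
GT = [[1,0,0,0,1,1,0],
      [0,1,0,0,1,1,1],
      [0,0,1,0,1,0,1],
      [0,0,0,1,0,1,1]]
P  = [[1,1,1,0,1,0,0],
      [1,1,0,1,0,1,0],
      [0,1,1,1,0,0,1]]

def mat_vec(M, v):
    # Matrix-vector product over GF(2).
    return tuple(sum(m * x for m, x in zip(row, v)) % 2 for row in M)

# Each row of G^T is a basis codeword, and P annihilates every one.
for row in GT:
    assert mat_vec(P, row) == (0, 0, 0)

# The 7 single-bit errors e_i give 7 distinct nonzero syndromes
# (the columns of P), so every single error is identifiable.
syndromes = {mat_vec(P, tuple(int(i == j) for j in range(7))) for i in range(7)}
assert len(syndromes) == 7 and (0, 0, 0) not in syndromes
```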
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each corrupted vector (a code word plus an error) should be associated with the unique code word that produced it.  This partitions the set into disjoint subsets, each containing only one code vector.  A message is decoded correctly if the received vector (the one containing the error) is in the subset associated with the original vector (the one with no error).  For example, if one vector is sent, say &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;, and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then this vector must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
One way to decode is to build an array of the possible code words, the possible errors, and their combinations.  The top row of the array lists the code words and the leftmost column lists the errors, with the entry in the first row and first column being the zero vector.  The entry in the jth column and kth row is then the sum of the code word at the top of the jth column and the error in the kth row.  Each column of this array is then a subset disjoint from the other columns.  Locating a corrupted code word in a column associates it with the code word at the top of that column, and thus corrects the error.&lt;br /&gt;
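The array-based decoding described above can be sketched compactly with a syndrome look-up table, a standard compressed form of the array that stores one entry per correctable error rather than the full table.  The Python below (illustrative, not from the text) uses the [7,4,3] Hamming parity check matrix of Eq. (F.7):

```python
# Parity check matrix of the [7,4,3] Hamming code, Eq. (F.7).
P = [[1,1,1,0,1,0,0],
     [1,1,0,1,0,1,0],
     [0,1,1,1,0,0,1]]

def syn(word):
    # Syndrome P * word over GF(2).
    return tuple(sum(p * b for p, b in zip(row, word)) % 2 for row in P)

# Map each correctable error (the zero error and all weight-1 errors)
# to its syndrome -- one table entry per row of the decoding array.
table = {(0, 0, 0): (0,) * 7}
for i in range(7):
    e = tuple(int(i == j) for j in range(7))
    table[syn(e)] = e

def correct(word):
    # Look up the error by syndrome and subtract it (addition mod 2).
    e = table[syn(word)]
    return tuple((b + x) % 2 for b, x in zip(word, e))

w = (1, 0, 0, 0, 1, 1, 0)       # a codeword (first row of G^T)
w_err = (1, 0, 0, 1, 1, 1, 0)   # the same word with bit 4 flipped
assert correct(w_err) == w
```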
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound restricts the rate of a code.  Due to the disjointness condition, a certain number of bits are required to ensure our ability to detect and correct errors.  Suppose a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt;-bit vectors is used to encode &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  The set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the total number of error vectors of weight at most &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the weight-zero vector, corresponding to no error at all, is included in this count; the objective is to design a code that can correct all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, including no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
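Eq. (F.9) can be evaluated directly.  The Python sketch below (illustrative, not part of the text) shows that the [7,4,3] Hamming code exactly saturates the bound for &amp;lt;math&amp;gt;t=1\,\!&amp;lt;/math&amp;gt;, while no [3,2] code can correct a single error:

```python
from math import comb

def hamming_bound_ok(n, k, t):
    # Eq. (F.9): 2^k * sum_{i=0}^{t} C(n, i) <= 2^n.
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

assert hamming_bound_ok(7, 4, 1)        # the [7,4,3] Hamming code
# It saturates the bound (a so-called perfect code): 2^4 * (1 + 7) = 2^7.
assert 2**4 * (1 + 7) == 2**7
assert not hamming_bound_ok(3, 2, 1)    # no [3,2] code can correct one error
```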
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use Stirling's formula (see [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]]) to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log_2 x -(1-x)\log_2 (1-x) \,\!&amp;lt;/math&amp;gt; is the binary entropy function, and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a nonzero vector can be orthogonal to itself; any vector of even weight is an example.  Note that this is different from ordinary real vectors in 3-d space.  &lt;br /&gt;
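A quick numerical illustration of self-orthogonality over GF(2) (Python, illustrative):

```python
def dot2(u, v):
    # Inner product over GF(2).
    return sum(a * b for a, b in zip(u, v)) % 2

# An even-weight vector is orthogonal to itself over GF(2)...
assert dot2((1, 1, 0), (1, 1, 0)) == 0
# ...while an odd-weight vector is not.
assert dot2((1, 0, 0), (1, 0, 0)) == 1
```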
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not tell us whether codes that saturate it exist, but it does tell us that no code can violate it.  Encoding, decoding, error detection, and correction are all difficult problems to solve in general.  One advantage of linear codes is that they provide a systematic method for identifying errors through the parity check operation.  More generally, checking whether or not a bit string (vector) is in the code space would require a look-up table, which is much more time-consuming than applying the parity check matrix; matrix multiplication is quite efficient relative to a table look-up.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1748</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1748"/>
		<updated>2011-11-21T16:42:33Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Example 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  The way this is achieved is by deciding that we will only use these two numbers in our language and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication and these two operations follow the distributive law, the set becomes a '''field''' (a Galois Field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to consider the string of bits to be a vector and to use vector addition (which is component-wise addition) and vector multiplication (which is the inner product).  For example, the addition of the vector &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
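The vector operations just described can be sketched in a few lines of Python (illustrative, not part of the text), reproducing the examples above:

```python
def add2(u, v):
    # Component-wise vector addition over GF(2).
    return tuple((a + b) % 2 for a, b in zip(u, v))

def dot2(u, v):
    # Inner product over GF(2).
    return sum(a * b for a, b in zip(u, v)) % 2

assert add2((0, 0, 1), (0, 1, 1)) == (0, 1, 0)   # matches the text's example
assert dot2((0, 0, 1), (0, 1, 1)) == 1           # 0*0 + 0*1 + 1*1 = 1
```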
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''', since it indicates whether one vector has an even or odd number of 1's at the positions specified by the 1's of the other vector.  If the inner product is zero, we say that the first vector satisfies the parity check of the other, and vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
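Definitions 2 and 3 can be illustrated with a short Python sketch (the function names wt and d_H mirror the text's notation; the code itself is ours):

```python
def wt(v):
    # Hamming weight: the number of nonzero components.
    return sum(1 for x in v if x != 0)

def d_H(v, w):
    # Hamming distance, computed as wt(v + w) over GF(2).
    return wt(tuple((a + b) % 2 for a, b in zip(v, w)))

assert wt((0, 1, 1)) == 2
assert d_H((0, 0, 1), (0, 1, 1)) == 1   # the vectors differ in one place
```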
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When such an &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; code has distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single-bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;.  We do know that an error has occurred, provided at most one error occurred.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the total number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because they allow errors to be identified efficiently, along with the corresponding correct codewords.  This ability is due to the added structure these codes have, which is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the column vectors form a basis that spans the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if two columns are swapped, or one column is added to another to produce a new column that replaces it, the resulting matrix still generates the same code, since its columns remain linearly independent.&lt;br /&gt;
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates every code word.  To see this, recall any code word can be written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Moreover, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt;, its null space is exactly the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional code space, so &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  Notice that the result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, for the syndrome to identify the error uniquely, two distinct correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must produce different syndromes, &amp;lt;math&amp;gt;Pe_1 \neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is called the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  It also does it in such a way that one error can be detected and corrected since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; mutually orthogonal vectors that are also orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form of an augmented matrix &amp;lt;math&amp;gt;(I_k|A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T|I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one arrives at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix: the codewords are precisely the vectors annihilated by it.&lt;br /&gt;
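These properties can be checked numerically.  The following Python sketch (an illustration added here, not part of the original notes) encodes messages with the generator matrix of Eq. (F.6) and computes syndromes with the parity check matrix of Eq. (F.7); all arithmetic is over GF(2), i.e. modulo 2:

```python
GT = [  # rows of G^T = (I_4 | A); codewords are GF(2) combinations of these rows
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 1],
]
P = [  # parity check matrix P = (A^T | I_3)
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(u):
    """Encode a 4-bit message u as the 7-bit codeword u G."""
    return [sum(u[i] * GT[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(v):
    """Compute P v^T; it is the zero vector exactly when v is a codeword."""
    return [sum(row[j] * v[j] for j in range(7)) % 2 for row in P]

# Every codeword is annihilated by P.
w = encode([1, 0, 1, 1])
assert syndrome(w) == [0, 0, 0]

# A single bit flip produces a nonzero syndrome equal to the corresponding
# column of P; since the columns are distinct, this identifies the erred bit.
v = w[:]
v[5] ^= 1                          # flip bit 5
assert syndrome(v) == [row[5] for row in P]
```

Because the seven columns of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; are distinct and nonzero, each single-bit error maps to a unique syndrome, which is exactly what makes the correction step work.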
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each vector obtained by adding an error to a code word should be associated with that particular code word.  This partitions the set of all possible received vectors into disjoint subsets, each containing exactly one code vector.  A message is decoded correctly if the received vector (the one containing the error) is in the subset associated with the original vector (the one with no error).  For example, if &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt; is sent and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; v_2 \,\!&amp;lt;/math&amp;gt; must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A way to decode is to record an array of the possible code words, the possible errors, and their combinations.  The top row of the array lists the code word vectors and the leftmost column lists the errors, with the entry in the first row and first column being the zero vector and all subsequent entries in that column being errors.  The entry in the kth row and jth column is then the sum of the code word at the top of the jth column and the error in the kth row.  Each column of this array is one of the disjoint subsets described above.  Locating a received (erred) word in a column associates it with the code word at the top of that column, and thus corrects the error.&lt;br /&gt;
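This standard array can be built explicitly for the Hamming code of Example 3.  The following Python sketch (added for illustration; the generator rows are copied from Eq. (F.6)) confirms that the 8 cosets of 16 entries each are disjoint and together cover the whole space of 7-bit vectors:

```python
from itertools import product

GT = [  # rows of the [7,4] Hamming generator matrix G^T from Eq. (F.6)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 1],
]

def add(u, v):
    """Componentwise sum of two binary vectors over GF(2)."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# Top row of the array: the 2^4 = 16 codewords.
codewords = []
for u in product([0, 1], repeat=4):
    w = (0,) * 7
    for i, bit in enumerate(u):
        if bit:
            w = add(w, tuple(GT[i]))
    codewords.append(w)

# Leftmost column: the zero vector plus the 7 single-bit errors.
errors = [(0,) * 7] + [tuple(1 if j == k else 0 for j in range(7)) for k in range(7)]

# Each array entry is codeword + error; disjointness means no entry repeats.
table = {add(w, e) for e in errors for w in codewords}
assert len(table) == 8 * 16 == 2 ** 7   # all 128 vectors appear exactly once
```

That the 8 × 16 = 128 entries exhaust the space with no repeats reflects the fact that this code is perfect, as discussed under the Hamming bound below.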
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound restricts the rate of a code.  Due to the disjointness condition, a certain number of bits is required to ensure the ability to detect and correct errors.  Suppose a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bit vectors is used to encode &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  The set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors, including errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is included in the set of error vectors: the objective is to design a code that can correct all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use Stirling's formula to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log_2 x -(1-x)\log_2 (1-x) \,\!&amp;lt;/math&amp;gt; is the binary entropy function and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article by Steane in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].)&lt;br /&gt;
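The bound in Eq. (F.9) is easy to evaluate directly.  In the following Python sketch (illustrative; the helper name is our own), the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code meets the bound with equality, which is why it is called a ''perfect'' code:

```python
from math import comb

def hamming_bound(n, k, t):
    """Return (lhs, rhs) of the Hamming bound 2^k * sum_i C(n,i) <= 2^n."""
    lhs = 2 ** k * sum(comb(n, i) for i in range(t + 1))
    return lhs, 2 ** n

lhs, rhs = hamming_bound(7, 4, 1)
assert lhs == rhs == 128        # 2^4 * (1 + 7) = 2^7: equality, a perfect code

# A hypothetical [5,2] code correcting one error fits the bound with slack:
lhs, rhs = hamming_bound(5, 2, 1)
assert lhs <= rhs               # 4 * 6 = 24 <= 32
```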
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a vector can be orthogonal to itself; for example, &amp;lt;math&amp;gt;(1,1)\cdot(1,1) = 1+1 = 0 \pmod 2\,\!&amp;lt;/math&amp;gt;.  Note that this is different from ordinary real vectors in 3-d space, where only the zero vector is orthogonal to itself.  &lt;br /&gt;
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not indicate whether codes that meet it exist, but it does tell us that no code can violate it.  Encoding, decoding, error detection, and correction are all difficult problems to solve in general.  One advantage of linear codes is that they provide a systematic method for identifying errors through the parity check operation.  More generally, checking whether a bit string (vector) is in the code space would require a look-up table, which is far more time-consuming than applying the parity check matrix; matrix multiplication is quite efficient relative to a table look-up.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues, and many quantum error correcting codes are built from classical linear codes.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_D_-_Group_Theory&amp;diff=1747</id>
		<title>Appendix D - Group Theory</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_D_-_Group_Theory&amp;diff=1747"/>
		<updated>2011-11-21T16:37:53Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Group Theory in Physics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;''&amp;lt;/nowiki&amp;gt;''Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty and perfection.''&amp;lt;nowiki&amp;gt;''&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Hermann Weyl'''&lt;br /&gt;
&lt;br /&gt;
====Symmetries and Groups====&lt;br /&gt;
&lt;br /&gt;
Symmetry arguments have been used widely in mathematics, physics,&lt;br /&gt;
chemistry, biology, computer science, engineering, and elsewhere.  &lt;br /&gt;
Group theory can be an invaluable organizational tool,&lt;br /&gt;
whether it is used explicitly or implicitly, in many areas of&lt;br /&gt;
science.  &lt;br /&gt;
&lt;br /&gt;
In physics, symmetry principles are often used to describe what&lt;br /&gt;
changes and what does not in a physical system undergoing some&lt;br /&gt;
particular transformation.  For example, if a knob is turned in an&lt;br /&gt;
experiment and nothing changes, then that is an invariant of the&lt;br /&gt;
system and thus indicates a symmetry.  (Of course, the trivial case&lt;br /&gt;
where the knob has nothing to do with the experiment, for instance if the&lt;br /&gt;
machine with the knob is unplugged, should be excluded.)  The objective&lt;br /&gt;
here is to explain group theory with this practical viewpoint in&lt;br /&gt;
mind; the idea is for this motivation to be kept in mind&lt;br /&gt;
throughout these notes.  Some formalism is necessary however.  &lt;br /&gt;
&lt;br /&gt;
It is worth noting that very general tools tend to be &lt;br /&gt;
abstract, and so it is with group theory.  However, to reiterate, the &lt;br /&gt;
objective here is to be as concrete as possible, with the emphasis on &lt;br /&gt;
physical applications.  In this regard, it is worth mentioning that, &lt;br /&gt;
directly or indirectly, [[Bibliography#Tinkham:gpthbook|Michael Tinkham's book]] &lt;br /&gt;
on group theory very much influenced these notes, as did the Encyclopaedia of Mathematics and Hamermesh's book, among others.&lt;br /&gt;
&lt;br /&gt;
====Group Theory in Physics====&lt;br /&gt;
&lt;br /&gt;
The applications to physics are too numerous to mention here.  However, several comments&lt;br /&gt;
are in order.  First, if a system has a symmetry (often identifiable by inspection), then it has a&lt;br /&gt;
constraint placed on it.  This limits the acceptable solutions to a problem:&lt;br /&gt;
they must satisfy the symmetry requirement.  Thus identifying&lt;br /&gt;
symmetries is an excellent problem-solving technique.  Choosing &lt;br /&gt;
coordinates to match a symmetry is an example of such symmetry identification.  &lt;br /&gt;
&lt;br /&gt;
A group is a set of symmetries.  To see this, suppose that, for example, elements &amp;lt;math&amp;gt;A,B,C,D,...\,\!&amp;lt;/math&amp;gt; operate on an object in such a way&lt;br /&gt;
that they do not change the object.  Most often in physics the&lt;br /&gt;
elements are matrices and the objects on which they act are vectors.&lt;br /&gt;
If a vector or set of vectors is unchanged by these operations, then&lt;br /&gt;
the vectors have a symmetry described by the action of these&lt;br /&gt;
operators.  In Example 2 the vectors are the vertices of the triangle&lt;br /&gt;
and the triangle is unchanged by the action of the group elements given in the example.  (Notice, as an example&lt;br /&gt;
of how a set of symmetries forms a group, that if the vector is &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and assuming &amp;lt;math&amp;gt;Av=v\,\!&amp;lt;/math&amp;gt;, i.e. &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is a symmetry operation, and also assuming&lt;br /&gt;
&amp;lt;math&amp;gt;Bv=v\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;ABv = A(Bv) = Av = v\,\!&amp;lt;/math&amp;gt;. Thus the set is closed under multiplication, which means that the product of elements in the set is always in the set.)&lt;br /&gt;
One way to think of this is quite literal.  If a symmetry operation is&lt;br /&gt;
applied to the equilateral triangle and the triangle is still&lt;br /&gt;
equilateral with the vertices indistinguishable (assuming no labels), then the&lt;br /&gt;
operation did not change anything discernible.  &lt;br /&gt;
&lt;br /&gt;
It turns out that group theory has been applied with great success to&lt;br /&gt;
many areas of quantum physics: solid-state physics including&lt;br /&gt;
crystallography, nuclear physics, atomic physics, molecular physics,&lt;br /&gt;
and particle physics.  It has also been applied in classical physics&lt;br /&gt;
and relativity.  It has been especially indispensable in quantum field theory and particle physics where symmetries correspond to conserved quantities observed in experiment.  &lt;br /&gt;
&lt;br /&gt;
Some groups of infinite order, such as Lie groups, were originally&lt;br /&gt;
studied largely in order to understand the symmetries of&lt;br /&gt;
differential equations.  This is the set of groups that is discussed&lt;br /&gt;
next.&lt;br /&gt;
&lt;br /&gt;
===Definitions and Examples===&lt;br /&gt;
&lt;br /&gt;
====Definition 1: Group====&lt;br /&gt;
A '''group''' &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is a set of objects &amp;lt;math&amp;gt;\{A,B,C,&lt;br /&gt;
...\}\,\!&amp;lt;/math&amp;gt; together with a composition rule between them (denoted &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; here and &lt;br /&gt;
called a product or multiplication) such that the following are satisfied:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\circ B)\circ C = A\circ (B \circ C)\,\!&amp;lt;/math&amp;gt;. (&amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; is associative.)&lt;br /&gt;
#If &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\in\mathcal{G}\,\!&amp;lt;/math&amp;gt;, then their product is &amp;lt;math&amp;gt;A\circ B \in \mathcal{G}\,\!&amp;lt;/math&amp;gt;.  (The set is closed under multiplication.)&lt;br /&gt;
#There is an element &amp;lt;math&amp;gt;\mathbb{I}\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;  such that, for all &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbb{I}A = A = A\mathbb{I}\,\!&amp;lt;/math&amp;gt;. (&amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; contains the identity element.)&lt;br /&gt;
#For all &amp;lt;math&amp;gt;A\in \mathcal{G}\,\!&amp;lt;/math&amp;gt; there exists an element &amp;lt;math&amp;gt;A^{-1}\in\mathcal{G}\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;AA^{-1} =  \mathbb{I} =A^{-1}A\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the provided examples, the objective is to make a direct connection between a group and a set of symmetries of an object.  The reason is that ''a set of symmetries forms a group'', since it satisfies all the conditions in the definition.  The symmetries are things you can do to (i.e., operations you can perform on) a set that leave the set unchanged.   &lt;br /&gt;
&lt;br /&gt;
To see this, suppose that we operate on a set of vectors whose endpoints have a certain symmetry associated with them (for example, the vertices of a triangle).  Assume the vectors are based at the origin of a coordinate system.  Operating on these with a suitably chosen set of matrix operators may leave the set of vectors unchanged, e.g. the arrows associated with the vectors still point to the same set of points.  Assuming all possible such matrices are included in the set, the set of matrices, i.e. the set of symmetries, forms a group.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a line segment of length 2 cm with its midpoint at zero, so that the end points are located at &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt; cm on the x-axis.  If the line segment were rotated &amp;lt;math&amp;gt;180^o\,\!&amp;lt;/math&amp;gt; about any line perpendicular to the segment, it would look like the same line segment.  (To be definite, choose the axis perpendicular to the x-y plane, after choosing x and y axes.)  This rotation exchanges the two ends: the set of points &amp;lt;math&amp;gt;\pm 1\,\!&amp;lt;/math&amp;gt; is acted upon by an operator that&lt;br /&gt;
exchanges the two, and the operation can be represented by multiplication by &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  There are then two elements in the set of operations to consider.  The first is ''do nothing'', represented by &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;.  (This, of course, is the identity operation &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; for this ''group''.)  The other element is &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  Thus, representing&lt;br /&gt;
multiplication by &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt;, we have a group with the set &amp;lt;math&amp;gt;\{+1,-1\}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and operation &amp;lt;math&amp;gt;\circ \equiv \times \,\!&amp;lt;/math&amp;gt;.  Clearly the product is associative (it is ordinary multiplication), the set contains the identity, products are either &amp;lt;math&amp;gt;+1\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;, both of which are in the set (indicating closure), and the inverse of &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;; all of the requirements defined above are satisfied.  In fact this is the simplest nontrivial group.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
The set of symmetries of an equilateral triangle can be represented in several ways.  Two that are presented here are the set of operations on vectors from the origin to the vertices and the set of permutations on three objects.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Figure D.1&amp;quot;&amp;gt;'''Figure D.1'''&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:triangle2.jpg]]&lt;br /&gt;
|}&lt;br /&gt;
Figure D.1:  An equilateral triangle with vertices in the x-y plane, &amp;lt;math&amp;gt; v_1\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(0,1/\sqrt{3})\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(-1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;, and  &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt;(1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Consider an equilateral triangle with its &lt;br /&gt;
center at the origin of the x-y plane and  vertices&lt;br /&gt;
placed at the following points: &amp;lt;math&amp;gt;(0,1/\sqrt{3})\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;(1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;(-1/2,-1/(2\sqrt{3}))\,\!&amp;lt;/math&amp;gt;.  (See [[#Figure D.1|Figure D.1]].)  Now consider the&lt;br /&gt;
following operations on the triangle: a rotation of &amp;lt;math&amp;gt;0^o\,\!&amp;lt;/math&amp;gt; (do&lt;br /&gt;
nothing), a rotation of &amp;lt;math&amp;gt;120^o\,\!&amp;lt;/math&amp;gt;, a rotation of &amp;lt;math&amp;gt;240^o\,\!&amp;lt;/math&amp;gt;, and a reflection&lt;br /&gt;
about the &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; axis (the line through vertex 1), &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;.  There are two other reflections we could perform, labelled &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;, which are reflections through the lines bisecting the angles at vertices 2 and 3 respectively, as shown in [[#Figure D.1|Figure D.1]].  These make up the set of six symmetry operations on the equilateral triangle.  &lt;br /&gt;
&lt;br /&gt;
If we take the first of these, &amp;lt;math&amp;gt;P_0\,\!&amp;lt;/math&amp;gt;, to be the original configuration (shown in [[#Figure D.1|Figure D.1]]), then each of the first three operations is a rotation from the original configuration.  Each of the last three is obtained from a reflection combined with a rotation.  To be explicit, let us consider the following operations:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_2 = \left(\begin{array}{cc} 1 &amp;amp; 0  \\ 0 &amp;amp; 1  &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
R_1 = \left(\begin{array}{cc} -1/2 &amp;amp; -\sqrt{3}/2 \\ \sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
R_2 = \left(\begin{array}{cc}  -1/2 &amp;amp; \sqrt{3}/2  \\ -\sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.1}}&lt;br /&gt;
where &amp;lt;math&amp;gt;R_1\,\!&amp;lt;/math&amp;gt; is a rotation of the x-y plane by &amp;lt;math&amp;gt;120^o\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt;  is a rotation&lt;br /&gt;
of the x-y plane by &amp;lt;math&amp;gt;240^o\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\mathbb{I}_2\,\!&amp;lt;/math&amp;gt; is a rotation of &amp;lt;math&amp;gt;0^o\,\!&amp;lt;/math&amp;gt;.  In addition to these operations, two others must be included&lt;br /&gt;
to complete the set: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma_1   = \left(\begin{array}{cc} -1 &amp;amp;0 \\ 0 &amp;amp;1 \end{array}\right), \;\; &lt;br /&gt;
\sigma_2 =\sigma_1 R_1 =  \left(\begin{array}{cc} 1/2 &amp;amp; \sqrt{3}/2 \\ \sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
\sigma_3 = \sigma_1R_2 = \left(\begin{array}{cc}  1/2 &amp;amp; -\sqrt{3}/2  \\ -\sqrt{3}/2&lt;br /&gt;
    &amp;amp; -1/2 \end{array}\right), \; \; &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\sigma_1 R_1\,\!&amp;lt;/math&amp;gt; is the same as &amp;lt;math&amp;gt;\sigma_1\circ R_1\,\!&amp;lt;/math&amp;gt;, but the &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; has been dropped&lt;br /&gt;
since this is ordinary matrix multiplication.  This group will be used&lt;br /&gt;
as an example for several group properties and is called &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The&lt;br /&gt;
products of these &lt;br /&gt;
elements are summarized in [[#Table D.1|Table D.1]], which is called the&lt;br /&gt;
multiplication table for the group.  The multiplication table will be&lt;br /&gt;
discussed repeatedly throughout this appendix due to its importance in&lt;br /&gt;
group theory.  It would be advisable to stare at it for some time to&lt;br /&gt;
see what patterns can be identified.  The meaning of these patterns&lt;br /&gt;
will be discussed later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;div id=&amp;quot;Table D.1&amp;quot;&amp;gt;&lt;br /&gt;
'''Table D.1: Group Multiplication Table for''' &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;20&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \downarrow\rightarrow\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_3 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;  \sigma_1  \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_1 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; R_2 \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt; \mathbb{I}_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Table D.1: ''Group multiplication table for the group &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The notation in the upper left corner (&amp;lt;math&amp;gt;\downarrow\rightarrow\,\!&amp;lt;/math&amp;gt;) indicates that the element in the first column is to be multiplied by the element in the first row to obtain the result.  Since the group is not abelian, i.e. the elements do not commute, the order matters.''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
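The multiplication table can be verified directly from the matrices in Eqs. (D.1) and (D.2).  The following Python sketch (added for illustration; floating-point comparison is used in place of exact arithmetic) checks closure, a few table entries, and the nonabelian character of the group:

```python
from math import sqrt, isclose

def mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def same(A, B):
    """Entrywise comparison up to floating-point tolerance."""
    return all(isclose(a, b, abs_tol=1e-12)
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

s = sqrt(3) / 2
I2   = [[1, 0], [0, 1]]
R1   = [[-0.5, -s], [s, -0.5]]      # rotation by 120 degrees
R2   = [[-0.5, s], [-s, -0.5]]      # rotation by 240 degrees
sig1 = [[-1, 0], [0, 1]]            # reflection fixing vertex 1
sig2 = mul(sig1, R1)                # = [[1/2, s], [s, -1/2]]
sig3 = mul(sig1, R2)                # = [[1/2, -s], [-s, -1/2]]

# Entries of Table D.1 (row element times column element):
assert same(mul(R1, R1), R2)        # R1 R1 = R2
assert same(mul(sig2, R2), sig1)    # sigma2 R2 = sigma1
assert same(mul(R2, sig2), sig3)    # R2 sigma2 = sigma3 != sigma1: nonabelian
assert same(mul(sig1, sig1), I2)    # each reflection is its own inverse
```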
&lt;br /&gt;
A second way to identify all possible configurations of&lt;br /&gt;
the triangle that leave it looking the same is to track the positions of the vertices.  Label the vertices 1, 2, 3; there are then six possible arrangements of the labels.  Reading counter-clockwise&lt;br /&gt;
from the top, we can have &amp;lt;math&amp;gt;P_0=(1,2,3)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_2=(3,1,2)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_4=(2,3,1)\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math&amp;gt;P_1=(1,3,2)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_3=(3,2,1)\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;P_5=(2,1,3)\,\!&amp;lt;/math&amp;gt;.  These are all of the permutations of three objects.  (In this case the three objects are the numbers 1,2,3.)  This is another way to represent the various configurations of the equilateral triangle.&lt;br /&gt;
&lt;br /&gt;
====Definition 2: Order of a Group====&lt;br /&gt;
&lt;br /&gt;
The number of elements in a group is called the '''order''' of the group.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The group in Example 1 has two elements and so has order two.  The group in Example 2 has six&lt;br /&gt;
elements, so its order is six.&lt;br /&gt;
&lt;br /&gt;
====Definition 3: Abelian and Nonabelian Group====&lt;br /&gt;
&lt;br /&gt;
A group for which every element of the group commutes with every other element of the group (&amp;lt;math&amp;gt;g_1g_2 = g_2g_1,\;\;\forall g_1,g_2\in \mathcal{G}\,\!&amp;lt;/math&amp;gt;) is called '''abelian'''.  If any two elements do not commute, the group is called '''nonabelian'''.  &lt;br /&gt;
&lt;br /&gt;
It is clear that Example 1 is an abelian group consisting of only two&lt;br /&gt;
elements &amp;lt;math&amp;gt;+1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;.  However, Example 2 is clearly a nonabelian&lt;br /&gt;
group as can be seen from the multiplication table.  For example&lt;br /&gt;
&amp;lt;math&amp;gt;\sigma_2R_2 = \sigma_1 \,\!&amp;lt;/math&amp;gt;, but &amp;lt;math&amp;gt;R_2\sigma_2 =\sigma_3 \neq \sigma_1 \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 4: Cyclic Group====&lt;br /&gt;
A '''cyclic group''' is a group in which every element of the group can be obtained from one element and all its distinct powers.  The particular element is called the '''generating element'''.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example 4 provides examples of cyclic groups.  &lt;br /&gt;
&lt;br /&gt;
====Definition 5: Subgroup====&lt;br /&gt;
&lt;br /&gt;
A '''subgroup''' &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; is a subset of the group elements that satisfies all&lt;br /&gt;
the properties in the definition of a group under the inherited multiplication rule.&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Consider the set &amp;lt;math&amp;gt;\{0,1,2,3, \cdots, N-1\}\,\!&amp;lt;/math&amp;gt; and identify &amp;lt;math&amp;gt; N\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;.  This is written as &amp;lt;math&amp;gt;0 \equiv N\,\!&amp;lt;/math&amp;gt;.  The operation on this set will be addition.  This is the group of integers modulo &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; and is&lt;br /&gt;
denoted &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt;.  To be concrete, let us consider the group &amp;lt;math&amp;gt;\mathbb{Z}_3\,\!&amp;lt;/math&amp;gt;, consisting of &lt;br /&gt;
&amp;lt;math&amp;gt;\{0,1,2;+\}\,\!&amp;lt;/math&amp;gt;.  (When the operation could be ambiguous, it is often useful to specify it explicitly along with the members of the set.)  Let us check that this is a group.  First, addition is certainly associative.  Second, the identity is zero since &lt;br /&gt;
&amp;lt;math&amp;gt;a+0 =a\,\!&amp;lt;/math&amp;gt; for any integer &amp;lt;math&amp;gt;a\,\!&amp;lt;/math&amp;gt;.  Third, the set is closed: for example, &amp;lt;math&amp;gt;1+2=3 = 0\,\!&amp;lt;/math&amp;gt; mod &amp;lt;math&amp;gt;3\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
In other words, since &amp;lt;math&amp;gt;3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; are equivalent, the sum of one and&lt;br /&gt;
two is zero, which is in the set.  Fourth, every element has an inverse: &amp;lt;math&amp;gt;0+0=0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1+2=0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;2+1=0\,\!&amp;lt;/math&amp;gt;.  The order of the group is 3 (hence the subscript).  &lt;br /&gt;
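&lt;br /&gt;
The same check can be carried out exhaustively by machine; a small Python sketch verifying the group axioms for Z_N under addition mod N (associativity of integer addition is taken for granted):&lt;br /&gt;

```python
N = 3
elements = range(N)

# Closure: a + b mod N stays in the set.
closure = all((a + b) % N in elements for a in elements for b in elements)
# Identity: 0 leaves every element unchanged.
identity = all((a + 0) % N == a for a in elements)
# Inverses: for each a there is some b with a + b = 0 mod N.
inverses = all(any((a + b) % N == 0 for b in elements) for a in elements)
print(closure and identity and inverses)  # True
```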
&lt;br /&gt;
====Example 1 Revisited====&lt;br /&gt;
&lt;br /&gt;
Recall  [[#Example 1|Example 1]] is a group with &amp;lt;math&amp;gt;\{+1,-1\}\,\!&amp;lt;/math&amp;gt; using multiplication.  &lt;br /&gt;
This is the simplest nontrivial &lt;br /&gt;
''cyclic group'', since it is a cyclic group of order two.  &lt;br /&gt;
All elements of this group are obtained from powers&lt;br /&gt;
of &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;-1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(-1)^2 =1\,\!&amp;lt;/math&amp;gt;.  Notice that the generating&lt;br /&gt;
element is special; one cannot just take any element of the group to&lt;br /&gt;
be a generating element.&lt;br /&gt;
&lt;br /&gt;
====Example 4====&lt;br /&gt;
&lt;br /&gt;
We can represent the cyclic group of order &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
in several ways.  One we have seen is &amp;lt;math&amp;gt;\mathbb{Z}_N\,\!&amp;lt;/math&amp;gt; with the operation of addition.  Another is the set of elements &lt;br /&gt;
&amp;lt;math&amp;gt;\{e^{2\pi i n/N}\}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n = 0, 1, 2, 3, ..., N-1\,\!&amp;lt;/math&amp;gt; with the operation of multiplication.  Since this group can be&lt;br /&gt;
seen as consisting of the element &amp;lt;math&amp;gt;e^{2\pi i/N}\,\!&amp;lt;/math&amp;gt; and all its&lt;br /&gt;
powers, this is a cyclic group with generating element &lt;br /&gt;
&amp;lt;math&amp;gt;e^{2\pi i/N}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
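&lt;br /&gt;
The multiplicative form of the cyclic group can be checked numerically with the N-th roots of unity e^(2 pi i n/N); a Python sketch (the floating-point comparison uses a tolerance):&lt;br /&gt;

```python
import cmath

N = 5
g = cmath.exp(2j * cmath.pi / N)   # generating element, a primitive N-th root of unity

# The powers g^0, g^1, ..., g^(N-1) exhaust the group...
powers = [g ** n for n in range(N)]
# ...and g^N wraps back to the identity, as a cyclic group of order N requires.
print(len(powers), abs(g ** N - 1) < 1e-9)  # 5 True
```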
&lt;br /&gt;
====Example 5====&lt;br /&gt;
&lt;br /&gt;
The nonzero integers modulo a prime &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, that is, the set &amp;lt;math&amp;gt;\{1,2,\ldots,p-1\}\,\!&amp;lt;/math&amp;gt; with multiplication modulo &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, also form a group: the identity is &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, the product of two residues not divisible by &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; is again not divisible by &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;, and every element has a multiplicative inverse modulo &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt;.  This is modular arithmetic under multiplication as a group.&lt;br /&gt;
&lt;br /&gt;
===Comparing Groups: Homomorphisms and Isomorphisms===&lt;br /&gt;
&lt;br /&gt;
Let us consider two groups &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; with product rules symbolized by &amp;lt;math&amp;gt;\cdot\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; respectively.  Let the elements of &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; be denoted &amp;lt;math&amp;gt;a_1,a_2, ...\,\!&amp;lt;/math&amp;gt; and the elements of &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; be denoted &amp;lt;math&amp;gt;b_1,b_2, ...\,\!&amp;lt;/math&amp;gt;  When comparing two groups to see how similar they are, the relationship among the&lt;br /&gt;
elements under the product rule is all-important.  Therefore, if a map from one set of elements to another is given by &amp;lt;math&amp;gt;f:\mathcal{G}_1\rightarrow\mathcal{G}_2\,\!&amp;lt;/math&amp;gt;, meaning &amp;lt;math&amp;gt;f(a_1) \in\mathcal{G}_2\,\!&amp;lt;/math&amp;gt;, then the two groups have the same (algebraic) structure if, for all&lt;br /&gt;
&amp;lt;math&amp;gt;a_i,a_j,a_k \in \mathcal{G}_1\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
a_i\cdot a_j = a_k \;\; \Rightarrow \;\; f(a_i)\circ f(a_j) = f(a_k).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.3}}  &lt;br /&gt;
(Notice that this can be true even if the map takes all of the elements &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; to the identity.)  &lt;br /&gt;
&lt;br /&gt;
====Definition 6: Homomorphism====&lt;br /&gt;
&lt;br /&gt;
If the condition [[#eqD.3|Eq.(D.3)]] is satisfied, the map is called a '''homomorphic map''' or a '''homomorphism'''.  A homomorphism &amp;lt;math&amp;gt;f\,\!&amp;lt;/math&amp;gt; satisfies the important property that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f(A\cdot B) = f(A) \circ f(B).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.4}}&lt;br /&gt;
The composition &amp;lt;math&amp;gt;\circ\,\!&amp;lt;/math&amp;gt; can, in general, be different from &amp;lt;math&amp;gt;\cdot\,\!&amp;lt;/math&amp;gt;, but here both will be matrix multiplication unless otherwise stated.&lt;br /&gt;
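&lt;br /&gt;
As a concrete illustration of the homomorphism property, consider the map f(a) = a mod 3 from Z_6 (addition mod 6) to Z_3 (addition mod 3); this map and the check below are our own example, not taken from the text:&lt;br /&gt;

```python
def f(a):
    # Candidate homomorphism from Z_6 to Z_3.
    return a % 3

# f(a . b) == f(a) o f(b) for every pair, where "." is + mod 6 and "o" is + mod 3.
ok = all(f((a + b) % 6) == (f(a) + f(b)) % 3
         for a in range(6) for b in range(6))
print(ok)  # True: f is a homomorphism (two-to-one, hence not an isomorphism)
```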
&lt;br /&gt;
====Definition 7: Isomorphism====&lt;br /&gt;
&lt;br /&gt;
If a homomorphism is one-to-one (each &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; is mapped to one and only one &amp;lt;math&amp;gt;b_j\,\!&amp;lt;/math&amp;gt;) and onto (each element in &amp;lt;math&amp;gt;\mathcal{G}_2\,\!&amp;lt;/math&amp;gt; has an element of &amp;lt;math&amp;gt;\mathcal{G}_1\,\!&amp;lt;/math&amp;gt; mapped to it), then the map is called an '''isomorphic map''' or an&lt;br /&gt;
'''isomorphism'''.  &lt;br /&gt;
&lt;br /&gt;
These definitions are used repeatedly in the representation theory of groups discussed below.&lt;br /&gt;
&lt;br /&gt;
===Discussion===&lt;br /&gt;
&lt;br /&gt;
With only these few definitions it is possible to discuss many important properties of groups and some of the reasons why they are so&lt;br /&gt;
important to physics.  Let us first discuss some of the important properties of the group multiplication table.  &lt;br /&gt;
&lt;br /&gt;
====Group Multiplication Table====&lt;br /&gt;
&lt;br /&gt;
The group multiplication table specifies the structure of the group and thus identifies a group.  One example of this is when the&lt;br /&gt;
group is abelian.  For all abelian groups the table is symmetric about the diagonal.  (This follows from the fact that &amp;lt;math&amp;gt;ab=ba\,\!&amp;lt;/math&amp;gt; for abelian&lt;br /&gt;
groups.)  Another example is the presence of subgroups.  This will be illustrated in this section.   &lt;br /&gt;
&lt;br /&gt;
====Subgroups: Return to Example 2====&lt;br /&gt;
&lt;br /&gt;
In [[#Example 2|Example 2]], [[#Table D.1|Table D.1]] immediately shows that the elements&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbb{I}, R_1,\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R_2\,\!&amp;lt;/math&amp;gt; form a subgroup since they are closed&lt;br /&gt;
under multiplication.  Another somewhat less obvious subgroup&lt;br /&gt;
consists of the elements &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_1 \,\!&amp;lt;/math&amp;gt;.  This is a convenient&lt;br /&gt;
method for identifying subgroups, but is clearly limited to groups&lt;br /&gt;
with a relatively small order.&lt;br /&gt;
&lt;br /&gt;
====The Rearrangement Theorem====&lt;br /&gt;
&lt;br /&gt;
Notice that each group element appears in each row and each column of [[#Table D.1|Table D.1]] once and only once.  This is no coincidence, but&lt;br /&gt;
is a general property of the multiplication table for groups: each row and column contains each and every group element&lt;br /&gt;
exactly once, so that each row and column is a simple rearrangement of the set of elements.  For this reason, this is&lt;br /&gt;
sometimes called the rearrangement theorem; it follows directly from the existence of inverses.  (If there were two&lt;br /&gt;
elements in a row that were the same, then &amp;lt;math&amp;gt;ac=ab\,\!&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;a,b,c\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
But then &amp;lt;math&amp;gt;a^{-1}ac = a^{-1}ab \Rightarrow c=b\,\!&amp;lt;/math&amp;gt;, which cannot happen if&lt;br /&gt;
all elements are distinct.)&lt;br /&gt;
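&lt;br /&gt;
The rearrangement theorem is easy to confirm by machine for any small group; a Python sketch using the addition table of Z_4 as an example:&lt;br /&gt;

```python
N = 4
elements = list(range(N))
# Cayley table of Z_4 under addition mod 4.
table = [[(a + b) % N for b in elements] for a in elements]

rows_ok = all(sorted(row) == elements for row in table)
cols_ok = all(sorted(col) == elements for col in zip(*table))
print(rows_ok and cols_ok)  # True: every row and column is a rearrangement
```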
&lt;br /&gt;
===A Little Representation Theory===&lt;br /&gt;
&lt;br /&gt;
A group is specified by a set of elements, its product rule, and the&lt;br /&gt;
relations among the elements of the group under the product rule.  &lt;br /&gt;
For finite order groups the group multiplication table is how one &lt;br /&gt;
identifies a group or shows that two groups are homomorphic&lt;br /&gt;
(explicitly or not).  &lt;br /&gt;
&lt;br /&gt;
====Definition 8: Representation====&lt;br /&gt;
&lt;br /&gt;
A '''matrix representation''' of an abstract group is&lt;br /&gt;
any set of matrices onto which the elements of the abstract group can be homomorphically mapped.  &lt;br /&gt;
&lt;br /&gt;
More generally, if there is a homomorphic map from the set of abstract group elements onto a set of operators which, with their own combination rule (multiplication rule), satisfies the group axioms, then the operators form a representation of the group.  (This includes preserving products as described in [[#Definition 6: Homomorphism|Section 3.1]].)&lt;br /&gt;
&lt;br /&gt;
For our purposes, it is very important to note that a&lt;br /&gt;
set of group elements can always be represented by a set of matrices&lt;br /&gt;
so that we may restrict our attention to matrix representations.  &lt;br /&gt;
This, along with ordinary &lt;br /&gt;
matrix multiplication for the product rule, provides a way to represent&lt;br /&gt;
any group.  This is true for groups that have a finite order as well&lt;br /&gt;
as infinite order (discussed later).  &lt;br /&gt;
&lt;br /&gt;
Note that a representation is a ''homomorphism'' that can be a many-to-one map.  If it is an isomorphism, the representation is said to be '''faithful'''.  If, however, all matrices are the identity matrix, then all group elements are mapped to the identity and the multiplication relations (in the group multiplication table) are preserved; this representation is sometimes called the ''trivial representation''.  This is always a valid, but not very informative and certainly not faithful, representation of any group.  &lt;br /&gt;
&lt;br /&gt;
As will be shown in this first example, there are different sets of matrices that can represent the same group.  This example will provide motivation for what follows.&lt;br /&gt;
&lt;br /&gt;
====Example 6====&lt;br /&gt;
&lt;br /&gt;
Let us consider an example of the representation of the group from&lt;br /&gt;
[[#Example 2|Example 2]].  This is a group of operations that will&lt;br /&gt;
take any permutation of the vertices to any other permutation.  This&lt;br /&gt;
is also the set of permutations of three objects.  This group is often&lt;br /&gt;
denoted &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  The set of matrices representing the&lt;br /&gt;
rotations, reflection, and rotations combined with reflection provides&lt;br /&gt;
one way of representing this group.  Another way to represent this&lt;br /&gt;
group is to use &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrices rather than the&lt;br /&gt;
&amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices given in the example.  Let us&lt;br /&gt;
consider the following set of matrices:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;1&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_2 = \left(\begin{array}{ccc} 0&amp;amp;0&amp;amp;1 \\ 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;1&amp;amp;0 \end{array}\right), \\&lt;br /&gt;
P_4 = \left(\begin{array}{ccc} 0&amp;amp;1&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \\ 1&amp;amp;0&amp;amp;0 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_1 = \left(\begin{array}{ccc} 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \\ 0&amp;amp;1&amp;amp;0 \end{array}\right),\\&lt;br /&gt;
P_3 = \left(\begin{array}{ccc} 0&amp;amp;0&amp;amp;1 \\ 0&amp;amp;1&amp;amp;0 \\ 1&amp;amp;0&amp;amp;0 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
P_5 = \left(\begin{array}{ccc} 0&amp;amp;1&amp;amp;0 \\ 1&amp;amp;0&amp;amp;0 \\ 0&amp;amp;0&amp;amp;1 \end{array}\right). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|D.5}}&lt;br /&gt;
Clearly, when these matrices act on a column vector, labelling the&lt;br /&gt;
vertices,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{c} 1 \\ 2 \\ 3 \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.6}}&lt;br /&gt;
the result is one of the permutations of three objects.  These&lt;br /&gt;
operations correspond to the same actions as the &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices&lt;br /&gt;
given in [[#Example 2|Example 2]] above.  Therefore, these two sets of matrices&lt;br /&gt;
represent the ''same'' group, &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt;.  These representations are clearly&lt;br /&gt;
different; in fact, the dimensions of the matrices representing the&lt;br /&gt;
group are different for the two representations.  There are &lt;br /&gt;
other representations that can be immediately constructed.   Consider&lt;br /&gt;
a set of matrices like the following:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbb{I}_5 = \left(\begin{array}{cc} \mathbb{I}_3&amp;amp;0 \\ 0&amp;amp;\mathbb{I}_2 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_2 = \left(\begin{array}{cc} P_2&amp;amp;0 \\ 0&amp;amp;R_1  \end{array}\right), \\&lt;br /&gt;
g_4 = \left(\begin{array}{cc} P_4&amp;amp;0 \\ 0&amp;amp;R_2  \end{array}\right), \; \;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_1 = \left(\begin{array}{cc} P_1&amp;amp;0 \\ 0&amp;amp; \sigma_1 \end{array}\right), \\&lt;br /&gt;
g_3 = \left(\begin{array}{cc} P_3&amp;amp;0 \\ 0&amp;amp;\sigma_2 \end{array}\right), \;\;\;&lt;br /&gt;
&amp;amp;&lt;br /&gt;
g_5 = \left(\begin{array}{cc} P_5&amp;amp;0 \\ 0&amp;amp; \sigma_3  \end{array}\right). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|D.7}}&lt;br /&gt;
This set of matrices is said to be block-diagonal since it only has&lt;br /&gt;
non-zero elements in blocks along the diagonal.  The &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; represents a&lt;br /&gt;
block of zeroes which is either &amp;lt;math&amp;gt;3\times 2\,\!&amp;lt;/math&amp;gt; (upper right) or &amp;lt;math&amp;gt;2\times&lt;br /&gt;
3\,\!&amp;lt;/math&amp;gt; (lower left).  This set of matrices clearly satisfies the same multiplication relations as the sets given above,&lt;br /&gt;
(&amp;lt;math&amp;gt;\{\mathbb{I}_3,P_1,P_2,P_3,P_4,P_5\}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\{\mathbb{I}_2, R_1, R_2, \sigma_1, \sigma_2, \sigma_3\}\,\!&amp;lt;/math&amp;gt;), since the matrices multiply in&lt;br /&gt;
blocks.  These matrices satisfy the same multiplication table as the group&lt;br /&gt;
elements and thus form an isomorphic representation.  Therefore this is another representation of&lt;br /&gt;
the group &amp;lt;math&amp;gt;S_3\,\!&amp;lt;/math&amp;gt; that is different from either of the&lt;br /&gt;
two representations in the subblocks along the diagonal since it is a&lt;br /&gt;
combination of the two.&lt;br /&gt;
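&lt;br /&gt;
The closure of the six permutation matrices of Eq. (D.5) under multiplication can be verified directly; a pure-Python sketch (the helper `perm_matrix` is our construction, built so that the matrix applied to the column (1,2,3)^T returns the permutation itself):&lt;br /&gt;

```python
from itertools import permutations

def perm_matrix(p):
    # M[i][j] = 1 exactly when label j+1 ends up in position i,
    # so M applied to the column (1, 2, 3)^T yields the tuple p.
    return tuple(tuple(1 if p[i] == j + 1 else 0 for j in range(3))
                 for i in range(3))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

mats = {perm_matrix(p) for p in permutations((1, 2, 3))}
# Closure: the product of any two of the six matrices is again one of the six.
closed = all(matmul(A, B) in mats for A in mats for B in mats)
print(len(mats), closed)  # 6 True
```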
&lt;br /&gt;
====Definition 9: Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; be an invertible matrix and &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be any matrix.  In these notes, by '''similarity transformation'''  we mean a transformation of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;M^\prime\,\!&amp;lt;/math&amp;gt; that looks like &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;M^\prime = SMS^{-1}.\,\!&amp;lt;/math&amp;gt;|D.8}}&lt;br /&gt;
We say the matrices &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;M^\prime\,\!&amp;lt;/math&amp;gt; are similar matrices.  &lt;br /&gt;
&lt;br /&gt;
The importance of similarity transformations for representation theory is that they leave matrix equations unchanged.  Suppose &amp;lt;math&amp;gt;A=BC \,\!&amp;lt;/math&amp;gt;.  Then defining &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;A=BC \; \Rightarrow A^\prime=B^\prime C^\prime\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
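&lt;br /&gt;
This invariance can be checked with a small numerical sketch; the particular 2x2 matrices below are arbitrary choices of ours, and exact rational arithmetic avoids rounding issues:&lt;br /&gt;

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(S):
    # Inverse of a 2x2 matrix via the adjugate formula.
    det = Fraction(S[0][0] * S[1][1] - S[0][1] * S[1][0])
    return [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]

B = [[1, 2], [3, 4]]
C = [[0, 1], [1, 1]]
A = matmul(B, C)              # A = BC holds by construction
S = [[2, 1], [1, 1]]          # an arbitrary invertible matrix

def sim(M):
    # The similarity transformation M -> S M S^{-1}.
    return matmul(matmul(S, M), inv2(S))

holds = (matmul(sim(B), sim(C)) == sim(A))
print(holds)  # True: A = BC implies A' = B'C'
```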
&lt;br /&gt;
For more discussion on similarity transformations, see [[Appendix C - Vectors and Linear Algebra|Appendix C]], especially [[Appendix C - Vectors and Linear Algebra#The Trace|Section 3.5]], [[Appendix C - Vectors and Linear Algebra#The Trace|Section 3.6]], and  [[Appendix C - Vectors and Linear Algebra#The Trace|Section 5.1]].&lt;br /&gt;
&lt;br /&gt;
====Example 6 Continued====&lt;br /&gt;
&lt;br /&gt;
Example 6 is a non-trivial problem even though it appears&lt;br /&gt;
otherwise.  The way to show this is to&lt;br /&gt;
perform a similarity transformation, &amp;lt;math&amp;gt;g&lt;br /&gt;
\rightarrow S g S^{-1}\,\!&amp;lt;/math&amp;gt;, on all elements &amp;lt;math&amp;gt;g\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the group.  Since &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; is any invertible matrix,&lt;br /&gt;
it could mix all rows and columns.  This would make it very difficult to identify the block-diagonal form or even know that it exists unless some other tools are used.&lt;br /&gt;
&lt;br /&gt;
Furthermore, given a set of matrices that are known to form a representation of the group, it is non-trivial to find the similarity transformation that will simultaneously block-diagonalize all of these matrices to enable the identification of irreducible blocks.&lt;br /&gt;
&lt;br /&gt;
====Equivalent Representations====&lt;br /&gt;
&lt;br /&gt;
Two representations &amp;lt;math&amp;gt;D^{(1)}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D^{(2)} \,\!&amp;lt;/math&amp;gt; are '''equivalent''' if and only if there is an invertible matrix &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;D^{(1)} = SD^{(2)}S^{-1} \,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We will only consider matrix representations.  In this case, the matrices will act on a vector space &amp;lt;math&amp;gt;\mathcal{V}\,\!&amp;lt;/math&amp;gt; called the '''representation space.'''&lt;br /&gt;
&lt;br /&gt;
===Miscellaneous Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 10: Stabilizer====&lt;br /&gt;
&lt;br /&gt;
The '''stabilizer''' of an element &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; of a set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; is the subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that leaves the element &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; fixed: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{S} = \{S\in \mathcal{G}|Sm=m\}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.9}}&lt;br /&gt;
The stabilizer of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt; is also called the '''isotropy group''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, the '''isotropy subgroup''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, the '''stationary subgroup''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;, or sometimes in physics, '''little group''' of &amp;lt;math&amp;gt;m\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Centralizer====&lt;br /&gt;
&lt;br /&gt;
The '''centralizer''' subgroup of a group consists of elements of the group that commute with all elements of a certain set.&lt;br /&gt;
&lt;br /&gt;
====Definition 12: Pauli Group====&lt;br /&gt;
The '''Pauli Group''' on &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits, denoted &amp;lt;math&amp;gt;\mathcal{P}_n\,\!&amp;lt;/math&amp;gt;, is the set of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-fold tensor products of the Pauli matrices &amp;lt;math&amp;gt;\mathbb{I}, X, Y, Z\,\!&amp;lt;/math&amp;gt;, together with the overall coefficients &amp;lt;math&amp;gt;\pm 1,\pm i\,\!&amp;lt;/math&amp;gt;.  It is defined here due to its importance for quantum error correcting codes; the factors &amp;lt;math&amp;gt;\pm 1,\pm i\,\!&amp;lt;/math&amp;gt; are required for the closure property in the definition of a group.&lt;br /&gt;
&lt;br /&gt;
====Properties of the Pauli Group====&lt;br /&gt;
&lt;br /&gt;
Let us consider the Pauli group for 2 qubits with the tensor product symbols omitted.  The following are elements:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}\mathbb{I},\mathbb{I}X,\mathbb{I}Y,\mathbb{I}Z,X\mathbb{I},XX,XY,XZ,Y\mathbb{I},YX,YY,YZ,Z\mathbb{I},ZX,ZY,ZZ,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.10}}&lt;br /&gt;
as are all of these elements multiplied by &amp;lt;math&amp;gt;-1,\,\!&amp;lt;/math&amp;gt; and all of these elements multiplied by &amp;lt;math&amp;gt;i,\,\!&amp;lt;/math&amp;gt; as well as all of these elements multiplied by &amp;lt;math&amp;gt;-i.\,\!&amp;lt;/math&amp;gt;  Thus there are &amp;lt;math&amp;gt;4^3\,\!&amp;lt;/math&amp;gt; total elements of the group for two qubits.  In general there are &amp;lt;math&amp;gt;4\cdot 4^n\,\!&amp;lt;/math&amp;gt; elements for the Pauli group for &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits.&lt;br /&gt;
&lt;br /&gt;
One of the nice and interesting properties of the Pauli group is that every pair of its elements, say &amp;lt;math&amp;gt;A,B\,\!&amp;lt;/math&amp;gt;, either commutes (&amp;lt;math&amp;gt;[A,B]= AB-BA =0\,\!&amp;lt;/math&amp;gt;) or anti-commutes (&amp;lt;math&amp;gt;\{A,B\} = AB+BA =0\,\!&amp;lt;/math&amp;gt;).  This turns out to be very useful.  &lt;br /&gt;
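&lt;br /&gt;
The commute-or-anticommute property can be verified exhaustively for a single qubit (and it extends to tensor products factor by factor); a pure-Python sketch over the four Pauli matrices:&lt;br /&gt;

```python
# The four single-qubit Pauli matrices as tuples of tuples.
I = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
Y = ((0, -1j), (1j, 0))
Z = ((1, 0), (0, -1))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def combine(A, B, sign):
    # AB + sign * BA, so sign=-1 is the commutator and sign=+1 the anti-commutator.
    AB, BA = matmul(A, B), matmul(B, A)
    return tuple(tuple(AB[i][j] + sign * BA[i][j] for j in range(2)) for i in range(2))

zero = ((0, 0), (0, 0))
paulis = [I, X, Y, Z]
# Every pair either commutes (AB - BA = 0) or anti-commutes (AB + BA = 0).
ok = all(combine(A, B, -1) == zero or combine(A, B, +1) == zero
         for A in paulis for B in paulis)
print(ok)  # True
```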
&lt;br /&gt;
Another notation for [[#eqD.10|Equation (D.10)]] is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I},X_2,Y_2,Z_2,X_1,X_1X_2,X_1Y_2,X_1Z_2,Y_1,Y_1X_2,Y_1Y_2,Y_1Z_2,Z_1,Z_1X_2,Z_1Y_2,Z_1Z_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.11}}&lt;br /&gt;
Clearly this index notation has an advantage for large products.  It also enables us to immediately see the weight of an operator.&lt;br /&gt;
&lt;br /&gt;
====Definition 13: Weight of an Operator====&lt;br /&gt;
&lt;br /&gt;
The '''weight of an operator''' is the number of non-identity elements in the tensor product.  &lt;br /&gt;
&lt;br /&gt;
This definition is most often used in the context of the Pauli Group.  Its importance is seen in quantum error correcting codes.&lt;br /&gt;
&lt;br /&gt;
====Definition 14: Generators of a Group====&lt;br /&gt;
&lt;br /&gt;
Let us consider a discrete group (or subgroup of a larger group).  There exists a subset of the group elements that will give all of the (sub)group elements through multiplication.  The elements in this subset are called '''generators''' of the group.  &lt;br /&gt;
&lt;br /&gt;
Note that the set of generators is not unique.  &lt;br /&gt;
&lt;br /&gt;
The generators are a very convenient set to use because it is a much smaller set than the whole group, and many properties of the group can be discovered using only the generators.  For example, if every generator of a subgroup leaves an object invariant, then every element of the subgroup also leaves it invariant, since every element is a product of generators.  Thus one only needs to check whether or not the generators leave an object invariant.  &lt;br /&gt;
&lt;br /&gt;
One example is the stabilizer subgroup where a set of generators stabilizes, or leaves invariant, the code words of the stabilizer code.&lt;br /&gt;
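&lt;br /&gt;
Generating a (sub)group from its generators amounts to multiplying elements together until no new ones appear; a Python sketch of one way to implement this closure loop (our own implementation, with permutations written as index tuples):&lt;br /&gt;

```python
def compose(p, q):
    # (p o q)[i] = p[q[i]] for permutations written as index tuples.
    return tuple(p[q[i]] for i in range(len(q)))

def generate(generators):
    group = set(generators)
    new = set(generators)
    while new:  # keep forming products until closure is reached
        new = {compose(a, b) for a in group for b in group} - group
        group |= new
    return group

# A 3-cycle and a transposition together generate all of S_3.
cycle = (1, 2, 0)
swap = (1, 0, 2)
n = len(generate([cycle, swap]))
print(n)  # 6
```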
&lt;br /&gt;
====Definition 15: Normalizer====&lt;br /&gt;
&lt;br /&gt;
The '''normalizer''' of a set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; is the subgroup &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; of a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; that leaves the set &amp;lt;math&amp;gt;\mathcal{M}\,\!&amp;lt;/math&amp;gt; fixed: &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{S} = \{S\in \mathcal{G}|S\mathcal{M}=\mathcal{M}\}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.12}}&lt;br /&gt;
Note the difference from the centralizer, with which the normalizer should not be confused.  The centralizer leaves ''every element'' of the set fixed.  The normalizer contains the centralizer as a subgroup, but its elements may move elements around within the set.&lt;br /&gt;
&lt;br /&gt;
====Definition 16: Coset====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; be a subgroup and &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; be an element of the group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt;.  The left '''coset''' is a subset of the group &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G\mathcal{S} = \{GS|S\in\mathcal{S} \}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.13}}&lt;br /&gt;
&lt;br /&gt;
One can similarly define the right coset.  &lt;br /&gt;
&lt;br /&gt;
The importance of cosets is that they partition the group in a particular way.  If there is another coset, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
K\mathcal{S} = \{KS|S\in\mathcal{S} \},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.14}}&lt;br /&gt;
then either &amp;lt;math&amp;gt;G\mathcal{S}=K\mathcal{S}\,\!&amp;lt;/math&amp;gt; or they are disjoint sets, having no element in common.  (This is because &amp;lt;math&amp;gt;\mathcal{S}\,\!&amp;lt;/math&amp;gt; is a subgroup: if the two cosets share an element, then &amp;lt;math&amp;gt;GS_1=KS_2\,\!&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;S_1,S_2\in\mathcal{S}\,\!&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;K^{-1}G=S_2S_1^{-1}\in\mathcal{S}\,\!&amp;lt;/math&amp;gt;, which shows the two cosets are the same set.)&lt;br /&gt;
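&lt;br /&gt;
The partition into cosets can be seen concretely in Z_12 with the subgroup S = {0, 3, 6, 9}; the example group and subgroup are our choice, not the text's:&lt;br /&gt;

```python
N = 12
S = {0, 3, 6, 9}  # a subgroup of Z_12 under addition mod 12

# The coset g + S for every group element g; only distinct cosets survive in the set.
cosets = {frozenset((g + s) % N for s in S) for g in range(N)}

disjoint = all(a == b or not (a & b) for a in cosets for b in cosets)
covers = set().union(*cosets) == set(range(N))
print(len(cosets), disjoint and covers)  # 3 True
```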
&lt;br /&gt;
===Infinite Order Groups: Lie Groups===&lt;br /&gt;
&lt;br /&gt;
All of the examples presented so far have been groups of finite order.  The groups of infinite order considered here are described by one or more continuous parameters.  Groups that are differentiable with respect to those parameters are called ''Lie groups''.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Definition 17: Lie Group====&lt;br /&gt;
&lt;br /&gt;
A '''Lie group''' is a group that is also a differentiable manifold.  (See for example [[Bibliography#Cecile:book|Analysis, Manifolds, and Physics]]).  &lt;br /&gt;
&lt;br /&gt;
In this section, several examples of Lie groups are given.  In physics these groups correspond to a continuous set of symmetries, whereas the groups of finite order correspond to a discrete set of symmetries.&lt;br /&gt;
&lt;br /&gt;
====Example 7====&lt;br /&gt;
&lt;br /&gt;
The Lie group most often used as the introductory example is the group consisting of the set &amp;lt;math&amp;gt;e^{i\theta}\,\!&amp;lt;/math&amp;gt; for all &lt;br /&gt;
&amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  This group has an infinite number of elements (i.e. an infinite order) and one parameter, &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  The group is also a differentiable manifold---a circle.  Notice that this group is also isomorphic to the set of matrices &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
       \cos \theta &amp;amp; -\sin \theta \\&lt;br /&gt;
       \sin \theta &amp;amp; \cos \theta &lt;br /&gt;
\end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.15}}&lt;br /&gt;
If this matrix were to act on a unit vector in the x-y plane, it would rotate that vector; as &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt; varies from 0 to &amp;lt;math&amp;gt;2\pi\,\!&amp;lt;/math&amp;gt;, the tip of the vector would sweep out a circle of unit radius.&lt;br /&gt;
&lt;br /&gt;
====Example 8====&lt;br /&gt;
&lt;br /&gt;
Another example of a Lie group, and one of the most important for quantum information, is the set of complex &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices&lt;br /&gt;
that satisfy&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = \mathbb{I} = U U^\dagger. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.16}}&lt;br /&gt;
This group is called &amp;lt;math&amp;gt;U(2)\,\!&amp;lt;/math&amp;gt;  and is the set of ''unitary'' &lt;br /&gt;
&amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrices (hence the &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;).  Notice that the determinant&lt;br /&gt;
of this set is &amp;lt;math&amp;gt;e^{i\alpha}\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; is a real number, since  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
1 = \det(\mathbb{I}) = \det(U U^\dagger) = \det(U)\det(U^\dagger) &lt;br /&gt;
  = \det(U)(\det(U))^*. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.17}}&lt;br /&gt;
&lt;br /&gt;
There is a subgroup of this group that is often considered---the subgroup with determinant one.  This group is denoted &amp;lt;math&amp;gt;SU(2)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and is known as the ''special unitary group''.  The term unitary refers to the fact that &amp;lt;math&amp;gt;U^\dagger U = I = UU^\dagger\,\!&amp;lt;/math&amp;gt;, and the &amp;quot;S&amp;quot; for special indicates that it has determinant one.&lt;br /&gt;
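&lt;br /&gt;
The defining conditions of Eq. (D.16), together with det U = 1 for SU(2), are easy to test numerically; a Python sketch using one standard parametrization of an SU(2) element, ((a, -b*), (b, a*)) with |a|^2 + |b|^2 = 1 (the particular numbers below are arbitrary choices):&lt;br /&gt;

```python
import cmath
import math

# One standard SU(2) parametrization: ((a, -b*), (b, a*)) with |a|^2 + |b|^2 = 1.
a = cmath.exp(0.3j) * math.cos(0.5)
b = cmath.exp(-0.2j) * math.sin(0.5)
U = ((a, -b.conjugate()), (b, a.conjugate()))

def dagger(M):
    # Conjugate transpose of a 2x2 matrix.
    return tuple(tuple(M[j][i].conjugate() for j in range(2)) for i in range(2))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

P = matmul(dagger(U), U)
unitary = all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
              for i in range(2) for j in range(2))
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
print(unitary, abs(det - 1) < 1e-12)  # True True
```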
&lt;br /&gt;
====Example 9====&lt;br /&gt;
&lt;br /&gt;
One can immediately generalize the unitary and special unitary groups&lt;br /&gt;
to &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; matrices.  These are denoted &amp;lt;math&amp;gt;U(N)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;SU(N)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
respectively.  In quantum computing, the important unitary groups are those of the form &amp;lt;math&amp;gt;U(2^n)\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is the number of qubits.  This is the group of all possible unitary transformations on a set of &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits.&lt;br /&gt;
&lt;br /&gt;
====Example 10====&lt;br /&gt;
&lt;br /&gt;
The complex general linear group is the set of invertible &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; matrices with complex entries.  It is denoted &amp;lt;math&amp;gt;GL(N,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===More Representation Theory===&lt;br /&gt;
&lt;br /&gt;
In physics we are most often concerned with linear representations of groups that use linear operators to represent group elements; we represent these operators with matrices.  In this Appendix, the focus is entirely on these types of representations, although this is not always stated explicitly.  Although these comments have been made above for finite groups, they are worth reiterating due to their importance and because they also apply to infinite order groups, such as Lie groups.  Furthermore, definitions introduced for finite order groups are also applicable to Lie groups.&lt;br /&gt;
&lt;br /&gt;
Thus the previous discussion of representation theory applies to the representation of Lie groups.  A representation of a group can be &amp;quot;reduced&amp;quot; to block-diagonal form.  When these blocks cannot be reduced further, they are called &amp;quot;irreducible,&amp;quot; and they make up the &amp;quot;irreducible representations.&amp;quot;  For our purposes, the central concern of representation theory is these irreducible blocks and how to find them.  &lt;br /&gt;
&lt;br /&gt;
A set of matrices may be block-diagonalizable and yet, having been acted upon by a highly non-trivial similarity transformation &amp;lt;math&amp;gt;S \,\!&amp;lt;/math&amp;gt;, show no obvious block structure; moreover, a group &amp;lt;math&amp;gt;\mathcal{G}\,\!&amp;lt;/math&amp;gt; may be represented by sets of matrices of many different dimensions and many different block-diagonal forms.  Therefore, finding the irreducible blocks and the similarity transformation that simultaneously block-diagonalizes all matrices of a given representation is highly non-trivial.  &lt;br /&gt;
&lt;br /&gt;
Before discussing the representation of Lie groups, there is another definition that is quite helpful.  &lt;br /&gt;
&lt;br /&gt;
====The Lie Algebra of a Lie Group====&lt;br /&gt;
&lt;br /&gt;
The Lie algebra of a Lie group is defined as the set of left-invariant vector fields on the manifold of the Lie group.  For our purposes, the Lie algebra will be described by the basis elements of the tangent space at the origin of the group, which is isomorphic to the set of left-invariant vector fields.  To see how the group and algebra are related, and why this is useful, suppose the Lie algebra corresponding to a Lie group has a set of basis elements &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt;.  Let &amp;lt;math&amp;gt;g\in\mathcal{G}\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;\{a_i\}\,\!&amp;lt;/math&amp;gt; be a set of parameters (which can be taken to be real).  Then an element of the Lie algebra is given by &amp;lt;math&amp;gt; \sum_i a_i\lambda_i\,\!&amp;lt;/math&amp;gt; and an element of the group written in terms of these parameters is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
g=\exp\left(-i\sum_i a_i \lambda_i\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.17}}&lt;br /&gt;
The tangent space at the origin is given by the derivative of &amp;lt;math&amp;gt;g \,\!&amp;lt;/math&amp;gt; with respect to the parameters &amp;lt;math&amp;gt; a_i\,\!&amp;lt;/math&amp;gt;.  In this way, one sees that the group is an analytic manifold.  There are several reasons why it is useful to consider the Lie algebra.  One is that it is often easier to analyze than the Lie group, and several important properties of the Lie group can be obtained from properties of the Lie algebra.  (For example, subalgebras correspond to subgroups.)&lt;br /&gt;
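As a concrete check of [[#eqD.17|Equation (D.17)]] (a numerical sketch, not from the text), take the Pauli matrices as a basis for the Lie algebra of SU(2): exponentiating a real linear combination gives a unitary matrix with determinant one.&lt;br /&gt;

```python
# Numerical sketch of Eq. (D.17) for SU(2): the lambda_i are the Pauli
# matrices.  exp(-i H) is computed by diagonalizing the Hermitian matrix H.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

a = [0.3, -1.2, 0.7]                      # arbitrary real parameters a_i
H = a[0]*sx + a[1]*sy + a[2]*sz           # element of the Lie algebra
w, v = np.linalg.eigh(H)                  # H = v diag(w) v†
g = (v * np.exp(-1j * w)) @ v.conj().T    # g = exp(-i H)

assert np.allclose(g.conj().T @ g, np.eye(2))   # g is unitary
assert np.isclose(np.linalg.det(g), 1.0)        # det g = 1, so g is in SU(2)
```

The determinant is one because the Pauli matrices are traceless: det(exp(-iH)) = exp(-i Tr H) = 1.&lt;br /&gt;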
&lt;br /&gt;
====Representation Theory for Lie Groups====&lt;br /&gt;
&lt;br /&gt;
As with finite order groups, one of the primary objectives of this introduction to group theory is to enable one to find irreducible representations of a group from a given reducible one.  At the least, the objective should be to understand what this means, how one would go about it in principle, and how it is used in quantum physics and quantum computing.  &lt;br /&gt;
&lt;br /&gt;
Lie groups, represented by sets of matrices depending on differentiable parameters, may also be described by matrices that are reducible to block-diagonal form with blocks that cannot be reduced further.  These irreducible blocks form irreducible representations of the group.  One may suppose that irreducible representations of Lie groups are more difficult to understand than those of finite groups because there are infinitely many matrices in the set of group elements.  This is certainly true, so one sometimes relies on the Lie algebra.  Suppose a set of elements &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; of a Lie algebra obeys a particular set of commutation relations, say&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[\lambda_i,\lambda_j] = 2i\sum_kf_{ijk}\lambda_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.18}}&lt;br /&gt;
where &amp;lt;math&amp;gt;f_{ijk}\,\!&amp;lt;/math&amp;gt; is some set of constants (and the factor of two is a non-standard convention).  Then any other set that obeys the same commutation relations is also a representation of the same Lie algebra.  The representation of the algebra can then give a representation of the group through exponentiation, although the representation may not be faithful.  &lt;br /&gt;
&lt;br /&gt;
Now let us suppose that there exists a similarity transformation &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; that will simultaneously block-diagonalize all elements of a group.  Then, observing that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
SgS^{-1}=S\exp\left(-i\sum_i a_i \lambda_i\right)S^{-1} = \exp\left(-i\sum_i a_i S\lambda_iS^{-1}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.19}}&lt;br /&gt;
it is clear that the same similarity transformation will block-diagonalize the elements of the algebra as well.&lt;br /&gt;
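[[#eqD.19|Equation (D.19)]] can also be verified numerically.  The sketch below (illustrative values, not from the text) computes the matrix exponential by eigendecomposition, using the fact that a random matrix is diagonalizable with probability one.&lt;br /&gt;

```python
# Numerical check of Eq. (D.19): S exp(A) S^{-1} = exp(S A S^{-1}).
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition.

    Assumes A is diagonalizable, which holds almost surely for a random matrix.
    """
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

rng = np.random.default_rng(1)
A = 0.3 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
S = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Sinv = np.linalg.inv(S)

lhs = S @ expm(A) @ Sinv          # conjugate the exponential
rhs = expm(S @ A @ Sinv)          # exponentiate the conjugate
assert np.allclose(lhs, rhs)
```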
&lt;br /&gt;
====Some Useful Relations Among Lie Algebra Elements====&lt;br /&gt;
&lt;br /&gt;
A Lie algebra will obey the commutation relations of [[#eqD.18|Equation (D.18)]].  However, since the emphasis here is on the representation of groups in terms of matrices, several other useful relations will also be listed.  These relations apply to all Lie algebra elements of &amp;lt;math&amp;gt;SU(d)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
We have chosen the following convention for the normalization of &lt;br /&gt;
the algebra of Hermitian matrices that represent generators of &amp;lt;math&amp;gt;SU(d)\,\!&amp;lt;/math&amp;gt;:  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\text{Tr}(\lambda_i\lambda_j) = 2\delta_{ij}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.20}}&lt;br /&gt;
&lt;br /&gt;
The commutation and anti-commutation relations of the matrices &lt;br /&gt;
representing the basis for the Lie algebra can be summarized &lt;br /&gt;
by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_i \lambda_j = \frac{2}{d}\delta_{ij}I + if_{ijk} \lambda_k &lt;br /&gt;
                      + d_{ijk}\lambda_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.21}}&lt;br /&gt;
where here, and throughout this section, a sum over repeated &lt;br /&gt;
indices is understood.  &lt;br /&gt;
&lt;br /&gt;
As with any Lie algebra, we have the Jacobi identity:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ilm}f_{jkl} + f_{jlm}f_{kil} + f_{klm}f_{ijl} =0,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.22}}&lt;br /&gt;
which may also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[[\lambda_i,\lambda_j],\lambda_k]+ [[\lambda_j,\lambda_k],\lambda_i] + [[\lambda_k,\lambda_i],\lambda_j]  =0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.23}}&lt;br /&gt;
There is also a Jacobi-like identity,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ilm}d_{jkl} + f_{jlm}d_{kil} + f_{klm}d_{ijl} =0,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.24}}&lt;br /&gt;
which was given by Macfarlane et al. \cite{Macfarlane}. &lt;br /&gt;
&lt;br /&gt;
Also provided in \cite{Macfarlane} are the following identities:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
d_{iik} &amp;amp;= 0, \\&lt;br /&gt;
d_{ijk}f_{ljk} &amp;amp;= 0,  \\&lt;br /&gt;
f_{ijk}f_{ljk} &amp;amp;= d\delta_{il},  \\&lt;br /&gt;
d_{ijk}d_{ljk} &amp;amp;= \frac{d^2 - 4}{d}\delta_{il},  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.25}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
f_{ijm}f_{klm} = \frac{2}{d}(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}) &lt;br /&gt;
                  + (d_{ikm}d_{jlm} - d_{jkm}d_{ilm}) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.26}}&lt;br /&gt;
and finally&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
f_{piq}f_{qjr}f_{rkp} &amp;amp;= -\left(\frac{d}{2}\right)f_{ijk},\\&lt;br /&gt;
d_{piq}f_{qjr}f_{rkp} &amp;amp;= -\left(\frac{d}{2}\right)d_{ijk},\\&lt;br /&gt;
d_{piq}d_{qjr}f_{rkp} &amp;amp;= \left(\frac{d^2 - 4}{2d}\right)f_{ijk},\\&lt;br /&gt;
d_{piq}d_{qjr}d_{rkp} &amp;amp;= \left(\frac{d^2 - 12}{2d}\right)d_{ijk}.&lt;br /&gt;
\end{align} &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.27}}&lt;br /&gt;
The proofs of these are fairly straightforward and are omitted.&lt;br /&gt;
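For the simplest case &amp;lt;math&amp;gt;d=2\,\!&amp;lt;/math&amp;gt; (the Pauli matrices, for which &amp;lt;math&amp;gt;f_{ijk}\,\!&amp;lt;/math&amp;gt; is the Levi-Civita epsilon and &amp;lt;math&amp;gt;d_{ijk}=0\,\!&amp;lt;/math&amp;gt;), Equations (D.20), (D.21), and the trace identity of (D.25) can be spot-checked numerically; a sketch, not from the text:&lt;br /&gt;

```python
# Spot-check of (D.20), (D.21), (D.25) for d = 2: lambda_i = Pauli sigma_i,
# f_ijk = epsilon_ijk (Levi-Civita), d_ijk = 0.
import numpy as np
from itertools import permutations, product

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

eps = np.zeros((3, 3, 3))
for p, sign in zip(permutations(range(3)), (1, -1, -1, 1, 1, -1)):
    eps[p] = sign                          # the epsilon (Levi-Civita) tensor

d = 2
for i, j in product(range(3), repeat=2):
    # (D.20): Tr(lambda_i lambda_j) = 2 delta_ij
    assert np.isclose(np.trace(sig[i] @ sig[j]).real, 2 * (i == j))
    # (D.21) with d_ijk = 0: sigma_i sigma_j = (2/d) delta_ij I + i f_ijk sigma_k
    rhs = (2 / d) * (i == j) * np.eye(2) \
        + 1j * sum(eps[i, j, k] * sig[k] for k in range(3))
    assert np.allclose(sig[i] @ sig[j], rhs)

# (D.25): f_ijk f_ljk = d delta_il
assert np.allclose(np.einsum('ijk,ljk->il', eps, eps), d * np.eye(3))
```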
&lt;br /&gt;
====Tensor Products of Representations====&lt;br /&gt;
&lt;br /&gt;
When one takes the tensor product of two representations, another representation results.  In general, this representation is reducible.  &lt;br /&gt;
&lt;br /&gt;
To see this, let &amp;lt;math&amp;gt; g_1,g_2,g_3 \in \mathcal{G}\,\!&amp;lt;/math&amp;gt;.  In the tensor-product representation, a group element &amp;lt;math&amp;gt; g_1\,\!&amp;lt;/math&amp;gt; is represented by &amp;lt;math&amp;gt; g_1\otimes g_1 \in \mathcal{G}\otimes \mathcal{G} \,\!&amp;lt;/math&amp;gt;.  Certainly, when &amp;lt;math&amp;gt; g_1\cdot g_2 = g_3, \,\!&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt; (g_1\otimes g_1)\cdot (g_2 \otimes g_2) = g_3 \otimes g_3\,\!&amp;lt;/math&amp;gt;.  (See [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)  Therefore, the tensor product of two representations is another representation.  However, even if each factor is an irreducible representation, one would suspect that the tensor product is a reducible representation---this turns out to be true.  The task is to find the irreducible components.&lt;br /&gt;
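This multiplication rule is easy to verify numerically.  The sketch below (arbitrary illustrative values) uses two SU(2) elements, built from the closed form exp(-i&amp;theta;&amp;sigma;) = cos&amp;theta; I - i sin&amp;theta; &amp;sigma; for a Pauli matrix &amp;sigma;.&lt;br /&gt;

```python
# Check that g -> g (x) g respects the group multiplication law:
# (g1 (x) g1)(g2 (x) g2) = (g1 g2) (x) (g1 g2).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(theta, sigma):
    """exp(-i theta sigma) for a Pauli matrix sigma (using sigma^2 = I)."""
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sigma

g1, g2 = rot(0.4, sx), rot(1.1, sz)   # two arbitrary SU(2) elements
g3 = g1 @ g2                          # g1 . g2 = g3

lhs = np.kron(g1, g1) @ np.kron(g2, g2)
assert np.allclose(lhs, np.kron(g3, g3))
```

This is the mixed-product property of the Kronecker product, (A&amp;otimes;B)(C&amp;otimes;D) = AC&amp;otimes;BD, applied with A = B = g1 and C = D = g2.&lt;br /&gt;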
&lt;br /&gt;
One very important example of this is used for the addition of angular momenta.  Before revisiting the more general case, this important example is discussed.&lt;br /&gt;
&lt;br /&gt;
====Addition of Angular Momenta====&lt;br /&gt;
&lt;br /&gt;
In the theory of angular momenta, quantum states are labelled by their total angular momentum and the z component of their angular momentum.  Let the angular momentum be given by the vector operator &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{J} = (J_x,J_y,J_z).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.28}}&lt;br /&gt;
These operators satisfy the commutation relations&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
[J_i,J_j] = i\epsilon_{ijk}J_k,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.29}}&lt;br /&gt;
where &amp;lt;math&amp;gt;i,j,k = 1,2,\text{ or }3, \,\!&amp;lt;/math&amp;gt; and the epsilon tensor is defined in [[Appendix C - Vectors and Linear Algebra#eqC.9|Equation C.9]].  A state &amp;lt;math&amp;gt;\left| j, m\right\rangle\,\!&amp;lt;/math&amp;gt; satisfies &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
J^2\left| j, m\right\rangle = j(j+1)\hbar^2\left| j, m\right\rangle, \;\; \text{and} \;\; J_z\left| j, m\right\rangle = m\hbar\left| j, m\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.30}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
J^2 = \vec{J} \cdot \vec{J} = J_x^2 + J_y^2 + J_z^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.31}}&lt;br /&gt;
The common problem is as follows.  Given two states &amp;lt;math&amp;gt;\left| j_1, m_1\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left| j_2, m_2\right\rangle\,\!&amp;lt;/math&amp;gt;, find the total angular momentum of the combined system.  The objective is to find a new basis,  &amp;lt;math&amp;gt;\left| j, m, j_1, j_2\right\rangle\,\!&amp;lt;/math&amp;gt;, expressed in terms of the old basis.  In other words, we need to find the set of numbers &amp;lt;math&amp;gt;C(j_1,j_2,j,m|m_1,m_2,m)\,\!&amp;lt;/math&amp;gt; such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left| j, m, j_1, j_2\right\rangle = \sum_{m_1,m_2} C(j_1,j_2,j,m|m_1,m_2,m) \left| j_1, m_1\right\rangle \left| j_2, m_2\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|D.32}}&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;C(j_1,j_2,j,m|m_1,m_2,m)\,\!&amp;lt;/math&amp;gt; are called Clebsch-Gordan coefficients, or Wigner-Clebsch-Gordan coefficients.  These not only put the tensor product of the vectors into this special form, but they also block-diagonalize the tensor products of the operators.  The most common example of this is the addition of angular momentum of two spin-1/2 particles.  The result is a triplet (spin-1 representation) and a singlet (spin-0 representation).  &lt;br /&gt;
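The spin-1/2 example can be checked directly.  The numerical sketch below (with &amp;lt;math&amp;gt;\hbar = 1\,\!&amp;lt;/math&amp;gt;; not from the text) builds the total &amp;lt;math&amp;gt;J^2\,\!&amp;lt;/math&amp;gt; on the two-spin tensor-product space and diagonalizes it, exposing the triplet and singlet.&lt;br /&gt;

```python
# Two spin-1/2 particles (hbar = 1): the eigenvalues of total J^2 are
# j(j+1) = 2 (triplet, three states) and 0 (singlet, one state).
import numpy as np

# single-spin operators J_i = sigma_i / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# total J_i = J_i (x) I + I (x) J_i acting on the 4-dimensional space
J = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
J2 = sum(Ji @ Ji for Ji in J)

evals = np.sort(np.linalg.eigvalsh(J2))
assert np.allclose(evals, [0.0, 2.0, 2.0, 2.0])   # singlet + triplet
```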
&lt;br /&gt;
&lt;br /&gt;
====Concluding Remarks====&lt;br /&gt;
&lt;br /&gt;
'''To summarize''', matrix representations of a group are sets of matrices that represent the group in the sense that they follow the same multiplication law as the original group elements.  A representation may be reducible, meaning that a single similarity transformation brings all of the matrices in the set to block-diagonal form, so that each set of corresponding blocks is itself a representation of the group.  If no single similarity transformation can bring every matrix in the set to a common block-diagonal form of smaller blocks, then the representation is called irreducible.  If there is an isomorphism from the set of matrices to the original group, then the representation is faithful.&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1746</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1746"/>
		<updated>2011-11-21T16:35:13Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average - [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors, real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT)&lt;br /&gt;
:Code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], &lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]],[[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also, code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:dual matrix [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#Binary_Operations|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
:syndrome measurement [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1745</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1745"/>
		<updated>2011-11-21T16:32:27Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average - [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT) &lt;br /&gt;
:Code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:dual matrix [[Appendix F - Classical Error Correcting Codes#Parity_Check_Matrix|'''F.4.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#Binary_Operations|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 3|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 7|'''F.3.7''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1744</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1744"/>
		<updated>2011-11-21T16:27:46Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Definition 6 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  Learning the basic definitions, ideas, methods, and simple examples of classical error correcting codes makes the (slightly) more complicated quantum error correcting codes easier to understand.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article by Steane in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]], [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  This is achieved by restricting ourselves to these two numbers and adding modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication, with the two operations related by the distributive law, the set becomes a '''field''' (a Galois field), denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to treat a string of bits as a vector and to use vector addition (component-wise addition) and vector multiplication (the inner product).  For example, the sum of the vectors &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product of these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
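These mod-2 vector operations can be sketched in a few lines of Python (the function names here are ours, for illustration only):&lt;br /&gt;

```python
# Arithmetic over GF(2): both vector addition and the inner product
# are taken modulo 2.

def add_gf2(v, w):
    """Component-wise addition mod 2 of two equal-length bit tuples."""
    return tuple((a + b) % 2 for a, b in zip(v, w))

def inner_gf2(v, w):
    """Inner product mod 2 (the checksum of the next section)."""
    return sum(a * b for a, b in zip(v, w)) % 2

# The worked example from the text:
print(add_gf2((0, 0, 1), (0, 1, 1)))    # (0, 1, 0)
print(inner_gf2((0, 0, 1), (0, 1, 1)))  # 1
```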
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''': it is 0 when the two vectors have an even number of 1's in common positions, and 1 when that number is odd.  When the inner product is 0, we say that the first vector satisfies the parity check of the other, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
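Definitions 2 and 3, and the identity &amp;lt;math&amp;gt;d_H(v,w) = \mbox{wt}(v+w)\,\!&amp;lt;/math&amp;gt;, can be checked with a short Python sketch (helper names are ours):&lt;br /&gt;

```python
def weight(v):
    """Hamming weight: the number of non-zero components of v."""
    return sum(1 for a in v if a != 0)

def hamming_distance(v, w):
    """The number of positions where v and w differ."""
    return sum(1 for a, b in zip(v, w) if a != b)

# d_H(v, w) equals wt(v + w) when addition is taken mod 2:
v, w = (1, 0, 1, 1), (0, 0, 1, 0)
s = tuple((a + b) % 2 for a, b in zip(v, w))
print(hamming_distance(v, w), weight(s))  # 2 2
```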
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two distinct codewords.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
If we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred (assuming at most one error).  However, we cannot tell whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, so the error cannot be corrected.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, the distance between the two code words.&lt;br /&gt;
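The detect-but-not-correct behavior of this two-bit code can be illustrated directly (a sketch; the function name is ours):&lt;br /&gt;

```python
# The two-bit repetition code {00, 11}: a single bit flip moves a
# codeword out of the code, so it is detectable.  It is not correctable,
# because 01 is one flip away from both 00 and 11.
codewords = {(0, 0), (1, 1)}

def detect(received):
    """Return True when the received word cannot be a codeword."""
    return received not in codewords

print(detect((0, 1)))  # True: an error occurred
print(detect((1, 1)))  # False: consistent with a valid codeword
```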
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
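Decoding the three-bit code amounts to a majority vote, which corrects any single bit flip.  A minimal sketch (the function name is ours):&lt;br /&gt;

```python
# Majority-vote decoding of the three-bit repetition code
# 0_L = 000, 1_L = 111: any single bit flip is corrected.
def decode(received):
    """Decode a 3-bit word to the majority bit value."""
    return 1 if sum(received) >= 2 else 0

print(decode((0, 1, 0)))  # 0: the flip on the middle bit is corrected
print(decode((1, 1, 0)))  # 1: the flip on the last bit is corrected
```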
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because they allow errors to be identified efficiently and the corresponding correct codewords to be recovered.  This ability is due to the added structure these codes have, which is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the columns are vectors that together span the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if two columns are switched, or one column is added to another to produce a new column that replaces it, the result is still a valid generator matrix for the code, since the columns remain linearly independent and span the same space.&lt;br /&gt;
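The encoding &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt; over GF(2) can be sketched in Python; here &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is taken to be the generator of the small [3,1] repetition code (an illustrative choice, not an example from the text):

```python
# Encoding w = G v over GF(2), with G an n x k generator matrix.
G = [[1], [1], [1]]  # [3,1] repetition code: the single column spans the code

def encode(G, v):
    n, k = len(G), len(G[0])
    return tuple(sum(G[i][j] * v[j] for j in range(k)) % 2 for i in range(n))

print(encode(G, (1,)))  # (1, 1, 1)
```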
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates any code word.  To see this, recall that any code word can be written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;, so that &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Conversely, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has rank &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt;, its null space is exactly the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional code space, so &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, for the syndrome to identify the error uniquely, two distinct correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must produce different syndromes, &amp;lt;math&amp;gt;Pe_1\neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
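The computation in Eq. (F.4) can be sketched in Python using the [3,1] repetition code with an illustrative parity check matrix (this small example is an assumption of the sketch, not taken from the text):

```python
# Syndrome computation Pw' = P(w + e) = Pe over GF(2).
P = [[1, 1, 0], [0, 1, 1]]  # parity check matrix of the [3,1] repetition code

def syndrome(P, word):
    return tuple(sum(row[i] * word[i] for i in range(len(word))) % 2 for row in P)

w = (1, 1, 1)                                        # codeword
e = (0, 1, 0)                                        # single bit flip
received = tuple((a + b) % 2 for a, b in zip(w, e))
print(syndrome(P, w))         # (0, 0): codewords are annihilated
print(syndrome(P, received))  # (1, 1): equals P e, independent of the codeword
```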
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is called the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  It also does it in such a way that one error can be detected and corrected since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; mutually orthogonal vectors that are also orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix.  Only the codewords are annihilated by the parity check matrix.&lt;br /&gt;
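As a quick numerical check, the matrices of Eqs. (F.6) and (F.7) can be verified to satisfy &amp;lt;math&amp;gt;PG=0\,\!&amp;lt;/math&amp;gt; over GF(2); a short Python sketch (not part of the original text):

```python
# Verify P G = 0 (mod 2) for the [7,4,3] Hamming code matrices.
GT = [[1, 0, 0, 0, 1, 1, 0],
      [0, 1, 0, 0, 1, 1, 1],
      [0, 0, 1, 0, 1, 0, 1],
      [0, 0, 0, 1, 0, 1, 1]]
P = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
# Rows of G^T are the columns of G, so P G = 0 means every row of P is
# orthogonal (mod 2) to every row of G^T.
ok = all(sum(a * b for a, b in zip(p, g)) % 2 == 0 for p in P for g in GT)
print(ok)  # True
```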
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each error vector should be associated to a particular vector in the code space when the error is added to the original code word.  This partitions the set into disjoint subsets, with each containing only one code vector.  A message is decoded correctly if the vector (the one containing the error) is in the subset that is associated with the original vector (the one with no error).  For example, if one vector is sent, say &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;, and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then this vector must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
One way to decode is to build an array from the possible code words, the possible errors, and their combinations.  The top row lists the code word vectors and the leftmost column lists the errors, with the entry in the first row and first column being the zero vector.  The entry in row &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; and column &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; is then the sum of the code word at the top of column &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; and the error in row &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;.  Each column of this array is one of the disjoint subsets.  Finding the received (erred) word in a column associates it with the code word at the top of that column, and thus corrects the error.&lt;br /&gt;
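This array-based decoding can be sketched in Python for the [3,1] repetition code (the choice of code and correctable error set is illustrative, not from the text):

```python
# Standard-array (coset) decoding sketch for the [3,1] repetition code.
codewords = [(0, 0, 0), (1, 1, 1)]
errors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # weight <= 1

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

# Each "column" of the array is the disjoint subset {c + e} for one codeword c;
# mapping every entry back to c implements the decoding.
decode_table = {add(c, e): c for c in codewords for e in errors}

print(decode_table[(1, 0, 1)])  # (1, 1, 1): the received word is corrected
```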
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound is a bound that restricts the rate of the code.  Due to the disjointness condition, a certain number of bits are required to ensure our ability to detect and correct errors.  Suppose there is a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bit vectors for encoding &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  There is a set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; that has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors, including errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is included in this count.  The objective is to design a code that can correct all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[#LoPopescueSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log x -(1-x)\log (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
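The bound of Eq. (F.9) can be checked numerically for the [7,4,3] Hamming code of Example 3; a short Python sketch (not part of the original text):

```python
from math import comb

# Hamming bound, Eq. (F.9), for the [7,4,3] Hamming code (t = 1):
n, k, t = 7, 4, 1
lhs = 2**k * sum(comb(n, i) for i in range(t + 1))
print(lhs, 2**n)  # 128 128: the bound is met with equality (a "perfect" code)
```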
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a nonzero vector can be orthogonal to itself (any vector of even weight is).  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
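A one-line Python check of this self-orthogonality (illustrative, not from the text):

```python
# Over GF(2), any vector of even weight is orthogonal to itself.
v = (1, 1, 0)
print(sum(a * a for a in v) % 2)  # 0: v . v = 0 although v is nonzero
```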
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  This does not indicate whether or not codes that satisfy these bounds exist, but it does tell us that no codes exist that do not satisfy these bounds.  Encoding, decoding, error detection and correction are all difficult problems to solve in general.  One of the advantages of the linear codes is that they provide a systematic method for identifying errors on a code through the use of the parity check operation.  More generally, checking to see whether or not a bit string (vector) is in the code space would require a look-up table.  This would be much more time-consuming than using the parity check matrix; matrix multiplication is quite efficient relative to the look-up table.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, will have quantum analogues---as do many quantum error correcting codes.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1743</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1743"/>
		<updated>2011-11-21T16:26:48Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Definition 6 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  The way this is achieved is by deciding that we will only use these two numbers in our language and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication and these two operations follow the distributive law, the set becomes a '''field''' (a Galois Field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to consider the string of bits to be a vector and to use vector addition (which is component-wise addition) and vector multiplication (which is the inner product).  For example, the addition of the vector &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
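These GF(2) vector operations can be reproduced in a few lines of Python (an illustration using the same numbers as the text):

```python
# GF(2) vector addition (component-wise, mod 2) and inner product.
u, v = (0, 0, 1), (0, 1, 1)
print(tuple((a + b) % 2 for a, b in zip(u, v)))  # (0, 1, 0)
print(sum(a * b for a, b in zip(u, v)) % 2)      # 1
```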
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''', since it indicates whether one vector has an even number of 1's at the positions specified by the 1's of the other vector.  We say that the first vector satisfies the parity check of the other vector, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
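The identity &amp;lt;math&amp;gt;d_H(v,w)=\,\!&amp;lt;/math&amp;gt;wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;) is easy to verify in Python (an illustrative sketch with hypothetical vectors, not from the text):

```python
# Hamming distance equals the weight of the sum: d_H(v, w) = wt(v + w).
def wt(v):
    return sum(1 for x in v if x != 0)          # Hamming weight

def d_H(v, w):
    return sum(1 for a, b in zip(v, w) if a != b)  # Hamming distance

v, w = (1, 0, 1, 1), (1, 1, 0, 1)
s = tuple((a + b) % 2 for a, b in zip(v, w))    # v + w over GF(2)
print(d_H(v, w), wt(s))  # 2 2
```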
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
If we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single-bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we do not know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;; provided at most one error occurred, we can be certain only that an error occurred, not which one.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, out of the four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate of a code''' is given by the ratio of the number of logical bits to the number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because they allow errors to be identified efficiently and the corresponding correct codewords to be recovered.  This ability is due to the added structure these codes have, which is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the columns are vectors that together span the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if two columns are switched, or one column is added to another to produce a new column that replaces it, the result is still a valid generator matrix for the code, since the columns remain linearly independent and span the same space.&lt;br /&gt;
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates any code word.  To see this, recall that any code word can be written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;, so that &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Conversely, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has rank &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt;, its null space is exactly the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional code space, so &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, for the syndrome to identify the error uniquely, two distinct correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must produce different syndromes, &amp;lt;math&amp;gt;Pe_1\neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is called the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  It also does it in such a way that one error can be detected and corrected since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; mutually orthogonal vectors that are also orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix: the codewords are exactly the vectors annihilated by it.&lt;br /&gt;
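The relations above can be verified numerically. The following sketch (Python; helper names are hypothetical) enumerates all sixteen codewords generated by Eq.(F.6) and checks that each is annihilated by the parity check matrix of Eq.(F.7), and that the minimum weight of a nonzero codeword is indeed 3:

```python
from itertools import product

# Generator matrix G^T of the [7,4,3] Hamming code (Eq. F.6).
GT = [[1, 0, 0, 0, 1, 1, 0],
      [0, 1, 0, 0, 1, 1, 1],
      [0, 0, 1, 0, 1, 0, 1],
      [0, 0, 0, 1, 0, 1, 1]]

# Parity check matrix P (Eq. F.7).
P = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def mat_vec_mod2(M, v):
    # Matrix-vector product over GF(2).
    return [sum(row[i] * v[i] for i in range(len(v))) % 2 for row in M]

def encode(msg):
    # codeword = msg * G^T over GF(2); msg has 4 bits, codeword has 7.
    return [sum(msg[j] * GT[j][i] for j in range(4)) % 2 for i in range(7)]

codewords = [encode(list(m)) for m in product([0, 1], repeat=4)]

# Every codeword is annihilated by the parity check matrix...
assert all(mat_vec_mod2(P, c) == [0, 0, 0] for c in codewords)

# ...and for a linear code the minimum nonzero weight equals the distance.
weights = [sum(c) for c in codewords if any(c)]
assert min(weights) == 3
```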
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each vector obtained by adding an error to a code word should be associated with that code word.  This partitions the set of all &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit vectors into disjoint subsets, each containing exactly one code vector.  A message is decoded correctly if the received vector (the one containing the error) is in the subset associated with the original vector (the one with no error).  For example, if the vector &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt; is sent and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; v_2 \,\!&amp;lt;/math&amp;gt; must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
One way to decode is to record an array (a '''standard array''') of the possible code words, the possible errors, and their combinations.  The top row of the array lists the code word vectors and the leftmost column lists the errors, with the top-left entry being the zero vector and all subsequent entries in that column being errors.  The code word at the top of the jth column is then added to the error in the kth row to obtain the (j,k) entry of the array.  Each column is thus one of the disjoint subsets: locating a corrupted code word in a column associates it with the code word heading that column, which corrects the error.&lt;br /&gt;
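The array-based decoding just described can be sketched as a lookup table. This is a minimal illustration (Python) using a hypothetical [3,1,3] repetition code rather than the Hamming code, to keep the table small:

```python
# A minimal standard-array decoder for the [3,1,3] repetition code.
codewords = [[0, 0, 0], [1, 1, 1]]

# Correctable errors: the zero vector and all single-bit flips.
errors = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]

def add_mod2(u, v):
    return [(a + b) % 2 for a, b in zip(u, v)]

# Each column of the array is the disjoint subset associated with
# one codeword; store it as a map from received vector to codeword.
table = {}
for c in codewords:
    for e in errors:
        table[tuple(add_mod2(c, e))] = c

def decode(received):
    # Look up which subset the received vector falls into.
    return table[tuple(received)]

# A single bit flip on (1,1,1) is corrected back to (1,1,1).
assert decode([1, 0, 1]) == [1, 1, 1]
```

For this code the 2 codewords times 4 errors fill all 8 vectors of the 3-bit space exactly once, so the subsets are disjoint as Eq.(F.5) requires.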
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound restricts the rate of a code: due to the disjointness condition, a certain number of bits is required to ensure the ability to detect and correct errors.  Suppose a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt;-bit vectors is used to encode &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  The set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, i.e. no error, is counted among the error vectors: a code designed to correct all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; must also handle the case of no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[Bibliography#LoPopescuSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log x -(1-x)\log (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article by Steane in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].)&lt;br /&gt;
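A quick numeric check of the bound, Eq.(F.9), for the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code (a sketch in Python; note that codes meeting the bound with equality, as this one does, are conventionally called perfect codes):

```python
from math import comb

# The [7,4,3] Hamming code: n = 7 bits encode k = 4, correcting t = 1 error.
n, k, t = 7, 4, 1

# Number of correctable error vectors of weight up to t (including no error).
sphere = sum(comb(n, i) for i in range(t + 1))

# Hamming bound, Eq. (F.9): here 16 * 8 = 128 = 2^7, so the bound
# is saturated and the code is "perfect".
assert 2**k * sphere == 2**n
```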
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a nonzero vector can be orthogonal to itself; for example, &amp;lt;math&amp;gt;(1,1)\cdot (1,1) = 2 = 0 \bmod 2\,\!&amp;lt;/math&amp;gt;.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
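The dual code of a small code can be found by brute force. This sketch (Python; the example code is the hypothetical [3,1] repetition code, not one from the text) collects every vector orthogonal mod 2 to all codewords:

```python
from itertools import product

# The [3,1] repetition code C = {000, 111}.
C = [(0, 0, 0), (1, 1, 1)]

def dot_mod2(u, v):
    # Binary inner product.
    return sum(a * b for a, b in zip(u, v)) % 2

# The dual code: all 3-bit vectors orthogonal (mod 2) to every codeword of C.
dual = [u for u in product([0, 1], repeat=3)
        if all(dot_mod2(u, v) == 0 for v in C)]

# The dual is the set of even-weight vectors; note that (1,1,0) is
# orthogonal to itself, as binary vectors can be.
assert sorted(dual) == [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```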
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not indicate whether codes saturating it exist, but it does rule out any code that violates it.  Encoding, decoding, error detection, and correction are all difficult problems to solve in general.  One advantage of linear codes is that they provide a systematic method for identifying errors through the parity check operation.  Without this structure, checking whether a bit string (vector) is in the code space would require a look-up table, which is much more time-consuming than applying the parity check matrix; matrix multiplication is quite efficient relative to a table look-up.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues.  In quantum computers, as will be discussed there, error correction is necessary due to the delicacy of quantum information.&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1742</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1742"/>
		<updated>2011-11-21T16:23:57Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average - [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT) &lt;br /&gt;
:Code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], &lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]],[[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#field|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1741</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1741"/>
		<updated>2011-11-21T16:21:05Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT)&lt;br /&gt;
:code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#field|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 3|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1740</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1740"/>
		<updated>2011-11-21T16:18:56Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT)&lt;br /&gt;
:code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:field [[Appendix F - Classical Error Correcting Codes#field|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 3|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1739</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1739"/>
		<updated>2011-11-21T16:16:53Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Binary Operations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article by Steane in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]], [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  The way this is achieved is by deciding that we will only use these two numbers in our language and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication and these two operations follow the distributive law, the set becomes a '''field''' (a Galois Field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to consider the string of bits to be a vector and to use vector addition (which is component-wise addition) and vector multiplication (which is the inner product).  For example, the addition of the vector &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
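The GF(2) vector operations above can be sketched in a few lines of Python; the function names here are illustrative, not from the text.

```python
def gf2_add(v, w):
    """Component-wise addition modulo 2."""
    return [(a + b) % 2 for a, b in zip(v, w)]

def gf2_dot(v, w):
    """Inner product modulo 2."""
    return sum(a * b for a, b in zip(v, w)) % 2

# The worked example from the text:
print(gf2_add([0, 0, 1], [0, 1, 1]))  # [0, 1, 0]
print(gf2_dot([0, 0, 1], [0, 1, 1]))  # 1
```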
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''', since it indicates whether a vector has an even or odd number of 1's at the positions specified by the 1's in the other vector.  When the inner product is zero, we say that the first vector satisfies the parity check of the other vector, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
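Definitions 2 and 3 translate directly into code; the following short sketch (names ours) also checks the stated identity between the Hamming distance and the weight of the mod-2 sum.

```python
def wt(v):
    """Hamming weight: the number of non-zero components."""
    return sum(1 for x in v if x != 0)

def d_H(v, w):
    """Hamming distance: the number of places where v and w differ."""
    return sum(1 for a, b in zip(v, w) if a != b)

v, w = [1, 0, 1, 1], [0, 0, 1, 0]
# For binary vectors, d_H(v, w) equals wt(v + w) with addition mod 2.
assert d_H(v, w) == wt([(a + b) % 2 for a, b in zip(v, w)])
```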
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two distinct codewords in the code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single-bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;; we only know that an error has occurred, provided at most one error occurred.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the total number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because they can efficiently identify errors and the associated correct codewords.  This ability is due to the added algebraic structure these codes have, which is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the columns are vectors that span the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if columns are switched, or added together to produce a new vector that replaces a column, the resulting matrix still generates the same code, since the columns remain linearly independent under these operations.&lt;br /&gt;
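As a minimal sketch of encoding with a generator matrix, take the three-bit repetition code of Example 2, whose generator matrix can be taken as a single column of ones; the helper name below is illustrative.

```python
def encode(G, v):
    """Compute w = G v over GF(2); G is given as a list of n rows of length k."""
    return [sum(g * x for g, x in zip(row, v)) % 2 for row in G]

# Generator matrix for the three-bit repetition code (an [3,1] code):
G = [[1], [1], [1]]
print(encode(G, [0]))  # [0, 0, 0] -> logical zero
print(encode(G, [1]))  # [1, 1, 1] -> logical one
```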
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  The rank of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has the property that it annihilates any code word.  To see this, recall that any code word can be written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Also, due to the rank of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, it can be shown that &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''' and the measurement to identify &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independent of the code word.  However, for the syndrome to identify the error uniquely, two different correctable errors, &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt;, must give different syndromes, &amp;lt;math&amp;gt;Pe_1 \neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is possible if a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code is constructed such that any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for any two distinct code words &amp;lt;math&amp;gt;w_1, w_2 \,\!&amp;lt;/math&amp;gt; and any two correctable errors &amp;lt;math&amp;gt;e_1, e_2 \,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  It does so in such a way that one error can be detected and corrected, since the code has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; linearly independent vectors that are orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix.  Only the codewords are annihilated by the parity check matrix.&lt;br /&gt;
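The matrices of Eqs. (F.6) and (F.7) can be checked numerically. The sketch below (helper names ours, arithmetic mod 2) verifies that the parity check matrix annihilates codewords, and locates a single bit flip by matching its syndrome to a column of the parity check matrix.

```python
# Rows of G^T from Eq. (F.6), i.e. the columns of G; P from Eq. (F.7).
GT = [[1, 0, 0, 0, 1, 1, 0],
      [0, 1, 0, 0, 1, 1, 1],
      [0, 0, 1, 0, 1, 0, 1],
      [0, 0, 0, 1, 0, 1, 1]]
P  = [[1, 1, 1, 0, 1, 0, 0],
      [1, 1, 0, 1, 0, 1, 0],
      [0, 1, 1, 1, 0, 0, 1]]

def matvec(M, v):
    """Matrix-vector product over GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

def encode(msg):
    """w = G v: sum (mod 2) of the columns of G selected by the message bits."""
    return [sum(GT[i][j] * msg[i] for i in range(4)) % 2 for j in range(7)]

def correct(word):
    """Correct a single bit flip by matching the syndrome to a column of P."""
    s = matvec(P, word)
    if any(s):
        cols = [[P[r][c] for r in range(3)] for c in range(7)]
        i = cols.index(s)          # position of the flipped bit
        word = word[:]
        word[i] ^= 1
    return word

w = encode([1, 0, 1, 1])
assert all(x == 0 for x in matvec(P, w))   # codewords are annihilated by P
w_err = w[:]
w_err[2] ^= 1                              # introduce a single bit flip
assert correct(w_err) == w                 # the flip is found and corrected
```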
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each error vector should be associated to a particular vector in the code space when the error is added to the original code word.  This partitions the set into disjoint subsets, with each containing only one code vector.  A message is decoded correctly if the vector (the one containing the error) is in the subset that is associated with the original vector (the one with no error).  For example, if one vector is sent, say &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;, and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then this vector must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A way to decode is to build an array of the code words, the possible errors, and their sums.  The top row lists the code word vectors and the leftmost column lists the errors, with the entry in the first row and first column being the zero vector.  The entry in the jth column and kth row is then the sum of the code word at the top of that column and the error at the left of that row.  Each column of this array is a subset that is disjoint from the others, and each is associated with a single code word.  Identifying the erred word within a column associates it with that column's code word and thus corrects the error.&lt;br /&gt;
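The array-based decoding just described can be sketched for the three-bit repetition code; all names below are illustrative, and the errors listed are those of weight at most 1.

```python
codewords = [(0, 0, 0), (1, 1, 1)]
errors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

# Build the array: one row per error, one column per codeword.
array = {e: [tuple((a + b) % 2 for a, b in zip(c, e)) for c in codewords]
         for e in errors}

def decode(received):
    """Find the column (codeword) whose disjoint subset contains the word."""
    for e in errors:
        for j, word in enumerate(array[e]):
            if word == received:
                return codewords[j]
    raise ValueError("uncorrectable word")

print(decode((0, 1, 0)))  # corrected to (0, 0, 0)
```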
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound is a bound that restricts the rate of the code.  Due to the disjointness condition, a certain number of bits are required to ensure our ability to detect and correct errors.  Suppose there is a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bit vectors for encoding &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  There is a set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; that has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors, including errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is also counted: the objective is to design a code that corrects all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[#LoPopescueSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log_2 x -(1-x)\log_2 (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a vector can be orthogonal to itself; for example, &amp;lt;math&amp;gt;(1,1)\cdot (1,1) = 1+1 = 0\,\!&amp;lt;/math&amp;gt;.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not guarantee that codes saturating it exist, but it does tell us that no code can violate it.  Encoding, decoding, error detection, and error correction are all difficult problems to solve in general.  One of the advantages of linear codes is that they provide a systematic method for identifying errors through the parity check operation.  More generally, checking whether or not a bit string (vector) is in the code space would require a look-up table, which is far more time-consuming: matrix multiplication is quite efficient relative to a table look-up.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues.  As will be discussed there, error correction is necessary in quantum computers due to the delicacy of quantum information.&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1738</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1738"/>
		<updated>2011-11-21T16:15:50Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Binary Operations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  This is achieved by restricting ourselves to these two numbers and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication, with the two operations obeying the distributive law &amp;lt;math&amp;gt; X(Y+Z)=XY+XZ &amp;lt;/math&amp;gt;, the set becomes a '''field''' (a Galois field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to treat a string of bits as a vector and to use vector addition (component-wise addition) and vector multiplication (the inner product).  For example, the addition of the vectors &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product of these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
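As a concrete illustration (a sketch in Python, not part of the original appendix), the GF(2) vector operations above can be written as:&lt;br /&gt;

```python
# GF(2) vector arithmetic: component-wise addition mod 2 and the inner
# product mod 2, matching the worked example in the text.

def gf2_add(v, w):
    # Component-wise addition modulo 2.
    return tuple((a + b) % 2 for a, b in zip(v, w))

def gf2_dot(v, w):
    # Inner product modulo 2 (the parity check).
    return sum(a * b for a, b in zip(v, w)) % 2

v, w = (0, 0, 1), (0, 1, 1)
print(gf2_add(v, w))  # (0, 1, 0)
print(gf2_dot(v, w))  # 1
```
&lt;br /&gt;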
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''', since it indicates whether one vector has an even or odd number of 1's at the positions specified by the 1's of the other vector.  When the inner product is zero, we say that the first vector satisfies the parity check of the other, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
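Definitions 2 and 3, together with the identity &amp;lt;math&amp;gt;d_H(v,w) = \mbox{wt}(v+w)\,\!&amp;lt;/math&amp;gt;, can be checked with a short sketch (Python, illustrative only):&lt;br /&gt;

```python
# Hamming weight and Hamming distance, with the identity d_H(v, w) = wt(v + w).

def wt(v):
    # Number of non-zero components.
    return sum(1 for a in v if a != 0)

def d_hamming(v, w):
    # Number of positions where v and w differ.
    return sum(1 for a, b in zip(v, w) if a != b)

v, w = (1, 0, 1, 1), (1, 1, 0, 1)
v_plus_w = tuple((a + b) % 2 for a, b in zip(v, w))
assert d_hamming(v, w) == wt(v_plus_w)  # the identity from Definition 3
print(d_hamming(v, w))  # 2
```
&lt;br /&gt;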
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two distinct vectors in the code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single-bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, so the error cannot be corrected, though it is detected (provided only a single error occurred).  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
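A minimal decoding sketch for this three-bit code (Python, for illustration): a single flipped bit is outvoted by the other two copies.&lt;br /&gt;

```python
# Three-bit repetition code 0_L = 000, 1_L = 111 with majority-vote decoding.

def encode(bit):
    return (bit, bit, bit)

def decode(word):
    # Majority vote: a 1 is declared when two or three components are 1.
    return 1 if sum(word) in (2, 3) else 0

# Any single bit-flip error is corrected, since the minimum distance is 3.
for pos in range(3):
    word = list(encode(1))
    word[pos] ^= 1  # flip one bit
    assert decode(tuple(word)) == 1
print(decode((0, 1, 0)))  # 0
```
&lt;br /&gt;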
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because errors, and the associated correct codewords, can be identified efficiently.  This ability is due to the added structure these codes have, and will be discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix whose columns form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space; in other words, the columns span the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if columns are swapped, or added together (mod 2) to produce a new vector that replaces a column, the resulting matrix still generates the code, since these operations preserve the linear independence of the columns.&lt;br /&gt;
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates any code word.  To see this, recall any code word is written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Also, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has full rank, it can be shown that &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The syndrome depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, for the syndrome to identify the error uniquely, two different correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must give different results, &amp;lt;math&amp;gt;Pe_1 \neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance-&amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits, and since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;, a single error can be detected and corrected.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; mutually orthogonal vectors that are also orthogonal to the code space defined by the generator matrix.  Alternatively, one could start from the parity check matrix and derive the generator matrix from it.  A systematic method for relating the two can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]]: one first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix; the parity check matrix is then &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one arrives at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix, since only the codewords are annihilated by it.&lt;br /&gt;
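Putting Eqs. (F.6) and (F.7) to work, the following sketch (Python, illustrative only) encodes a 4-bit message and locates a single flipped bit by matching the syndrome against the columns of the parity check matrix:&lt;br /&gt;

```python
# The [7,4,3] Hamming code of Eqs. (F.6)-(F.7): encoding via G^T and
# single-error correction via the syndrome Pe.

GT = [  # rows of G^T; the codeword is the mod-2 sum of rows picked by v
    (1, 0, 0, 0, 1, 1, 0),
    (0, 1, 0, 0, 1, 1, 1),
    (0, 0, 1, 0, 1, 0, 1),
    (0, 0, 0, 1, 0, 1, 1),
]
P = [  # parity check matrix, Eq. (F.7)
    (1, 1, 1, 0, 1, 0, 0),
    (1, 1, 0, 1, 0, 1, 0),
    (0, 1, 1, 1, 0, 0, 1),
]

def encode(v):
    # Sum (mod 2) the rows of G^T selected by the message bits.
    return tuple(sum(v[i] * GT[i][j] for i in range(4)) % 2 for j in range(7))

def syndrome(w):
    # Pw; equals Pe for a corrupted word w = codeword + e.
    return tuple(sum(row[j] * w[j] for j in range(7)) % 2 for row in P)

w = encode((1, 0, 1, 1))
assert syndrome(w) == (0, 0, 0)   # codewords are annihilated by P
bad = list(w)
bad[2] ^= 1                       # flip bit 2
s = syndrome(tuple(bad))
cols = [tuple(row[j] for row in P) for j in range(7)]
print(cols.index(s))  # 2: the syndrome matches column 2 of P, locating the flip
```
&lt;br /&gt;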
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each error vector, when added to the original code word, should be associated with a particular vector in the code space.  This partitions the set of all vectors into disjoint subsets, each containing exactly one code vector.  A message is decoded correctly if the received vector (the one containing the error) lies in the subset associated with the original vector (the one with no error).  For example, if the vector &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt; is sent and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; v_2 \,\!&amp;lt;/math&amp;gt; must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
One way to decode is to build an array of the possible code words, the possible errors, and their combinations.  The array has the code word vectors as its top row and the correctable errors in its leftmost column, with the entry in the first row and first column being the zero vector.  The code word at the top of the jth column is then added to the error in the kth row to give the (k,j) entry of the array.  Each column of this array is a subset disjoint from the others.  Locating a received word in a column associates it with the code word at the top of that column, which corrects the error.&lt;br /&gt;
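The array just described can be sketched for the three-bit repetition code (Python, illustrative only); here each column collects one codeword plus the correctable errors, and the columns are disjoint:&lt;br /&gt;

```python
# Standard-array decoding for the [3,1] repetition code: each codeword's
# column collects the codeword plus every correctable (weight 0 or 1) error.

codewords = [(0, 0, 0), (1, 1, 1)]
errors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def add(v, w):
    return tuple((a + b) % 2 for a, b in zip(v, w))

columns = [{add(c, e) for e in errors} for c in codewords]
assert columns[0].isdisjoint(columns[1])  # the disjointness condition

def decode(word):
    # Return the codeword heading the column that contains the received word.
    for c, col in zip(codewords, columns):
        if word in col:
            return c

print(decode((0, 1, 0)))  # (0, 0, 0)
```
&lt;br /&gt;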
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound restricts the rate of a code: due to the disjointness condition, a certain number of bits is required to ensure our ability to detect and correct errors.  Suppose vectors of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  The set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is also counted: the objective is to design a code that corrects all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[#LoPopescueSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log_2 x -(1-x)\log_2 (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
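As a numerical check (Python sketch, not from the original text), the [7,4,3] Hamming code with &amp;lt;math&amp;gt;t=1\,\!&amp;lt;/math&amp;gt; saturates Eq. (F.9); codes that meet the bound with equality are known as perfect codes.&lt;br /&gt;

```python
# Check the Hamming bound, Eq. (F.9): compare 2^k * sum_{i=0}^{t} C(n,i)
# against 2^n.
from math import comb

def hamming_bound_sides(n, k, t):
    lhs = 2**k * sum(comb(n, i) for i in range(t + 1))
    return lhs, 2**n

print(hamming_bound_sides(7, 4, 1))  # (128, 128): equality for the Hamming code
print(hamming_bound_sides(3, 1, 1))  # (8, 8): the repetition code is also perfect
```
&lt;br /&gt;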
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a vector can be orthogonal to itself; for example, &amp;lt;math&amp;gt;(1,1)\cdot (1,1) = 1+1 = 0\,\!&amp;lt;/math&amp;gt;.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
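A brute-force sketch of the definition (Python, illustrative only): the dual of the [3,1] repetition code is the set of all even-weight 3-bit vectors.&lt;br /&gt;

```python
# Dual code by exhaustive search: all n-bit vectors with zero inner product
# (mod 2) against every codeword of C.
from itertools import product

def dual(code, n):
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % 2
    return [u for u in product((0, 1), repeat=n)
            if all(dot(u, v) == 0 for v in code)]

rep = [(0, 0, 0), (1, 1, 1)]
print(dual(rep, 3))  # the four even-weight vectors; note each is self-orthogonal
```
&lt;br /&gt;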
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not guarantee that codes saturating it exist, but it does tell us that no code can violate it.  Encoding, decoding, error detection, and error correction are all difficult problems to solve in general.  One of the advantages of linear codes is that they provide a systematic method for identifying errors through the parity check operation.  More generally, checking whether or not a bit string (vector) is in the code space would require a look-up table, which is far more time-consuming: matrix multiplication is quite efficient relative to a table look-up.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues.  As will be discussed there, error correction is necessary in quantum computers due to the delicacy of quantum information.&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1737</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1737"/>
		<updated>2011-11-21T16:14:56Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Binary Operations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  This is achieved by restricting ourselves to these two numbers and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication, with the two operations obeying the distributive law &amp;lt;math&amp;gt; x(y+z)=xy+xz &amp;lt;/math&amp;gt;, the set becomes a '''field''' (a Galois field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to treat a string of bits as a vector and to use vector addition (component-wise addition) and vector multiplication (the inner product).  For example, the addition of the vectors &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product of these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''', since it indicates whether one vector has an even or odd number of 1's at the positions specified by the 1's of the other vector.  When the inner product is zero, we say that the first vector satisfies the parity check of the other, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
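The relation between weight and distance in Definitions 2 and 3 can be illustrated with a short sketch (illustrative code; the helper names wt and d_H simply mirror the notation above):

```python
def wt(v):
    """Hamming weight: number of non-zero components."""
    return sum(1 for x in v if x != 0)

def d_H(v, w):
    """Hamming distance: number of positions where v and w differ."""
    return sum(1 for a, b in zip(v, w) if a != b)

v, w = (1, 0, 1, 1), (1, 1, 0, 1)
# For binary vectors, d_H(v, w) equals wt(v + w) under mod-2 addition.
vw = tuple((a + b) % 2 for a, b in zip(v, w))
print(d_H(v, w), wt(vw))  # 2 2
```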
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;.  We can, however, be sure that an error occurred, provided at most one bit was flipped.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because their added structure makes it possible to identify errors, and the associated correct codewords, efficiently.  This structure is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the columns form a basis that spans the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; described by a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique.  If columns are switched, or added together to produce a new vector that replaces a column, the generator matrix is still valid for the code, since the columns remain linearly independent under these operations.&lt;br /&gt;
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n- k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates any code word.  To see this, recall any code word is written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Also, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has full rank, it can be shown that &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''' and the measurement to identify &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, in order for the syndrome to identify the error uniquely, two different correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must not give equal results &amp;lt;math&amp;gt;Pe_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since then any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
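As a small illustration of Eq. (F.4), the following sketch computes syndromes for the [3,1] repetition code of Example 2. (The matrix P shown is one valid parity check matrix for that code; the helper name is hypothetical.)

```python
def matvec_mod2(M, v):
    """Matrix-vector product over GF(2)."""
    return tuple(sum(m * x for m, x in zip(row, v)) % 2 for row in M)

P = [(1, 1, 0),
     (0, 1, 1)]          # a parity check matrix for the [3,1] repetition code

w = (1, 1, 1)            # code word 1_L
e = (0, 1, 0)            # bit flip on the middle bit
w_err = tuple((a + b) % 2 for a, b in zip(w, e))

# The syndrome depends only on the error, not on the code word: P(w+e) = Pe.
print(matvec_mod2(P, w))      # (0, 0): code words are annihilated
print(matvec_mod2(P, w_err))  # (1, 1)
print(matvec_mod2(P, e))      # (1, 1): same syndrome
```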
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is called the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  It also does it in such a way that one error can be detected and corrected since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; linearly independent vectors that are orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix.  Only the codewords are annihilated by the parity check matrix.&lt;br /&gt;
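The matrices in Eqs. (F.6) and (F.7) can be checked numerically. The sketch below (illustrative code with hypothetical helper names) encodes a 4-bit message, verifies that code words are annihilated by P, and shows that the syndrome of a single bit flip equals the corresponding column of P, which locates the error:

```python
GT = [(1, 0, 0, 0, 1, 1, 0),
      (0, 1, 0, 0, 1, 1, 1),
      (0, 0, 1, 0, 1, 0, 1),
      (0, 0, 0, 1, 0, 1, 1)]   # rows of G^T span the code space, Eq. (F.6)

P  = [(1, 1, 1, 0, 1, 0, 0),
      (1, 1, 0, 1, 0, 1, 0),
      (0, 1, 1, 1, 0, 0, 1)]   # parity check matrix, Eq. (F.7)

def encode(v):
    """w = Gv: mod-2 combination of the basis code words selected by v."""
    w = [0] * 7
    for bit, row in zip(v, GT):
        if bit:
            w = [(a + b) % 2 for a, b in zip(w, row)]
    return tuple(w)

def syndrome(w):
    """Pw over GF(2)."""
    return tuple(sum(p * x for p, x in zip(row, w)) % 2 for row in P)

w = encode((1, 0, 1, 1))
assert syndrome(w) == (0, 0, 0)      # code words are annihilated by P

w_err = list(w); w_err[4] ^= 1       # flip bit 4 (0-indexed)
s = syndrome(w_err)
cols = [tuple(row[j] for row in P) for j in range(7)]
print(cols.index(s))                 # 4: the syndrome locates the flipped bit
```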
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each vector obtained by adding such an error to a code word should be associated with that code word.  This partitions the set into disjoint subsets, each containing only one code vector.  A message is decoded correctly if the vector (the one containing the error) is in the subset associated with the original vector (the one with no error).  For example, if one vector is sent, say &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;, and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then this vector must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A way to decode is to record an array (a '''standard array''') of the possible code words, the possible errors, and the combinations of those errors and code words.  The top row of the array lists the code words and the leftmost column lists the errors, with the entry in the first row and first column being the zero vector.  The entry in the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th column and &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th row is the sum of the code word at the top of the &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;th column and the error in the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;th row.  Each column is then a subset disjoint from the others.  Identifying the column containing a received (erred) word associates it with the code word at the top of that column and thus corrects the error.&lt;br /&gt;
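For the [3,1] repetition code the array just described is small enough to build explicitly. This sketch (illustrative, not from the text) maps each received word to the code word heading its column:

```python
codewords = [(0, 0, 0), (1, 1, 1)]
errors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # weight at most 1

# table maps a received word to the code word heading its column.
table = {}
for c in codewords:
    for e in errors:
        received = tuple((a + b) % 2 for a, b in zip(c, e))
        table[received] = c

print(table[(0, 1, 0)])  # (0, 0, 0): corrected back to logical zero
print(table[(1, 0, 1)])  # (1, 1, 1)
print(len(table))        # 8: every 3-bit word falls in exactly one column
```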
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound is a bound that restricts the rate of the code.  Due to the disjointness condition, a certain number of bits are required to ensure our ability to detect and correct errors.  Suppose there is a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bit vectors for encoding &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  There is a set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; that has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors, including errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is counted as part of the set of error vectors.  The objective is to design a code that can correct all errors up to those of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[#LoPopescueSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log x -(1-x)\log (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
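As a concrete check of Eq. (F.9): the [7,4,3] Hamming code above, with t = 1, saturates the bound with equality (it is in fact a so-called perfect code).

```python
from math import comb

# Hamming bound, Eq. (F.9): 2^k * sum_{i=0}^{t} C(n, i) bounded above by 2^n,
# evaluated here for the [7,4,3] Hamming code (n=7, k=4, t=1).
n, k, t = 7, 4, 1
lhs = 2**k * sum(comb(n, i) for i in range(t + 1))
print(lhs, 2**n)  # 128 128: the bound holds with equality
```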
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a vector can be orthogonal to itself.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
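A short sketch (illustrative code) computes the dual of the [3,1] repetition code of Example 2 by brute force, and exhibits a self-orthogonal vector:

```python
from itertools import product

C = [(0, 0, 0), (1, 1, 1)]  # the [3,1] repetition code

def dot_mod2(u, v):
    """Mod-2 inner product."""
    return sum(a * b for a, b in zip(u, v)) % 2

# Dual code: all vectors orthogonal (mod 2) to every code word.
dual = [u for u in product((0, 1), repeat=3)
        if all(dot_mod2(u, v) == 0 for v in C)]
print(dual)  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Over GF(2) a non-zero vector can be orthogonal to itself:
print(dot_mod2((1, 1, 0), (1, 1, 0)))  # 0
```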
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not indicate whether or not codes that satisfy it exist, but it does tell us that no code can violate it.  Encoding, decoding, error detection and correction are all difficult problems to solve in general.  One of the advantages of linear codes is that they provide a systematic method for identifying errors through the use of the parity check operation.  For a general code, checking whether or not a bit string (vector) is in the code space would require a look-up table, which would be much more time-consuming than using the parity check matrix; matrix multiplication is quite efficient relative to the look-up table.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues, and many quantum error correcting codes are constructed from classical linear codes.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1736</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1736"/>
		<updated>2011-11-21T16:13:29Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Binary Operations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  This is achieved by restricting ourselves to these two numbers and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.  If we also include the operation of multiplication, and these two operations obey the distributive law (&amp;lt;math&amp;gt; x(y+z)=xy+xz &amp;lt;/math&amp;gt;), the set becomes a '''field''' (a Galois field), denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to consider a string of bits to be a vector and to use vector addition (component-wise addition modulo 2) and vector multiplication (the inner product).  For example, the addition of the vectors &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''' since it indicates whether one vector has an even or odd number of 1's at the positions specified by the 1's in the other vector: the inner product is 0 when that parity is even and 1 when it is odd.  When the inner product is 0, we say that the first vector satisfies the parity check of the other vector, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;.  We can, however, be sure that an error occurred, provided at most one bit was flipped.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because their added structure makes it possible to identify errors, and the associated correct codewords, efficiently.  This structure is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix with columns that form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the columns form a basis that spans the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; described by a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique.  If columns are switched, or added together to produce a new vector that replaces a column, the generator matrix is still valid for the code, since the columns remain linearly independent under these operations.&lt;br /&gt;
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n- k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates any code word.  To see this, recall any code word is written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Also, because &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; has full rank, it can be shown that &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  The result &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The syndrome depends only on the error and not on the original code word, so if the error can be determined from the syndrome, it can be corrected independently of the code word.  However, for the syndrome to identify the error uniquely, two distinct correctable errors &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt; must give distinct syndromes, &amp;lt;math&amp;gt;Pe_1 \neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed if a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, with &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt;, is constructed such that any &amp;lt;math&amp;gt;d-1\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
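The fact that the syndrome sees only the error can be checked directly. The sketch below again uses the illustrative &amp;lt;math&amp;gt;[3,1]&amp;lt;/math&amp;gt; repetition code, with a parity check matrix chosen (as an assumption for this example) to satisfy &amp;lt;math&amp;gt;PG=0\,\!&amp;lt;/math&amp;gt;.

```python
# For the [3,1] repetition code, P = [[1,1,0],[0,1,1]] satisfies PG = 0.
def matvec(M, x):
    """Matrix-vector product over GF(2)."""
    return [sum(m * xi for m, xi in zip(row, x)) % 2 for row in M]

P = [[1, 1, 0],
     [0, 1, 1]]
w = [1, 1, 1]                    # a codeword
e = [1, 0, 0]                    # a single bit-flip error
w_err = [(wi + ei) % 2 for wi, ei in zip(w, e)]

print(matvec(P, w))              # [0, 0]: codewords are annihilated
print(matvec(P, w_err))          # equals P e: the syndrome depends only on e
```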
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied for the code to be able to detect and correct errors.  The two examples above show how an error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for any two distinct encoded states &amp;lt;math&amp;gt;w_1, w_2 \,\!&amp;lt;/math&amp;gt; and any two correctable errors &amp;lt;math&amp;gt;e_1, e_2 \,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  It means that an error on one state cannot be confused with an error on another state.  If it could, then the state containing the error could not be uniquely identified with an encoded state, and so could not be restored to its original, pre-error form.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
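The relation between distance and correctable errors can be illustrated with a short sketch; the &amp;lt;math&amp;gt;[3,1]&amp;lt;/math&amp;gt; repetition code used here is an assumption for demonstration.

```python
from itertools import combinations

# Minimum distance of a code: the smallest Hamming distance between any
# two distinct codewords. A distance-d code corrects t = (d-1)//2 errors.
def min_distance(codewords):
    return min(sum(a != b for a, b in zip(c1, c2))
               for c1, c2 in combinations(codewords, 2))

# Illustrative [3,1] repetition code.
codewords = [(0, 0, 0), (1, 1, 1)]
d = min_distance(codewords)
print(d, (d - 1) // 2)  # 3 1  -> the code corrects any single bit-flip
```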
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  Since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;, any single error can be detected and corrected.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; linearly independent vectors that are orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  The parity check matrix is then &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one arrives at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix: the codewords are exactly the vectors annihilated by it.&lt;br /&gt;
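The matrices of Eqs. (F.6) and (F.7) can be checked numerically. This sketch verifies &amp;lt;math&amp;gt;PG=0\,\!&amp;lt;/math&amp;gt; and that every single-bit error produces a distinct nonzero syndrome, which is what makes single-error correction possible.

```python
# [7,4,3] Hamming code: G^T from Eq. (F.6), P from Eq. (F.7).
GT = [[1, 0, 0, 0, 1, 1, 0],
      [0, 1, 0, 0, 1, 1, 1],
      [0, 0, 1, 0, 1, 0, 1],
      [0, 0, 0, 1, 0, 1, 1]]
P = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def matvec(M, x):
    """Matrix-vector product over GF(2)."""
    return [sum(m * xi for m, xi in zip(row, x)) % 2 for row in M]

# PG = 0 (Eq. F.3): each column of G, i.e. each row of G^T, is annihilated.
assert all(matvec(P, row) == [0, 0, 0] for row in GT)

# Each single-bit error e_i gives a distinct syndrome P e_i (a column of P),
# so any one error can be identified and corrected.
syndromes = {tuple(matvec(P, [1 if j == i else 0 for j in range(7)]))
             for i in range(7)}
print(len(syndromes))  # 7 distinct syndromes, one per bit position
```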
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, adding each allowed error to a code word should produce a vector associated with that code word alone.  This partitions the vector space into disjoint subsets, each containing exactly one code vector.  A message is decoded correctly if the received vector (the one containing the error) is in the subset associated with the original vector (the one with no error).  For example, if one vector is sent, say &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;, and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then this vector must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A way to decode is to record an array of the possible code words, the possible errors, and the combinations of those errors and code words.  The array has a top row listing the code word vectors and a leftmost column listing the errors, with the element in the first row and first column being the zero vector and all subsequent entries in that column being errors.  The entry in the jth column and kth row is then the sum of the code word at the top of the jth column and the error in the kth row.  Each column of this array is a subset disjoint from the others, so locating a corrupted code word within a column associates it with the code word at the top of that column, thereby correcting the error.&lt;br /&gt;
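The array just described can be sketched as a lookup table; the &amp;lt;math&amp;gt;[3,1]&amp;lt;/math&amp;gt; repetition code and its coset-leader errors used here are illustrative assumptions.

```python
# Decoding table in the spirit of the array above: each entry
# (code word + error) mod 2 maps back to the code word heading its column.
codewords = [(0, 0, 0), (1, 1, 1)]
errors = [(0, 0, 0),            # first entry: the zero vector (no error)
          (1, 0, 0), (0, 1, 0), (0, 0, 1)]

table = {tuple((c + e) % 2 for c, e in zip(cw, err)): cw
         for cw in codewords for err in errors}

def decode(word):
    """Return the code word associated with a (possibly corrupted) word."""
    return table[tuple(word)]

print(decode((1, 0, 1)))  # (1, 1, 1): the single bit-flip is corrected
```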
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound is a bound that restricts the rate of the code.  Due to the disjointness condition, a certain number of bits is required to ensure our ability to detect and correct errors.  Suppose there is a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt;-bit vectors for encoding &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  The set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is part of this set: the objective is to design a code that corrects all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
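Equation (F.9) is easy to check for small codes. The sketch below verifies it and shows that the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code saturates the bound with equality (such codes are called perfect).

```python
from math import comb

# Hamming bound, Eq. (F.9): 2^k * sum_{i=0}^{t} C(n, i) <= 2^n.
def satisfies_hamming_bound(n, k, t):
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

# The [7,4,3] Hamming code corrects t = 1 error and meets the bound
# with equality: 2^4 * (1 + 7) = 128 = 2^7, so it is a perfect code.
assert satisfies_hamming_bound(7, 4, 1)
print(2**4 * (comb(7, 0) + comb(7, 1)), 2**7)  # 128 128
```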
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[Bibliography#LoPopescuSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log x -(1-x)\log (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article by Steane in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].)&lt;br /&gt;
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with every code word.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a nonzero vector can be orthogonal to itself; for example, &amp;lt;math&amp;gt;(1,1)\cdot (1,1) = 2 = 0 \bmod 2\,\!&amp;lt;/math&amp;gt;.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
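A brute-force construction of the dual code for a small example may help make the definition concrete; the &amp;lt;math&amp;gt;[3,1]&amp;lt;/math&amp;gt; repetition code used here is an illustrative assumption, and its dual is the even-weight code on three bits.

```python
from itertools import product

# Dual code: all u with u . v = 0 (mod 2) for every v in C.
def dual_code(C, n):
    return [u for u in product((0, 1), repeat=n)
            if all(sum(ui * vi for ui, vi in zip(u, v)) % 2 == 0 for v in C)]

# Illustrative [3,1] repetition code; its dual is the even-weight [3,2] code.
C = [(0, 0, 0), (1, 1, 1)]
print(dual_code(C, 3))  # the four even-weight words

# Over the binary field a nonzero word can be orthogonal to itself:
assert sum(a * b for a, b in zip((1, 1, 0), (1, 1, 0))) % 2 == 0
```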
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  The bound does not tell us whether codes attaining it exist, but it does tell us that no code can violate it.  Encoding, decoding, error detection, and correction are all difficult problems to solve in general.  One of the advantages of linear codes is that they provide a systematic method for identifying errors through the parity check operation.  More generally, checking whether or not a bit string (vector) is in the code space would require a look-up table, which is much more time-consuming than using the parity check matrix; matrix multiplication is quite efficient relative to a table look-up.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1735</id>
		<title>Index</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Index&amp;diff=1735"/>
		<updated>2011-11-21T16:10:09Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;A&amp;quot;&amp;gt;&amp;lt;big&amp;gt;A&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:average - [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;big&amp;gt;B&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:basis vectors real [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
:binary numbers [[Appendix F - Classical Error Correcting Codes#Binary Operations|'''F.2''']]&lt;br /&gt;
:bit [[Chapter 1 - Introduction#Bits and Qubits: An Introduction|1.3]]&lt;br /&gt;
:bit-flip operation [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Bloch Sphere [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:bra [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:bracket [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;C&amp;quot;&amp;gt;&amp;lt;big&amp;gt;C&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:check-sum [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:closed-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:CNOT gate (see controlled NOT) &lt;br /&gt;
:Code [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code word [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:Code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']]&lt;br /&gt;
:commutator [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:complex conjugate [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
::of a matrix [[Appendix C - Vectors and Linear Algebra#Complex Conjugate|'''C.3.1''']], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']]&lt;br /&gt;
:complex number [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
:computational basis [[Chapter 2 - Qubits and Collections of Qubits#Qubit States|2.2]]&lt;br /&gt;
:controlled NOT [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:controlled phase gate [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
:controlled unitary operation [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;big&amp;gt;D&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:decoherence [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:degenerate [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:delta&lt;br /&gt;
::Kronecker [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dense coding [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]]&lt;br /&gt;
:density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]],[[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::for two qubits [[Chapter 3 - Physics of Quantum Information#Density Matrix for a Mixed State: Two States|3.5.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
::mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
::pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:density operator [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:determinant [[Appendix C - Vectors and Linear Algebra#The Determinant|'''C.3.6''']]&lt;br /&gt;
:disjointness condition [[Appendix F - Classical Error Correcting Codes#Errors|'''F.5''']]&lt;br /&gt;
:distance (see also, code distance [[Appendix F - Classical Error Correcting Codes#Definition 4|'''F.3.4''']])&lt;br /&gt;
:DiVincenzo's requirements [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
:Dirac notation [[Appendix C - Vectors and Linear Algebra#Introduction|'''C.2.1''']], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']], [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:dot product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;E&amp;quot;&amp;gt;&amp;lt;big&amp;gt;E&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:eigenvalue decomposition [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:eigenvectors [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:epsilon tensor (see Levi-Civita Tensor)&lt;br /&gt;
:entangled states (see entanglement)&lt;br /&gt;
:entanglement [[Chapter 4 - Entanglement|4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Quantum Dense Coding|5.4]], [[Chapter 1 - Introduction#How do quantum computers provide an advantage?|1.2.5]]&lt;br /&gt;
::pure state [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
::mixed state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:error syndrome [[Appendix F - Classical Error Correcting Codes#Parity Check Matrix|'''F.4.2''']]&lt;br /&gt;
:expectation value [[Chapter 3 - Physics of Quantum Information#Expectation Values|3.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;F&amp;quot;&amp;gt;&amp;lt;big&amp;gt;F&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Field [[Appendix F - Classical Error Correcting Codes#Field|'''F.2''']]&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;G&amp;quot;&amp;gt;&amp;lt;big&amp;gt;G&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:generator matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.1''']]&lt;br /&gt;
:group [[Appendix D - Group Theory#Definitions and Examples|'''D.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;H&amp;quot;&amp;gt;&amp;lt;big&amp;gt;H&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Hadamard gate [[Chapter 2 - Qubits and Collections of Qubits#eq2.16|2.16]]&lt;br /&gt;
:Hamiltonian [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:Hamming distance [[Appendix F - Classical Error Correcting Codes#Definition 3|'''F.3.3''']]&lt;br /&gt;
:Hamming weight, or weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
:Hermitian matrix [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5]], [[Chapter 8 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|8.2]], [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]], [[Appendix C - Vectors and Linear Algebra#Hermitian Conjugate|'''C.3.3''']], [[Appendix C - Vectors and Linear Algebra#Examples|'''C.6.1''']], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
:Hilbert-Schmidt inner product [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;I&amp;quot;&amp;gt;&amp;lt;big&amp;gt;I&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:inner product  &lt;br /&gt;
::for real vectors [[Appendix C - Vectors and Linear Algebra#Real Vectors|'''C.2.1''']]&lt;br /&gt;
::for complex vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:inverse of a matrix [[Appendix C - Vectors and Linear Algebra#The Inverse of a Matrix|'''C.3.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;K&amp;quot;&amp;gt;&amp;lt;big&amp;gt;K&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:ket [[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|2.5]], [[Appendix C - Vectors and Linear Algebra#Complex Vectors|'''C.2.2''']]&lt;br /&gt;
:Kraus operators [[Chapter 8 - Noise in Quantum Systems#Physics Behind the Noise and Completely Positive Maps|8.3]]&lt;br /&gt;
:Kronecker delta [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']]&lt;br /&gt;
:Kronecker product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;big&amp;gt;L&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Levi-Civita Tensor [[Appendix C - Vectors and Linear Algebra#eqC.9|'''C.3.6''']]&lt;br /&gt;
::Generalized [[Appendix C - Vectors and Linear Algebra#eqC.8|'''C.3.6''']]&lt;br /&gt;
:linear code [[Appendix F - Classical Error Correcting Codes#Definition 6|'''F.3.8''']]&lt;br /&gt;
:local operations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]]&lt;br /&gt;
:local unitary transformations [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;big&amp;gt;M&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:matrix exponentiation [[Chapter 3 - Physics of Quantum Information#expmatrix|3.2]]&lt;br /&gt;
:maximally entangled states [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:maximally mixed state [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]]&lt;br /&gt;
::two qubits&lt;br /&gt;
:mean (see Average)&lt;br /&gt;
:median [[Appendix A - Basic Probability Concepts#Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:minimum distance of a code (also code distance) [[Appendix F - Classical Error Correcting Codes#Definition 5|'''F.3.5''']]&lt;br /&gt;
:mixed state density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|3.5]]&lt;br /&gt;
:modulus squared [[Appendix B - Complex Numbers#Appendix B - Complex Numbers|'''B''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;O&amp;quot;&amp;gt;&amp;lt;big&amp;gt;O&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:open quantum systems [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:open-system evolution [[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|1.4]]&lt;br /&gt;
:operator-sum decomposition [[Chapter 8 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|8.4]]&lt;br /&gt;
:orthogonal [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]], [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#No Cloning!|5.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
::vectors [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|'''C.4''']], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;big&amp;gt;P&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:parity check [[Appendix F - Classical Error Correcting Codes#Definition 1|'''F.3.1''']]&lt;br /&gt;
:parity check matrix [[Appendix F - Classical Error Correcting Codes#Generator Matrix|'''F.4.2''']]&lt;br /&gt;
:partial trace&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:Pauli matrices [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Two-State Example: Bloch Sphere|3.5.4]]&lt;br /&gt;
:phase gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:phase-flip [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
:Planck's constant [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
:projection operator [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]]&lt;br /&gt;
:pure state [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Appendix E - Density Operator: Extensions#Appendix E - Density Operator: Extensions|'''E''']]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 3%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;float: left; width: 31%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Q&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Q&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Qbit (see qubit)&lt;br /&gt;
:quantum bit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
:quantum dense coding (see [[#D|dense coding]])&lt;br /&gt;
:quantum gates [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:qubit [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;R&amp;quot;&amp;gt;&amp;lt;big&amp;gt;R&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:reduced density operator [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::of a Bell state [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
:reduced density matrix [[Chapter 4 - Entanglement#Reduced Density Operators and the Partial Trace|4.3.1]]&lt;br /&gt;
::see reduced density operator&lt;br /&gt;
:requirements for scalable quantum computing [[Chapter 2 - Qubits and Collections of Qubits#Introduction|2.1]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;S&amp;quot;&amp;gt;&amp;lt;big&amp;gt;S&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:scalability&lt;br /&gt;
:Schrodinger Equation [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]]&lt;br /&gt;
::for density matrix [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]]&lt;br /&gt;
:separable state [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
::simply separable [[Chapter 4 - Entanglement#Entangled Mixed States|4.3]]&lt;br /&gt;
:similar matrices [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:similarity transformation [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:singular values [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:special unitary matrix [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]]&lt;br /&gt;
:spectrum [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']]&lt;br /&gt;
:standard deviation [[Appendix A - Basic Probability Concepts|'''A''']]&lt;br /&gt;
:SU [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;big&amp;gt;T&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:teleportation [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Teleporting a Quantum State|5.5]]&lt;br /&gt;
:tensor product [[Appendix C - Vectors and Linear Algebra#Tensor Products|'''C.7''']]&lt;br /&gt;
:trace [[Appendix C - Vectors and Linear Algebra#The Trace|'''C.3.5''']]&lt;br /&gt;
::partial (see partial trace)&lt;br /&gt;
:transformation [[Chapter 1 - Introduction#Bits and qubits: An Introduction|1.3]], [[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|2.3]], [[Chapter 2 - Qubits and Collections of Qubits#Circuit Diagrams for Qubit Gates|2.3.1]], [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]], [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|2.4]], [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]], [[Chapter 2 - Qubits and Collections of Qubits#Controlled Operations|2.6.1]], [[Chapter 2 - Qubits and Collections of Qubits#Many-qubit Circuits|2.6.2]], [[Chapter 2 - Qubits and Collections of Qubits#Standard Prescription|2.7.1]], [[Chapter 2 - Qubits and Collections of Qubits#Projection Operators|2.7.2]], [[Chapter 3 - Physics of Quantum Information#Schrodinger's Equation|3.2]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|3.3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|3.4]], [[Chapter 3 - Physics of Quantum Information#Density Matrix for the Description of Open Quantum Systems: An Example|3.5.3]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Bell States|4.2.1]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 4 - Entanglement#Entangled Pure States|4.2]], [[Chapter 8 - Noise in Quantum Systems#Modelling Open System Evolution|8.3]], [[Chapter 8 - Noise in Quantum Systems#Fixed-Basis Operations|8.3.2]], [[Chapter 8 - Noise in Quantum Systems#Unitary Freedom|8.4.1]], [[Chapter 8 - Noise in Quantum Systems#Physical Interpretation of the Unitary Freedom|8.4.2]], [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']], [[Appendix D - Group Theory#Introduction|'''D.1''']]&lt;br /&gt;
::active [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
::passive [[Appendix C - Vectors and Linear Algebra#Transformations|'''C.5''']]&lt;br /&gt;
:transpose [[Appendix C - Vectors and Linear Algebra#Transpose|'''C.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;U&amp;quot;&amp;gt;&amp;lt;big&amp;gt;U&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:uncertainty principle [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
:unitary matrix [[Chapter 2 - Qubits and Collections of Qubits#Chapter 2 - Qubits and Collections of Qubits|2.3]], [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|'''C.3.8''']], [[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|'''D.7.2''']]&lt;br /&gt;
:universal set of gates [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
:universality [[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|2.6]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;V&amp;quot;&amp;gt;&amp;lt;big&amp;gt;V&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:variance [[Chapter 5 - Quantum Information: Basic Principles and Simple Examples#Uncertainty Principle|5.3]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;W&amp;quot;&amp;gt;&amp;lt;big&amp;gt;W&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:weight, or Hamming weight [[Appendix F - Classical Error Correcting Codes#Definition 2|'''F.3.2''']]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;X&amp;quot;&amp;gt;&amp;lt;big&amp;gt;X&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:X-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Y&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Y&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Y-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;div id=&amp;quot;Z&amp;quot;&amp;gt;&amp;lt;big&amp;gt;Z&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
:Z-gate [[Chapter 2 - Qubits and Collections of Qubits#Examples of Important Qubit Gates|2.3.2]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1734</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1734"/>
		<updated>2011-11-21T16:03:51Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Binary Operations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  This is achieved by restricting ourselves to these two numbers and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.  If we also include the operation of multiplication and require the two operations to obey the distributive law (&amp;lt;math&amp;gt; x(y+z)=xy+xz &amp;lt;/math&amp;gt;), the set becomes a field (a Galois field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to treat a string of bits as a vector and to use vector addition (which is component-wise addition modulo 2) and vector multiplication (which is the inner product).  For example, the addition of the vectors &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
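The GF(2) arithmetic above can be checked with a short script; this is a minimal sketch, and the helper names `add_gf2` and `inner_gf2` are ours, not from the text.

```python
# GF(2) vector arithmetic: component-wise addition mod 2 and the inner product.
def add_gf2(v, w):
    """Component-wise sum modulo 2 of two bit vectors of equal length."""
    return tuple((a + b) % 2 for a, b in zip(v, w))

def inner_gf2(v, w):
    """Inner product modulo 2 (the checksum / parity check)."""
    return sum(a * b for a, b in zip(v, w)) % 2

# The worked example from the text:
print(add_gf2((0, 0, 1), (0, 1, 1)))    # (0, 1, 0)
print(inner_gf2((0, 0, 1), (0, 1, 1)))  # 1
```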
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''' since it shows whether or not the first and second vectors agree, or have an even number of 1's at the positions specified by the ones in the other vector.  We may say that the first vector satisfies the parity check of the other vector, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
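The identity in Definition 3, &lt;math&gt;d_H(v,w) = \mbox{wt}(v+w)&lt;/math&gt;, is easy to confirm numerically; a minimal sketch with our own helper names:

```python
def hamming_distance(v, w):
    """Number of positions where two bit vectors differ."""
    return sum(a != b for a, b in zip(v, w))

def weight(v):
    """Hamming weight: number of non-zero components (Definition 2)."""
    return sum(1 for a in v if a != 0)

v, w = (0, 0, 1), (0, 1, 1)
# d_H(v, w) equals wt(v + w), with addition taken mod 2:
v_plus_w = tuple((a + b) % 2 for a, b in zip(v, w))
assert hamming_distance(v, w) == weight(v_plus_w)
print(hamming_distance(v, w))  # 1
```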
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we could detect single bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states.  So an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;.  We do know that an error has occurred though, as long as we know only one error has occurred.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
&lt;br /&gt;
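The minimum distances claimed in Examples 1 and 2 can be computed directly from Eq. (F.1); a minimal sketch (the function name is ours):

```python
from itertools import combinations

def min_distance(code):
    """Smallest Hamming distance between any two distinct codewords (Eq. F.1)."""
    return min(sum(a != b for a, b in zip(v, w))
               for v, w in combinations(code, 2))

print(min_distance([(0, 0), (1, 1)]))        # 2  (Example 1: detect, not correct)
print(min_distance([(0, 0, 0), (1, 1, 1)]))  # 3  (Example 2: corrects one error)
```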
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because their added structure makes it possible to identify errors, and the corresponding correct codewords, efficiently.  This structure is discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix whose columns form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space; in other words, the columns span the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix acting on a &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-bit message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique: if two columns are swapped, or one column is added to another to produce a new column that replaces it, the resulting matrix still generates the same code.  This is because the columns remain linearly independent under these operations.&lt;br /&gt;
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  The rank of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; annihilates every code word.  To see this, recall that any code word can be written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Conversely, because of the rank of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, it can be shown that &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''', and the measurement that identifies &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The syndrome therefore depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independently of the code word.  However, in order for the error to be identifiable from &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, two distinct correctable errors, &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt;, must not give equal syndromes &amp;lt;math&amp;gt;Pe_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Pe_2\,\!&amp;lt;/math&amp;gt;.  This holds for a distance-&amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code because every set of &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix is linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
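The repetition code of Example 2 makes the &lt;math&gt;d \geq 2t+1&lt;/math&gt; condition concrete: with &lt;math&gt;d=3&lt;/math&gt; it corrects &lt;math&gt;t=1&lt;/math&gt; error by majority vote.  A minimal sketch (the decoder name is ours, not from the text):

```python
def decode_repetition(word):
    """Majority-vote decoding for the 3-bit repetition code (d = 3, so t = 1)."""
    # Two or more 1's means the codeword 111 (logical 1) is the nearest codeword.
    return 1 if sum(word) >= 2 else 0

# A single bit flip on either codeword is corrected to the right logical bit:
assert decode_repetition((0, 1, 0)) == 0   # one error on 000
assert decode_repetition((1, 1, 0)) == 1   # one error on 111
```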
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  Since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;, one error can be detected and corrected.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; linearly independent vectors that are orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix.  Only the codewords are annihilated by the parity check matrix.&lt;br /&gt;
&lt;br /&gt;
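The matrices in Eqs. (F.6) and (F.7) can be checked numerically.  Below is a minimal sketch (helper names are ours) that verifies &lt;math&gt;PG = 0&lt;/math&gt; mod 2 and the syndrome property of Eq. (F.4): the syndrome of a corrupted codeword equals the column of &lt;math&gt;P&lt;/math&gt; at the flipped position, which locates the error.

```python
# Rows of G^T (Eq. F.6) are the basis codewords; P is Eq. (F.7).
G_T = [(1, 0, 0, 0, 1, 1, 0),
       (0, 1, 0, 0, 1, 1, 1),
       (0, 0, 1, 0, 1, 0, 1),
       (0, 0, 0, 1, 0, 1, 1)]
P = [(1, 1, 1, 0, 1, 0, 0),
     (1, 1, 0, 1, 0, 1, 0),
     (0, 1, 1, 1, 0, 0, 1)]

def dot(v, w):
    """Inner product modulo 2."""
    return sum(a * b for a, b in zip(v, w)) % 2

# P G = 0 (mod 2): every row of P annihilates every basis codeword (Eq. F.3).
assert all(dot(p, g) == 0 for p in P for g in G_T)

def syndrome(word):
    """Pe in Eq. (F.4): depends only on the error, not on the codeword."""
    return tuple(dot(p, word) for p in P)

w = G_T[0]                                                # a codeword
corrupted = tuple(b ^ (i == 0) for i, b in enumerate(w))  # flip bit 0
# The syndrome equals column 0 of P, identifying the flipped bit:
assert syndrome(corrupted) == tuple(row[0] for row in P)
```

Because the 7 columns of &lt;math&gt;P&lt;/math&gt; are distinct and non-zero, every single-bit error has a unique syndrome, which is exactly the &lt;math&gt;d-1=2&lt;/math&gt; linearly independent columns condition at work.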
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each erred vector (a code word plus an error) should be associated with the unique code word it came from.  This partitions the space of words into disjoint subsets, each containing exactly one code vector.  A message is decoded correctly if the received vector (the one containing the error) is in the subset associated with the original vector (the one with no error).  For example, if the vector &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt; is sent and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; v_2 \,\!&amp;lt;/math&amp;gt; must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
One way to decode is to record an array of the code words, the possible errors, and the combinations of those errors and code words.  The array has the code words as its top row and the errors as its leftmost column, with the entry in the first row and first column being the zero vector.  The entry in the jth column and kth row is then the sum of the code word at the top of the jth column and the error in the kth row.  Each column of this array is one of the disjoint subsets described above.  Locating a received (erred) word in a column associates it with the code word at the top of that column, and thus corrects the error.&lt;br /&gt;
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound is a bound that restricts the rate of the code.  Due to the disjointness condition, a certain number of bits are required to ensure our ability to detect and correct errors.  Suppose there is a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bit vectors for encoding &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  There is a set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; that has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors, including errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, corresponding to no error, is counted among the error vectors: the objective is to design a code that can correct all errors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[#LoPopescueSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log x -(1-x)\log (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
&lt;br /&gt;
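The bound in Eq. (F.9) is easy to evaluate for small codes; a minimal sketch (the function name is ours).  For the &lt;math&gt;[7,4,3]&lt;/math&gt; Hamming code with &lt;math&gt;t=1&lt;/math&gt; the bound is met with equality, &lt;math&gt;2^4(1+7)=2^7&lt;/math&gt;, while a hypothetical &lt;math&gt;[7,5]&lt;/math&gt; single-error-correcting code would violate it.

```python
from math import comb

def hamming_bound_holds(n, k, t):
    """Check Eq. (F.9): 2^k * sum_{i=0}^{t} C(n, i) <= 2^n."""
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

assert hamming_bound_holds(7, 4, 1)       # the [7,4,3] Hamming code fits
assert 2**4 * (1 + 7) == 2**7             # and saturates the bound exactly
assert not hamming_bound_holds(7, 5, 1)   # no [7,5] code can correct 1 error
```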
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a vector can be orthogonal to itself.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
&lt;br /&gt;
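For small &lt;math&gt;n&lt;/math&gt; the dual code can be found by brute force over all &lt;math&gt;2^n&lt;/math&gt; vectors; a minimal sketch (the function name `dual_code` is ours).  It also checks the self-orthogonality remark: over GF(2), &lt;math&gt;(1,1,0)\cdot(1,1,0)=0&lt;/math&gt;.

```python
from itertools import product

def dual_code(code, n):
    """All length-n binary vectors orthogonal (mod 2) to every codeword."""
    return [u for u in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(u, v)) % 2 == 0 for v in code)]

# A nonzero binary vector can be orthogonal to itself:
assert sum(a * a for a in (1, 1, 0)) % 2 == 0

# The dual of the 3-bit repetition code is the set of even-weight vectors:
print(dual_code([(0, 0, 0), (1, 1, 1)], 3))
```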
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  This does not indicate whether or not codes that satisfy these bounds exist, but it does tell us that no codes exist that do not satisfy these bounds.  Encoding, decoding, error detection and correction are all difficult problems to solve in general.  One of the advantages of the linear codes is that they provide a systematic method for identifying errors on a code through the use of the parity check operation.  More generally, checking to see whether or not a bit string (vector) is in the code space would require a look-up table.  This would be much more time-consuming than using the parity check matrix; matrix multiplication is quite efficient relative to the look-up table.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1733</id>
		<title>Appendix F - Classical Error Correcting Codes</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_F_-_Classical_Error_Correcting_Codes&amp;diff=1733"/>
		<updated>2011-11-21T15:59:00Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Binary Operations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
Classical error correcting codes are in use in a wide variety of digital electronics and other classical information systems.  It is a good idea to learn some of the basic definitions, ideas, methods, and simple examples of classical error correcting codes in order to understand the (slightly) more complicated quantum error correcting codes.  There are many good introductions to classical error correction.  Here we follow a few sources which also discuss quantum error correcting codes: the book by [[Bibliography#LoeppWootters|Loepp and Wootters]], an article in [[Bibliography#LoPopescueSpiller|Lo, Popescu, and Spiller]] by Steane, [[Bibliography#GottDiss|Gottesman's Thesis]], and [[Bibliography#Gaitan:book|Gaitan's Book]] on quantum error correction, which also discusses classical error correction.&lt;br /&gt;
&lt;br /&gt;
===Binary Operations===&lt;br /&gt;
&lt;br /&gt;
The set &amp;lt;math&amp;gt; \{0,1\} \,\!&amp;lt;/math&amp;gt; is a group under addition.  (See [[Appendix D - Group Theory#Example 3|Section D.2.8]] of [[Appendix D - Group Theory|Appendix D]].)  The way this is achieved is by deciding that we will only use these two numbers in our language and using addition modulo 2, meaning &amp;lt;math&amp;gt; 0+0=0, 1+0 = 0+1 = 1, \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1+1 =0\,\!&amp;lt;/math&amp;gt;.   If we also include the operation of multiplication and say that the two operations are distributive, the set becomes a field (a Galois Field), which is denoted GF&amp;lt;math&amp;gt;(2)\,\!&amp;lt;/math&amp;gt;.  Since one often works with strings of bits, it is very useful to consider the string of bits to be a vector and to use vector addition (which is component-wise addition) and vector multiplication (which is the inner product).  For example, the addition of the vector &amp;lt;math&amp;gt;(0,0,1)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(0,1,1)\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;(0,0,1) + (0,1,1) = (0,1,0)\,\!&amp;lt;/math&amp;gt;.  The inner product between these two vectors is  &amp;lt;math&amp;gt;(0,0,1) \cdot (0,1,1) = 0\cdot 0 + 0\cdot 1 + 1\cdot 1 = 0 +0 +1=1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Definitions and Basics===&lt;br /&gt;
&lt;br /&gt;
====Definition 1====&lt;br /&gt;
The inner product is also called a '''checksum''' or '''parity check''' since it shows whether or not the first and second vectors agree, or have an even number of 1's at the positions specified by the ones in the other vector.  We may say that the first vector satisfies the parity check of the other vector, or vice versa.&lt;br /&gt;
&lt;br /&gt;
====Definition 2====&lt;br /&gt;
The '''weight''' or '''Hamming weight''' is the number of non-zero components of a vector or string.  The weight of a vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; is denoted wt(&amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt;).  &lt;br /&gt;
&lt;br /&gt;
====Definition 3====&lt;br /&gt;
The '''Hamming distance''' is the number of places where two vectors differ.  Let the two vectors be &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt;.  Then the Hamming distance is also equal to wt(&amp;lt;math&amp;gt;v+w\,\!&amp;lt;/math&amp;gt;).  The Hamming distance between &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; will be denoted &amp;lt;math&amp;gt;d_H(v,w)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Definition 4====&lt;br /&gt;
We use &amp;lt;math&amp;gt;\{0,1\}^n\,\!&amp;lt;/math&amp;gt; to denote the set of all binary vectors of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;.  A '''code''' &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; of length &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is any subset of that set.  The set of all elements of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is called the set of '''codewords'''.  We also say there are &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-bit words in the space.  &lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; bits are used to encode &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; logical bits.  We use the notation &amp;lt;math&amp;gt;[n,k] \,\!&amp;lt;/math&amp;gt; to denote such a code.&lt;br /&gt;
&lt;br /&gt;
====Definition 5====&lt;br /&gt;
The '''minimum distance''' of a code is the smallest Hamming distance between any two non-equal vectors in a code.  This can be written &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
d_{Hmin}(C) = \underset{v,w\in C,v\neq w}{\mbox{min}}d_H(v,w).&lt;br /&gt;
 \,\!&amp;lt;/math&amp;gt;|F.1}}&lt;br /&gt;
For shorthand, we also use &amp;lt;math&amp;gt; d(C)\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt; d\,\!&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt; C\,\!&amp;lt;/math&amp;gt; is understood.&lt;br /&gt;
&lt;br /&gt;
When that code has a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt;, the notation &amp;lt;math&amp;gt;[n,k,d] \,\!&amp;lt;/math&amp;gt; is used.&lt;br /&gt;
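The minimum distance of Definition 5 can be computed directly by scanning all pairs of distinct codewords; this brute-force sketch in Python (function names ours) is only practical for small codes:

```python
# Minimum distance of a code: the smallest Hamming distance over all
# pairs of distinct codewords, as in Eq. (F.1).
from itertools import combinations

def d_H(v, w):
    """Hamming distance between two binary vectors."""
    return sum(1 for a, b in zip(v, w) if a != b)

def min_distance(code):
    """Brute-force minimum distance; feasible only for small codes."""
    return min(d_H(v, w) for v, w in combinations(code, 2))
```

For the two-codeword codes of Examples 1 and 2 below this gives 2 and 3, respectively.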
&lt;br /&gt;
====Example 1====&lt;br /&gt;
It is interesting to note that if we encode redundantly using &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt; as our logical zero and logical one respectively, then we can detect single bit errors but not correct them.  For example, if we receive &amp;lt;math&amp;gt; 01\,\!&amp;lt;/math&amp;gt;, we know this cannot be one of our encoded states, so an error must have occurred.  However, we don't know whether the sender sent &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;.  We do know that an error has occurred, provided at most one error has occurred.  Such an encoding can be used as an '''error detecting code'''.  In this case there are two code words, &amp;lt;math&amp;gt; 0_L=00 \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1_L=11\,\!&amp;lt;/math&amp;gt;, but four words in the space.  The minimum distance is 2, which is the distance between the two code words.&lt;br /&gt;
&lt;br /&gt;
====Example 2====&lt;br /&gt;
The three-bit redundant encoding was already given in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].  One takes logical zero and logical one states to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
0_L =  000 \;\;\; \mbox{ and } \;\;\; 1_L = 111,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.2}}&lt;br /&gt;
where the subscript &amp;lt;math&amp;gt;L \,\!&amp;lt;/math&amp;gt; is used to denote a &amp;quot;logical&amp;quot; state; that is, one that is encoded.  Recall that this code is able to detect and correct one error.  In this case there are two code words out of eight possible words, and the minimal distance is 3.&lt;br /&gt;
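A sketch of this encoding and its correction by majority vote, assuming at most one bit flip (function names ours):

```python
# Three-bit repetition code of Eq. (F.2): 0_L = 000, 1_L = 111.
# A single bit flip is corrected by majority vote.

def encode(bit):
    """Encode one logical bit redundantly as three physical bits."""
    return [bit] * 3

def decode(word):
    """Majority vote: recovers the logical bit if at most one bit flipped."""
    return 1 if sum(word) >= 2 else 0
```

Any received word with at most one flipped bit decodes back to the logical bit that was sent, consistent with the code's distance of 3.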
&lt;br /&gt;
====Definition 6====&lt;br /&gt;
The '''rate''' of a code is given by the ratio of the number of logical bits to the total number of bits, &amp;lt;math&amp;gt;k/n\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
====Definition 7====&lt;br /&gt;
A '''linear code''' &amp;lt;math&amp;gt;C_l\,\!&amp;lt;/math&amp;gt; is a code that is closed under addition.&lt;br /&gt;
&lt;br /&gt;
===Linear Codes===&lt;br /&gt;
&lt;br /&gt;
Linear codes are particularly useful because they allow errors to be identified, and the corresponding correct codewords recovered, efficiently.  This ability is due to the added structure these codes have, which will be discussed in the following sections. &lt;br /&gt;
&lt;br /&gt;
====Generator Matrix====&lt;br /&gt;
&lt;br /&gt;
For linear codes, any linear combination of codewords is a codeword.  One key feature of a linear code is that it can be specified by a &amp;lt;nowiki&amp;gt;''generator matrix,''&amp;lt;/nowiki&amp;gt; &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;Recall that we are working with binary codes.  Thus the entries of the matrix will also be binary numbers, i.e., 0's and 1's.&amp;lt;/ref&amp;gt;. For an &amp;lt;math&amp;gt; [n,k]\,\!&amp;lt;/math&amp;gt; code, the '''generator matrix''' is an &amp;lt;math&amp;gt; n\times k\,\!&amp;lt;/math&amp;gt; matrix whose columns form a basis for the &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt;-dimensional coding sub-space of the &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-dimensional binary vector space.  In other words, the columns span the code space.  (Note that one may also use the transpose of this matrix as the definition for &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt;.)  Any code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; corresponding to a message vector &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; can be written in terms of the generator matrix as &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is independent of the input and output vectors.  In addition, &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is not unique.  If columns are switched, or one column is replaced by the sum of itself and another column, the resulting matrix is still a valid generator matrix for the code: the columns remain linearly independent and span the same code space.&lt;br /&gt;
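As an illustrative sketch, the encoding &amp;lt;math&amp;gt;w = Gv\,\!&amp;lt;/math&amp;gt; is just a mod-2 matrix-vector product.  Here we use the &amp;lt;math&amp;gt;n\times k\,\!&amp;lt;/math&amp;gt; column convention described above, with the single-column generator of the three-bit repetition code of Example 2 as the example (function name ours):

```python
# Encoding with a generator matrix, w = G v (mod 2), where G is an
# n-by-k matrix whose columns span the code space.  As an illustration
# we use the [3,1] repetition code, whose generator is the single
# column (1, 1, 1)^T.

def mat_vec_mod2(G, v):
    """Multiply matrix G (list of rows) by vector v over GF(2)."""
    return [sum(g * x for g, x in zip(row, v)) % 2 for row in G]

G = [[1],
     [1],
     [1]]   # 3x1 generator matrix of the [3,1] repetition code
```

Encoding the message bit 1 yields the codeword 111, and the message bit 0 yields 000.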
&lt;br /&gt;
====Parity Check Matrix====&lt;br /&gt;
Once &amp;lt;math&amp;gt;G\,\!&amp;lt;/math&amp;gt; is obtained, one can calculate another useful matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;.  &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;(n-k)\times n\,\!&amp;lt;/math&amp;gt; matrix which has the property that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
PG = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.3}}&lt;br /&gt;
The matrix &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; is called the '''parity check matrix''' or '''dual matrix'''.  It has rank &amp;lt;math&amp;gt;n- k\,\!&amp;lt;/math&amp;gt; and annihilates any code word.  To see this, recall any code word is written as &amp;lt;math&amp;gt;Gv\,\!&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;PGv =0\,\!&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;PG =0\,\!&amp;lt;/math&amp;gt;.  Also, due to the rank of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, it can be shown that &amp;lt;math&amp;gt;Pw =0\,\!&amp;lt;/math&amp;gt; only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  That is to say, &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; is a code word.  This means that &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; can be used to test whether or not a word is in the code. &lt;br /&gt;
&lt;br /&gt;
Suppose an error occurs on a code word &amp;lt;math&amp;gt;w\,\!&amp;lt;/math&amp;gt; to produce &amp;lt;math&amp;gt;w^\prime = w + e\,\!&amp;lt;/math&amp;gt;.  It follows that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
Pw^\prime = P(w+e) = Pe,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.4}}&lt;br /&gt;
since &amp;lt;math&amp;gt;Pw=0\,\!&amp;lt;/math&amp;gt;.  This result, &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt;, is called the '''error syndrome''' and the measurement to identify &amp;lt;math&amp;gt;Pe\,\!&amp;lt;/math&amp;gt; is the '''syndrome measurement'''.  The result depends only on the error and not on the original code word.  If the error can be determined from this result, then it can be corrected independent of the code word.  However, for the syndrome to identify the error uniquely, two distinct correctable errors, &amp;lt;math&amp;gt;e_1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_2\,\!&amp;lt;/math&amp;gt;, must give different results, &amp;lt;math&amp;gt;Pe_1\neq Pe_2\,\!&amp;lt;/math&amp;gt;.  This is guaranteed for a distance &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; code, since any &amp;lt;math&amp;gt;d-1=2t\,\!&amp;lt;/math&amp;gt; columns of the parity check matrix are linearly independent.  This enables the errors to be identified and corrected.&lt;br /&gt;
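A sketch of the syndrome computation of Eq. (F.4) for the three-bit repetition code.  The particular parity check matrix `P` below is one valid choice for that code (our choice, consistent with &amp;lt;math&amp;gt;PG=0\,\!&amp;lt;/math&amp;gt;; it annihilates 000 and 111):

```python
# Syndrome computation Pw' = Pe, illustrated with the [3,1] repetition
# code.  P is an (n-k) x n = 2 x 3 parity check matrix for that code.

def mat_vec_mod2(M, v):
    """Multiply matrix M (list of rows) by vector v over GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

P = [[1, 1, 0],
     [0, 1, 1]]

def syndrome(word):
    """The error syndrome Pw' of a received word."""
    return mat_vec_mod2(P, word)
```

The syndrome of a codeword is zero, and the syndrome of a corrupted word equals the syndrome of the bare error, independent of which codeword was sent.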
&lt;br /&gt;
===Errors===&lt;br /&gt;
&lt;br /&gt;
For any classical error correcting code, there are general conditions that must be satisfied in order for the code to be able to detect and correct errors.  The two examples above show how the error can be detected; here, the objective is to give some general conditions.  &lt;br /&gt;
&lt;br /&gt;
Note that any state containing an error may be written as the sum of the original (logical or encoded) state  &amp;lt;math&amp;gt;w \,\!&amp;lt;/math&amp;gt; and another vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt;.  The error vector &amp;lt;math&amp;gt;e \,\!&amp;lt;/math&amp;gt; has ones in the places where errors are present and zeroes everywhere else.  To ensure that the error may be corrected, the following condition must be satisfied for two states with errors occurring:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
w_1 + e_1 \neq w_2 + e_2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.5}}&lt;br /&gt;
This condition is called the '''disjointness condition'''.  This condition means that an error on one state cannot be confused with an error on another state.  If it could, then the state including the error could not be uniquely identified with an encoded state and the state could not be corrected to its original state before the error occurred.  More specifically, for a code to correct &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;  single-bit errors, it must have distance at least &amp;lt;math&amp;gt;2t + 1 \,\!&amp;lt;/math&amp;gt; between any two codewords; i.e., it must be true that &amp;lt;math&amp;gt;d(C) \geq 2t + 1 \,\!&amp;lt;/math&amp;gt;.  An &amp;lt;math&amp;gt;[n,k]\,\!&amp;lt;/math&amp;gt; code with minimal distance &amp;lt;math&amp;gt;d \,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;[n,k,d]\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 3====&lt;br /&gt;
An important example of an error correcting code is called the &amp;lt;math&amp;gt;[7,4,3]&amp;lt;/math&amp;gt; Hamming code.  This code, as the notation indicates, encodes &amp;lt;math&amp;gt;k=4&amp;lt;/math&amp;gt; bits of information into &amp;lt;math&amp;gt;n=7&amp;lt;/math&amp;gt; bits.  Since it has a distance of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;, it does so in such a way that one error can be detected and corrected.  The generator matrix for this code can be taken to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
G^T = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
          0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.6}}&lt;br /&gt;
(See for example [[Bibliography#LoeppWootters|Loepp and Wootters]].)  From this the parity check matrix, &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, can be calculated by finding a set of &amp;lt;math&amp;gt;n-k\,\!&amp;lt;/math&amp;gt; linearly independent vectors that are orthogonal to the code space defined by the generator matrix.  Alternatively, one could find the generator matrix from the parity check matrix.  A method for doing this can be found in Steane's article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]].  One first puts &amp;lt;math&amp;gt;G^T\,\!&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;(I_k,A),\,\!&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;I_k\,\!&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;k\times k\,\!&amp;lt;/math&amp;gt; identity matrix.  Then the parity check matrix is &amp;lt;math&amp;gt;P = (A^T,I_{n-k}).\,\!&amp;lt;/math&amp;gt;  In either case, one can arrive at the following parity check matrix for this code:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
P = \left(\begin{array}{ccccccc}&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
          1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
          0 &amp;amp; 1 &amp;amp; 1 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &lt;br /&gt;
    \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.7}}&lt;br /&gt;
It is useful to note that the code can also be defined by the parity check matrix.  Only the codewords are annihilated by the parity check matrix.&lt;br /&gt;
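The matrices of Eqs. (F.6) and (F.7) can be checked directly: every codeword is annihilated by &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, and the syndrome of a single bit flip equals the corresponding column of &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt;, which identifies the flipped bit.  A sketch (function names ours):

```python
# The [7,4,3] Hamming code from Eqs. (F.6) and (F.7): encode, compute
# the syndrome, and correct a single bit flip.

GT = [[1, 0, 0, 0, 1, 1, 0],   # rows of G^T; codewords are mod-2 sums of rows
      [0, 1, 0, 0, 1, 1, 1],
      [0, 0, 1, 0, 1, 0, 1],
      [0, 0, 0, 1, 0, 1, 1]]

P = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    """Codeword for a 4-bit message: mod-2 sum of the selected rows of G^T."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*GT)]

def syndrome(word):
    """Error syndrome Pw (mod 2) of a 7-bit word."""
    return [sum(p * x for p, x in zip(row, word)) % 2 for row in P]

def correct(word):
    """Flip the bit whose column of P matches the syndrome, if any."""
    s = syndrome(word)
    if s == [0, 0, 0]:
        return word
    cols = list(zip(*P))
    i = cols.index(tuple(s))   # single-bit error position
    return [b ^ (1 if j == i else 0) for j, b in enumerate(word)]
```

All seven columns of `P` are distinct and non-zero, so every single-bit error has a unique syndrome, in accordance with the distance-3 property.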
&lt;br /&gt;
===The Disjointness Condition and Correcting Errors===&lt;br /&gt;
&lt;br /&gt;
The motivation for the disjointness condition, [[#eqF.5|Eq.(F.5)]], is to associate each vector in the space with a particular code word.  That is, assuming that only certain errors occur, each vector obtained by adding an error to a code word should be associated with that original code word.  This partitions the set into disjoint subsets, each containing only one code vector.  A message is decoded correctly if the vector containing the error is in the subset associated with the original vector (the one with no error).  For example, if one vector is sent, say &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;, and an error occurs during transmission to produce &amp;lt;math&amp;gt; v_2 = v_1 +e\,\!&amp;lt;/math&amp;gt;, then this vector must be in the subset containing &amp;lt;math&amp;gt; v_1 \,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
One way to decode is to record an array of the possible code words, the possible errors, and the combinations of those errors and code words.  The array has the code word vectors as its top row and the errors as its leftmost column, with the element in the first row and first column being the zero vector.  The element at the top of a column (say the jth column) is added to the error in the corresponding row (say the kth row) to get the (j,k) entry of the array.  Each column of this array is then a subset, disjoint from the others, associated with the code word at its top.  Identifying the column containing an erred code word associates it with a code word and thus corrects the error.&lt;br /&gt;
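A sketch of this decoding array for the three-bit repetition code, taking the correctable errors to be no error and the three single-bit flips (our illustrative choice):

```python
# Decoding array for the [3,1] repetition code: top row = codewords,
# leftmost column = correctable errors; entry (k, j) = codeword j + error k.

def add(v, w):
    """Component-wise addition mod 2 of two binary tuples."""
    return tuple((a + b) % 2 for a, b in zip(v, w))

codewords = [(0, 0, 0), (1, 1, 1)]
errors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

array = [[add(c, e) for c in codewords] for e in errors]

def decode(word):
    """Return the codeword whose column (disjoint subset) contains word."""
    for j, c in enumerate(codewords):
        if any(row[j] == word for row in array):
            return c
    raise ValueError("word not covered by the array")
```

Each of the eight 3-bit words appears in exactly one column, so decoding is unambiguous.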
&lt;br /&gt;
===The Hamming Bound===&lt;br /&gt;
&lt;br /&gt;
The Hamming bound restricts the rate of a code.  Due to the disjointness condition, a certain number of bits are required to ensure our ability to detect and correct errors.  Suppose there is a set of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt;-bit vectors for encoding &amp;lt;math&amp;gt; k\,\!&amp;lt;/math&amp;gt; bits of information.  There is a set of error vectors of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; that has &amp;lt;math&amp;gt; C(n,t)\,\!&amp;lt;/math&amp;gt; elements&amp;lt;ref&amp;gt;That is, &amp;lt;math&amp;gt; n \,\!&amp;lt;/math&amp;gt; choose &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; vectors. The notation is &amp;lt;math&amp;gt; C(n,t) = {n\choose t} = \frac{n!}{(n-t)!t!}.\,\!&amp;lt;/math&amp;gt;&amp;lt;/ref&amp;gt;.  So the number of error vectors of weight up to &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{i=0}^t C(n,i). \,\!&amp;lt;/math&amp;gt;  (Note that the zero vector, i.e., no error, is also part of the set of error vectors.  The objective is to design a code that can correct all errors up to those of weight &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, and this includes no error at all.)  Since there are &amp;lt;math&amp;gt; 2^n\,\!&amp;lt;/math&amp;gt; vectors in the whole space of &amp;lt;math&amp;gt; n\,\!&amp;lt;/math&amp;gt; bits, and assuming &amp;lt;math&amp;gt; m\,\!&amp;lt;/math&amp;gt; vectors are used for the encoding, the Hamming bound is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
m\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.8}}&lt;br /&gt;
For linear codes, &amp;lt;math&amp;gt; m=2^k,\,\!&amp;lt;/math&amp;gt; so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
2^k\sum_{i=0}^t C(n,i) \leq 2^n.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.9}}&lt;br /&gt;
Taking the logarithm, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
k \leq n - \log_2\left(\sum_{i=0}^t C(n,i)\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.10}}&lt;br /&gt;
For large &amp;lt;math&amp;gt; n, k \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, we can use [[Bibliography#LoPopescuSpiller|Stirling's formula]] to show that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\frac{k}{n} \leq 1 - H\left(\frac{t}{n}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|F.11}}&lt;br /&gt;
where &amp;lt;math&amp;gt; H(x) = -x\log x -(1-x)\log (1-x) \,\!&amp;lt;/math&amp;gt; and we have neglected an overall multiplicative constant that goes to 1 as  &amp;lt;math&amp;gt; n\rightarrow \infty. \,\!&amp;lt;/math&amp;gt;  (Again, see the article in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]] by Steane.)&lt;br /&gt;
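The bound of Eq. (F.9) is easy to check numerically.  For example, the [7,4,3] Hamming code with &amp;lt;math&amp;gt;t=1\,\!&amp;lt;/math&amp;gt; saturates it exactly (&amp;lt;math&amp;gt;2^4\times 8 = 2^7\,\!&amp;lt;/math&amp;gt;), while no [4,2] code correcting one error can exist.  A sketch:

```python
# Numerical check of the Hamming bound for a linear [n,k] code
# correcting t errors: 2^k * sum_{i=0}^{t} C(n,i) <= 2^n, Eq. (F.9).
from math import comb

def hamming_bound_holds(n, k, t):
    """True if an [n,k] code correcting t errors is not ruled out."""
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n
```

A code that saturates the bound with equality, like the [7,4,3] Hamming code, is called perfect: its disjoint subsets fill the whole space.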
&lt;br /&gt;
===More Definitions===&lt;br /&gt;
&lt;br /&gt;
====Definition 11: Dual Code====&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{C}\,\!&amp;lt;/math&amp;gt; be a code and let &amp;lt;math&amp;gt;v\,\!&amp;lt;/math&amp;gt; be a vector in the code space.  The '''dual code''', denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;, is the set of all vectors that have zero inner product with all &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  In other words, it is the set of all vectors &amp;lt;math&amp;gt;u\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;u\cdot v = 0\,\!&amp;lt;/math&amp;gt; for all  &amp;lt;math&amp;gt;v\in \mathcal{C}\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
For binary vectors, a vector can be orthogonal to itself.  Note that this is different from ordinary vectors in 3-d space.  &lt;br /&gt;
&lt;br /&gt;
The dual code is a useful entity in classical error correction and will be used in the construction of the quantum error correcting codes known as [[Chapter 7 - Quantum Error Correcting Codes#CSS codes|CSS codes]].&lt;br /&gt;
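A brute-force sketch of Definition 11 (function names ours): enumerate all length-&amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; vectors and keep those with zero inner product with every codeword.  It also illustrates the self-orthogonality noted above, e.g. &amp;lt;math&amp;gt;(1,1)\cdot(1,1)=0\,\!&amp;lt;/math&amp;gt; mod 2.

```python
# Dual code: all length-n binary vectors orthogonal (mod 2) to every
# codeword of the given code.
from itertools import product

def dot_mod2(u, v):
    """Mod-2 inner product of two binary vectors."""
    return sum(a * b for a, b in zip(u, v)) % 2

def dual(code, n):
    """Brute-force dual code; feasible only for small n."""
    return [u for u in product([0, 1], repeat=n)
            if all(dot_mod2(u, v) == 0 for v in code)]
```

For the [3,1] repetition code {000, 111}, the dual is the set of even-weight vectors of length 3.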
&lt;br /&gt;
===Final Comments===&lt;br /&gt;
&lt;br /&gt;
As can be seen from the Hamming bound, there is a limit to the rate of an error correcting code.  This does not indicate whether or not codes that saturate these bounds exist, but it does tell us that no code can violate them.  Encoding, decoding, error detection and correction are all difficult problems to solve in general.  One of the advantages of linear codes is that they provide a systematic method for identifying errors through the use of the parity check operation.  More generally, checking whether or not a bit string (vector) is in the code space would require a look-up table.  This would be much more time-consuming than using the parity check matrix; matrix multiplication is quite efficient relative to a look-up table.  &lt;br /&gt;
&lt;br /&gt;
Many of these ideas and definitions will be utilized in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]] on quantum error correction.  Some linear codes, including the Hamming code above, have quantum analogues, and many quantum error correcting codes are constructed from such classical codes.  In quantum computers, as will be discussed, error correction is necessary due to the delicacy of quantum information.  Such discussions will be taken up in [[Chapter 7 - Quantum Error Correcting Codes|Chapter 7]].&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_8_-_Decoherence-Free/Noiseless_Subsystems&amp;diff=1187</id>
		<title>Chapter 8 - Decoherence-Free/Noiseless Subsystems</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_8_-_Decoherence-Free/Noiseless_Subsystems&amp;diff=1187"/>
		<updated>2011-04-14T02:12:43Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Phase-Protected Two-Qubit Decoherence-Free Subspace */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
In the last chapter we saw that it is possible, at least in principle, to detect and correct errors in quantum systems.  In this chapter a different method for protecting against errors is explored.  This method encodes information into quantum states such that the information avoids errors.  The information is encoded in such a way that it is invariant under the errors produced by the system-bath Hamiltonian.  This requires the identification of a symmetry and then subsequently taking advantage of that symmetry.  It turns out that there is one main advantage to this method of encoding against errors which was not initially a motivation for their study.  In some important examples, the states can be used to enable universal quantum computing on a set of encoded states even when it is not possible on the set of physical states.  This will be explored in [[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Quantum Computing on a DNS|Section 8.6]] below, and further applications will be discussed in [[Chapter 10 - Hybrid Methods of Quantum Error Prevention|Chapter 10]].  &lt;br /&gt;
&lt;br /&gt;
The initial work to find error-avoiding codes involved what are called decoherence-free subspaces.  They were later generalized to subsystems.  These terms are defined below and reviews may be found in [[Bibliography#Whaley/Lidar:03|Whaley/Lidar]] and [[Bibliography#Byrd/Wu/Lidar:04|Byrd/Wu/Lidar]].  Although there are alternative descriptions of decoherence-free subspaces and the subsystem generalization in terms of master equations (see [[Bibliography#Whaley/Lidar:03|Whaley/Lidar]] and references therein), in this chapter the Hamiltonian description is used.  This aids the intuition behind these constructions and, as an introduction, is preferred here.&lt;br /&gt;
&lt;br /&gt;
==General Considerations==&lt;br /&gt;
&lt;br /&gt;
In what follows, a general quantum system will be assumed to be coupled non-trivially to a bath such that the entire system-bath Hamiltonian is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = H_S\otimes I_B + I_S\otimes H_B + H_I, \,\!&amp;lt;/math&amp;gt;|8.1}}&lt;br /&gt;
where &amp;lt;math&amp;gt;H_S \,\!&amp;lt;/math&amp;gt; acts only on the system, &amp;lt;math&amp;gt;H_B\,\!&amp;lt;/math&amp;gt; acts only on the bath, and &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; H_I = \sum_\alpha S_\alpha\otimes B_\alpha, \,\!&amp;lt;/math&amp;gt;|8.2}}&lt;br /&gt;
is the interaction Hamiltonian with the &amp;lt;math&amp;gt;S_\alpha\,\!&amp;lt;/math&amp;gt; acting only on the system and the &amp;lt;math&amp;gt;B_\alpha\,\!&amp;lt;/math&amp;gt; acting only on the bath.  The &amp;quot;error algebra&amp;quot; is denoted &amp;lt;math&amp;gt; \mathcal{A}\,\!&amp;lt;/math&amp;gt; and is the algebra generated by the set &amp;lt;math&amp;gt; \{H_S,S_\alpha\}\,\!&amp;lt;/math&amp;gt;.  (In the words of Paolo Zanardi, these are all the bad things that can happen to the system.)  The &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt; obviously cause errors because they describe the interaction between the system and the bath.  The reason the error algebra contains other terms is that when the system and bath together evolve unitarily, the exponential of the Hamiltonian &amp;lt;math&amp;gt; H\,\!&amp;lt;/math&amp;gt; produces products of the &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt; with each other and also with &amp;lt;math&amp;gt; H_S\,\!&amp;lt;/math&amp;gt;, which will be present in the unitary evolution.  In the case that &amp;lt;math&amp;gt; H_S\,\!&amp;lt;/math&amp;gt; is identically zero, or can be removed by a change of basis (to a rotating frame), the problem simplifies: only the algebra of the &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt;, or of the modified &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt; (the &amp;lt;math&amp;gt; S_\alpha\,\!&amp;lt;/math&amp;gt; in the rotating frame), respectively, need be considered.  &lt;br /&gt;
&lt;br /&gt;
At this point, the objective is to find a set of states which will be immune to the errors which are present.  Such states are identified using the error algebra.  The way this is done is to put the algebra in a form which is block-diagonal.  This type of algebra is said to be &amp;quot;reducible,&amp;quot; which means that one may always block-diagonalize it.  (See [[Appendix D - Group Theory|Appendix D]].)  Suppose that each element of the algebra can be put into the same block-diagonal form using a particular unitary transformation &amp;lt;math&amp;gt; U_{dns}\,\!&amp;lt;/math&amp;gt;.  Then for any element of the error algebra, &amp;lt;math&amp;gt;A_\alpha\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; U_{dns} A_\alpha U_{dns}^\dagger \,\!&amp;lt;/math&amp;gt; is block-diagonal.  If the information is stored in states which are acted upon by these blocks, and only these blocks, then the information is protected because it stays in the states defined by these blocks.  If the blocks are &amp;lt;math&amp;gt; 1\times 1 \,\!&amp;lt;/math&amp;gt;, i.e., just numbers, then the states which make up such a system are called decoherence-free &amp;quot;subspaces&amp;quot;.  If the blocks are larger, then they are called decoherence-free subsystems, or noiseless subsystems.  &lt;br /&gt;
&lt;br /&gt;
In the next few sections, the examples will illustrate this construction and how the states are protected.  &lt;br /&gt;
&lt;br /&gt;
==DNS Examples==&lt;br /&gt;
&lt;br /&gt;
The formal description for a DNS is quite useful for finding a suitable DNS given a particular set of errors.  However, several examples are known which not only illustrate how one would use the general methods, but also provide examples of importance to experimental physics for reasons which will become clear later in this chapter.  Familiarity with the addition of angular momenta will help in understanding the examples and also the general formalism.  References are [[Bibliography#Griffiths:qmbook|Griffiths' book]] (introductory), [[Bibliography#Bohmqm|Arno Bohm's book]] (more advanced), and [[Appendix D - Group Theory|Appendix D]], which provides a basic introduction to group theory.  However, the objective here is to provide the ideas and examples that will aid in understanding the key points of the theory.  &lt;br /&gt;
&lt;br /&gt;
===Phase-Protected Two-Qubit Decoherence-Free Subspace ===&lt;br /&gt;
&lt;br /&gt;
One of the simplest examples of a decoherence-free subspace is a method for using two physical qubits to encode one logical qubit in such a way that the logical qubit is protected against phase errors which act on both physical qubits in the same way.  This is called a ''collective phase error''.  &lt;br /&gt;
&lt;br /&gt;
Let us begin by assuming there is no system Hamiltonian and that the two physical qubits in the system are acted upon by the Hamiltonian&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H_{cpe} = Z^{(1)}\otimes B + Z^{(2)}\otimes B = (Z^{(1)} + Z^{(2)})\otimes B, \,\!&amp;lt;/math&amp;gt;|8.3}}&lt;br /&gt;
where &amp;lt;math&amp;gt;Z^{(i)} \equiv \sigma_z^{(i)} \,\!&amp;lt;/math&amp;gt; is a phase operator which acts on the &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;th qubit.  &lt;br /&gt;
This Hamiltonian acts the same on each of the two qubits to produce the same phase error on each.  If we now choose our logical states to be &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert 0_L\right\rangle = \left\vert 01 \right\rangle, \;\;\; \left\vert 1_L\right\rangle = \left\vert 10 \right\rangle, \,\!&amp;lt;/math&amp;gt;|8.4}}&lt;br /&gt;
then our logical states will be given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \psi_L \right\rangle = a \left\vert 0_L \right\rangle + b\left\vert 1_L\right\rangle. \,\!&amp;lt;/math&amp;gt;|8.5}}&lt;br /&gt;
If we now suppose that the system and bath are initially uncoupled, &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\Psi_{SB}\right\rangle = \left\vert \psi_L \right\rangle \otimes \left\vert \psi_B\right\rangle, \,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
then acting with the Hamiltonian &amp;lt;math&amp;gt; H_{cpe} \,\!&amp;lt;/math&amp;gt; on this state gives&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H_{cpe}\left\vert\Psi_{SB}\right\rangle = (0)\left\vert \psi_L \right\rangle \otimes B\left\vert \psi_B\right\rangle, \,\!&amp;lt;/math&amp;gt; |8.6}}&lt;br /&gt;
which gives a decoupled system and bath.  This is clear since the exponential of this Hamiltonian gives the unitary evolution of the state.  Thus&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\exp(-i H_{cpe}t/\hbar)\left\vert\Psi_{SB}\right\rangle = (\left\vert \psi_L \right\rangle \otimes U_B\left\vert \psi_B\right\rangle), \,\!&amp;lt;/math&amp;gt; |8.7}}&lt;br /&gt;
where &amp;lt;math&amp;gt; U_B=\exp(-iBt/\hbar) \,\!&amp;lt;/math&amp;gt;.  This Hamiltonian acts as the identity on the logical states of the system.  To be even more explicit, one can trace out the bath degrees of freedom to find that the state of the system remains unchanged by this Hamiltonian.  Thus the system has been encoded in such a way that this type of error does not adversely affect the state of the system.  This allows for perfect storage.  &lt;br /&gt;
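This invariance can be verified numerically.  The following sketch (plain Python, basis ordered &amp;lt;math&amp;gt;|00\rangle, |01\rangle, |10\rangle, |11\rangle\,\!&amp;lt;/math&amp;gt;) builds the collective phase operator &amp;lt;math&amp;gt;Z^{(1)}+Z^{(2)}\,\!&amp;lt;/math&amp;gt; of Eq. (8.3) and checks that it annihilates both logical states of Eq. (8.4):

```python
# Check that Z^(1) + Z^(2) annihilates |0_L> = |01> and |1_L> = |10>.
# Basis ordering: |00>, |01>, |10>, |11>.

def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def mat_vec(M, v):
    """Matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]

S_z = mat_add(kron(Z, I2), kron(I2, Z))   # Z^(1) + Z^(2)

ket01 = [0, 1, 0, 0]   # |0_L> = |01>
ket10 = [0, 0, 1, 0]   # |1_L> = |10>
```

Since both basis states of the logical qubit are annihilated, so is any superposition &amp;lt;math&amp;gt;a|0_L\rangle + b|1_L\rangle\,\!&amp;lt;/math&amp;gt;, while e.g. &amp;lt;math&amp;gt;|00\rangle\,\!&amp;lt;/math&amp;gt; is not.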
&lt;br /&gt;
It is perhaps worth emphasizing that this is a simple model of a particular type of decoherence which would ordinarily lead to collective phase errors on the system states.  Such noise has also been called &amp;quot;weak decoherence.&amp;quot;  However, because the information is encoded, it is protected against these types of errors.  In the language of quantum error correcting codes, this is an infinite distance code, since the noise does not lead to an error on the encoded information no matter how long or to what extent it acts.  In the next subsection we will see how this can be extended to collective errors acting on a number of qubits in an arbitrary way, not just collective phase errors.&lt;br /&gt;
&lt;br /&gt;
===Four-Qubit Decoherence-Free Subspace===&lt;br /&gt;
&lt;br /&gt;
Consider a Hamiltonian which causes arbitrary collective errors on a collection of four qubits, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}H_{ce4} &amp;amp;= a_1(X^{(1)} + X^{(2)} +X^{(3)}+X^{(4)})\otimes B \\&lt;br /&gt;
  &amp;amp;\;\; +  a_2(Y^{(1)} + Y^{(2)} +Y^{(3)}+Y^{(4)})\otimes B \\&lt;br /&gt;
  &amp;amp;\;\; +  a_3(Z^{(1)} + Z^{(2)} +Z^{(3)}+Z^{(4)})\otimes B,  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|8.8}}&lt;br /&gt;
where &amp;lt;math&amp;gt;  a_1, a_2, a_3\,\!&amp;lt;/math&amp;gt; are arbitrary constants.  The standard procedure would be to find irreducible representations of the algebra of errors which is generated by the three collective errors&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
  S_x &amp;amp;= X^{(1)} + X^{(2)} + X^{(3)} + X^{(4)} \\&lt;br /&gt;
  S_y &amp;amp;= Y^{(1)} + Y^{(2)} + Y^{(3)} + Y^{(4)} \\&lt;br /&gt;
  S_z &amp;amp;= Z^{(1)} + Z^{(2)} + Z^{(3)} + Z^{(4)}.  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|8.9}}&lt;br /&gt;
Here again we are supposing that there is no system Hamiltonian, &amp;lt;math&amp;gt;H_S=0\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
The standard procedure would be to block-diagonalize the set of &amp;lt;math&amp;gt;S_\alpha \;\!&amp;lt;/math&amp;gt;, which can be done.  However, there are at least two other methods for identifying the decoherence-free subspace of states.  One is to use the condition found in the last subsection: the &amp;lt;math&amp;gt;S_\alpha \;\!&amp;lt;/math&amp;gt; acting on the states give zero, thus ensuring their invariance.  This can be done by noticing that collective errors are angular momentum operators and trying to identify states which give zero when acted upon by these operators.  A little thought would lead one to the conclusion that singlet states do not transform under the angular momentum operators, since both their total angular momentum and its z-component are zero.  Another way is to know from previous experience that the addition of angular momenta of four spin-1/2 particles will produce two singlet states.  More generally, for collective errors, one may simply note that degeneracies in the representations may be used for the storage and manipulation of information.  This allows one to use other techniques, such as Young tableaux, to identify such degeneracies.  Still, this is a bit unsatisfying to those who are not familiar with any of these methods.  For that reason, and also for generalizations, there exists an algorithm for the identification of these systems which will be discussed later.  For now, it will be shown that it is possible to construct states which are immune to this type of noise.  &lt;br /&gt;
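The singlet construction can be checked numerically as well.  Assuming the tensor-product ordering qubit 1 through qubit 4, the sketch below builds the collective operators of Eq. (8.9) and verifies that the (unnormalized) product of a singlet on qubits 1,2 with a singlet on qubits 3,4 is annihilated by all three:

```python
# Check that (|01>-|10>) x (|01>-|10>) on four qubits is annihilated by
# the collective errors S_x, S_y, S_z of Eq. (8.9).

def kron_vec(u, v):
    """Kronecker product of two vectors."""
    return [a * b for a in u for b in v]

def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def collective(P):
    """P^(1) + P^(2) + P^(3) + P^(4) on four qubits."""
    total = None
    for i in range(4):
        ops = [P if j == i else I2 for j in range(4)]
        M = ops[0]
        for op in ops[1:]:
            M = kron(M, op)
        total = M if total is None else [[a + b for a, b in zip(ra, rb)]
                                         for ra, rb in zip(total, M)]
    return total

singlet = [0, 1, -1, 0]             # |01> - |10>, unnormalized
state = kron_vec(singlet, singlet)  # singlet on (1,2) times singlet on (3,4)
```

The same check applied to either singlet alone shows that the two-qubit singlet is already annihilated by the two-qubit collective operators, which is why the four-qubit product state works.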
&lt;br /&gt;
As stated, there are two singlet states in the tensor product of four two-state systems.  These can be obtained by using the Wigner-Clebsch-Gordan coefficients for the addition of angular momenta.  (There are many good references for this in standard texts, including [[Bibliography#Griffiths:qmbook|Griffiths' book]] (introductory) and [[Bibliography#Bohmqm|Arno Bohm's book]] (more advanced).)  The objective is to find linear combinations of the states of the two-state systems which give singlet states.  In standard spin notation, the basis states of the four two-state systems are the following:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
&amp;amp;\left\vert 1/2,1/2,1/2,1/2\right\rangle,\; \left\vert 1/2,1/2,1/2,-1/2 \right\rangle,\;\left\vert 1/2,1/2,-1/2,1/2\right\rangle,\; \left\vert 1/2,1/2,-1/2,-1/2 \right\rangle,\\ &lt;br /&gt;
&amp;amp;\left\vert 1/2,-1/2,1/2,1/2 \right\rangle,\; \left\vert 1/2,-1/2,1/2,-1/2 \right\rangle,\; \left\vert 1/2,-1/2,-1/2,1/2  \right\rangle,\; \left\vert 1/2,-1/2,-1/2,-1/2 \right\rangle,\\&lt;br /&gt;
&amp;amp;\left\vert -1/2,1/2,1/2,1/2\right\rangle,\; \left\vert -1/2,1/2,1/2,-1/2 \right\rangle,\;\left\vert -1/2,1/2,-1/2,1/2\right\rangle,\; \left\vert -1/2,1/2,-1/2,-1/2 \right\rangle,\\ &lt;br /&gt;
&amp;amp;\left\vert -1/2,-1/2,1/2,1/2 \right\rangle,\; \left\vert -1/2,-1/2,1/2,-1/2 \right\rangle,\; \left\vert -1/2,-1/2,-1/2,1/2  \right\rangle,\; \left\vert -1/2,-1/2,-1/2,-1/2 \right\rangle,&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|8.10}}&lt;br /&gt;
which correspond to the computational basis states &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
&amp;amp;\left\vert 0,0,0,0 \right\rangle,\; \left\vert 0,0,0,1 \right\rangle,\;\left\vert 0,0,1,0\right\rangle,\; \left\vert 0,0,1,1 \right\rangle, \\&lt;br /&gt;
&amp;amp;\left\vert 0,1,0,0 \right\rangle,\; \left\vert 0,1,0,1 \right\rangle,\; \left\vert 0,1,1,0  \right\rangle,\; \left\vert 0,1,1,1 \right\rangle, \\&lt;br /&gt;
&amp;amp;\left\vert 1,0,0,0 \right\rangle,\; \left\vert 1,0,0,1 \right\rangle,\; \left\vert 1,0,1,0  \right\rangle,\; \left\vert 1,0,1,1 \right\rangle,\\&lt;br /&gt;
&amp;amp;\left\vert 1,1,0,0 \right\rangle,\; \left\vert 1,1,0,1 \right\rangle,\; \left\vert 1,1,1,0  \right\rangle,\; \left\vert 1,1,1,1 \right\rangle.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|8.11}}&lt;br /&gt;
&lt;br /&gt;
In the language of angular momentum, the objective is to change basis to obtain states of a given total angular momentum given by the standard angular momentum addition rules.  For our purposes here, the transformation which takes these states to the new basis will be denoted &amp;lt;math&amp;gt; U_{dns} \,\!&amp;lt;/math&amp;gt; and is a unitary transformation corresponding to the collection of Wigner-Clebsch-Gordan coefficients.  This is the matrix which will block-diagonalize all of the collective operators &amp;lt;math&amp;gt;S_\alpha\,\!&amp;lt;/math&amp;gt;.  Explicitly, the singlet states are &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert s_0\right\rangle &amp;amp;= \frac{1}{2}(\left\vert 0101\right\rangle +\left\vert 1010 \right\rangle -\left\vert 1001\right\rangle -\left\vert 0110\right\rangle), \\&lt;br /&gt;
\left\vert s_1\right\rangle &amp;amp;= \frac{1}{\sqrt{12}}(2\left\vert 0011\right\rangle + 2\left\vert 1100\right\rangle -\left\vert 0110\right\rangle -\left\vert 1001\right\rangle-\left\vert 0101\right\rangle -\left\vert 1010\right\rangle). &lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|8.12}}&lt;br /&gt;
One can show by explicit computation that&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; S_\alpha \left\vert s_i\right\rangle = 0,\,\!&amp;lt;/math&amp;gt;|8.13}}&lt;br /&gt;
for &amp;lt;math&amp;gt; \alpha=x,y,z \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; i=0,1\,\!&amp;lt;/math&amp;gt;.  The two states &amp;lt;math&amp;gt;\left\vert s_0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert s_1\right\rangle \,\!&amp;lt;/math&amp;gt; can now be used as logical zero and logical one for a logical qubit state which is immune to any collective error on the four physical qubits.&lt;br /&gt;
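The explicit computation in Eq.(8.13) can also be carried out numerically.  The following sketch (in Python with NumPy; the code and its helper names are illustrative and not part of these notes) builds the collective operators of Eq.(8.9) and the singlets of Eq.(8.12) and checks that each singlet is annihilated:&lt;br /&gt;

```python
import numpy as np

# single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def collective(op, n=4):
    """S_alpha of Eq. (8.9): the sum of `op` acting on each of n qubits."""
    total = np.zeros((2**n, 2**n), dtype=complex)
    for k in range(n):
        term = np.array([[1.0 + 0j]])
        for j in range(n):
            term = np.kron(term, op if j == k else I2)
        total += term
    return total

def ket(bits):
    """Computational basis state, e.g. ket('0101') for |0101>."""
    v = np.zeros(2**len(bits), dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

# the two singlet states of Eq. (8.12)
s0 = 0.5 * (ket('0101') + ket('1010') - ket('1001') - ket('0110'))
s1 = (2*ket('0011') + 2*ket('1100') - ket('0110') - ket('1001')
      - ket('0101') - ket('1010')) / np.sqrt(12)

for S in (collective(X), collective(Y), collective(Z)):
    for s in (s0, s1):
        assert np.allclose(S @ s, 0)   # Eq. (8.13)
```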
&lt;br /&gt;
===Three-Qubit Noiseless Subsystem===&lt;br /&gt;
&lt;br /&gt;
As stated in [[Chapter 8 - Decoherence-Free/Noiseless Subsystems#General Considerations|Section 8.2]], ''subsystems'' have a more complex structure than subspaces: in a subspace each logical basis state is represented by a single state, whereas in a subsystem it is represented by a set of states.  The simplest DNS which protects against collective errors comprises three qubits.  In this case there is a set of two states which represents the logical zero and a set of two states which represents the logical one, as will be shown below.  &lt;br /&gt;
&lt;br /&gt;
The error operators &amp;lt;math&amp;gt;S_\alpha \,\!&amp;lt;/math&amp;gt; have the same form as  [[#eq8.9|Eq.(8.9)]], but with only three particles instead of four.  In the language of the addition of angular momentum, the total angular momentum states consist of two two-dimensional subsystems and one four-dimensional subsystem.  The Wigner-Clebsch-Gordan coefficients can again be used as the DNS transformation to put the states into the DNS basis.  After block-diagonalization the errors take the form &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left(\begin{array}{cc}&lt;br /&gt;
  A  &amp;amp;  0  \\&lt;br /&gt;
  0 &amp;amp;  B  &lt;br /&gt;
\end{array}\right),\,\!&amp;lt;/math&amp;gt;|8.14}}&lt;br /&gt;
where the zeroes represent  &amp;lt;math&amp;gt; 4\times 4\;\!&amp;lt;/math&amp;gt; matrices of zeroes, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cccc}&lt;br /&gt;
-a_3     &amp;amp; a_1+ia_2 &amp;amp;   0      &amp;amp;   0        \\&lt;br /&gt;
a_1-ia_2 &amp;amp;   a_3    &amp;amp;   0      &amp;amp;   0        \\&lt;br /&gt;
    0    &amp;amp;    0     &amp;amp;   a_3    &amp;amp;  a_1+ia_2  \\&lt;br /&gt;
    0    &amp;amp;    0     &amp;amp; a_1-ia_2 &amp;amp;   -a_3  &lt;br /&gt;
\end{array}\right),\,\!&amp;lt;/math&amp;gt;|8.15}}&lt;br /&gt;
and the form of &amp;lt;math&amp;gt; B\;\!&amp;lt;/math&amp;gt; is not relevant since it acts on the subspace orthogonal to the code space.  The DNS transformation takes the vector of computational basis states &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left(\begin{array}{c}&lt;br /&gt;
 000 \\&lt;br /&gt;
 001 \\&lt;br /&gt;
 010 \\&lt;br /&gt;
 011 \\&lt;br /&gt;
 100 \\&lt;br /&gt;
 101 \\&lt;br /&gt;
 110 \\&lt;br /&gt;
 111 &lt;br /&gt;
\end{array}\right),\,\!&amp;lt;/math&amp;gt;|8.16}}&lt;br /&gt;
to the new basis states &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left(\begin{array}{c}&lt;br /&gt;
\frac{1}{\sqrt{2}}(\left\vert 010\right\rangle -\left\vert 100 \right\rangle) \\&lt;br /&gt;
\frac{1}{\sqrt{2}}(\left\vert 011\right\rangle - \left\vert 101\right\rangle)   \\&lt;br /&gt;
\frac{1}{\sqrt{6}}(2\left\vert 001\right\rangle -\left\vert 010 \right\rangle -\left\vert 100\right\rangle) \\&lt;br /&gt;
\frac{1}{\sqrt{6}}(-2\left\vert 110\right\rangle + \left\vert 011\right\rangle +\left\vert 101\right\rangle) \\&lt;br /&gt;
\left\vert 000 \right\rangle \\&lt;br /&gt;
\frac{1}{\sqrt{3}}(\left\vert 100\right\rangle + \left\vert 010 \right\rangle + \left\vert 001\right\rangle) \\&lt;br /&gt;
\frac{1}{\sqrt{3}}(\left\vert 011\right\rangle + \left\vert 101 \right\rangle + \left\vert 110\right\rangle)  \\&lt;br /&gt;
\left\vert 111\right\rangle &lt;br /&gt;
\end{array}\right).\,\!&amp;lt;/math&amp;gt;|8.17}}&lt;br /&gt;
Under collective noise, the two states representing the logical zero (the first two entries of the column) can mix with each other, as can the two states representing the logical one (the next two entries).  However, the two sets, that is the logical zero states and the logical one states, do not mix with each other.  The last four states in the column span the subspace orthogonal to the code, denoted &amp;lt;math&amp;gt;\mathcal{C}^\perp\,\!&amp;lt;/math&amp;gt;.  In this case the code states are not annihilated by the error operators, as they were in the first two examples above.  However, the information can still be reliably stored in the subsystem.  &lt;br /&gt;
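The claim that collective errors mix states only within each logical set can be checked numerically.  A sketch (Python with NumPy; illustrative, not part of these notes) computes the matrix elements of the collective operators between the logical-zero pair and the logical-one pair of Eq.(8.17) and confirms that they all vanish:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def collective(op, n=3):
    """Collective error operator on n qubits (three-qubit version of Eq. 8.9)."""
    total = np.zeros((2**n, 2**n), dtype=complex)
    for k in range(n):
        term = np.array([[1.0 + 0j]])
        for j in range(n):
            term = np.kron(term, op if j == k else I2)
        total += term
    return total

def ket(bits):
    v = np.zeros(2**len(bits), dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

# first four entries of Eq. (8.17): the logical-zero and logical-one pairs
zero_pair = [(ket('010') - ket('100')) / np.sqrt(2),
             (ket('011') - ket('101')) / np.sqrt(2)]
one_pair = [(2*ket('001') - ket('010') - ket('100')) / np.sqrt(6),
            (-2*ket('110') + ket('011') + ket('101')) / np.sqrt(6)]

for S in (collective(X), collective(Y), collective(Z)):
    for u in zero_pair:
        for v in one_pair:
            # collective noise never connects the two logical sets
            assert abs(np.vdot(v, S @ u)) < 1e-12
```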
&lt;br /&gt;
The next question which one might naturally ask is, how can one perform quantum computing on these states?&lt;br /&gt;
&lt;br /&gt;
==Quantum Computing on a DNS==&lt;br /&gt;
&lt;br /&gt;
It is perhaps surprising that one of the most intriguing and promising uses of a DNS encoding is for universal quantum computing rather than for avoiding noise.  In some important cases, executing a complete set of universal quantum gates on physical qubits is impractical.  However, a DNS encoding can be used to enable universal quantum computing on the logical subsystem even when it is impractical, or even impossible, on the physical qubits.  In this section a general criterion and several examples are given to show exactly how this works. &lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is important to note that another use for a DNS is as part of a comprehensive treatment of noise which involves other error prevention methods as well.  A DNS is rarely used to eliminate noise completely, since noise is seldom ''only'' collective, and designing a DNS for non-collective noise is difficult.  Thus a DNS may be used to reduce noise, and/or for encoded universality, while other noises are treated with other error prevention methods.  (This is discussed more thoroughly in [[Chapter 10 - Hybrid Methods of Quantum Error Prevention|Chapter 10]].)  When performing quantum computing on a DNS, it is important to preserve the subsystem structure.  Gating operations which preserve the subsystem structure do not take the states out of the protected subsystem during computation; such operations are called ''compatible transformations''.  Before presenting explicit examples, a general criterion for constructing gating operations on a DNS is given.  (This result is from [[Bibliography#Kempeetal:01|Kempe et al.]])&lt;br /&gt;
&lt;br /&gt;
Let us consider an encoded state &amp;lt;math&amp;gt; \Psi\,\!&amp;lt;/math&amp;gt;.  If this state is in the DNS code space, then a stabilizer of the code, &amp;lt;math&amp;gt; \mathcal{S}, \,\!&amp;lt;/math&amp;gt; can be defined for the subspace.  This is the set of operations which leave a state unchanged.  In the case of an encoded subsystem, the elements which leave the state unchanged are generated by exactly the noise operators, since the encoded states are chosen to avoid precisely that noise.  Let a basis for the noise operators be &amp;lt;math&amp;gt; \{\Lambda_i\}\,\!&amp;lt;/math&amp;gt;.  Then a stabilizer element &amp;lt;math&amp;gt; S\in\mathcal{S} \,\!&amp;lt;/math&amp;gt; can be parametrized as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
S=\exp\left(\sum_i a_i \Lambda_i\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.18}}&lt;br /&gt;
for some parameters &amp;lt;math&amp;gt; a_i \,\!&amp;lt;/math&amp;gt;.  Since the stabilizer elements fix the code words, then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
S\left\vert \Psi\right\rangle= \left\vert \Psi\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.19}}&lt;br /&gt;
For a unitary transformation to be compatible with the code, the unitary must not take the state out of the code space, which implies, for any  &amp;lt;math&amp;gt; S_1,S_2 \in \mathcal{S}\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
US_1 \left\vert \Psi\right\rangle = S_2U\left\vert \Psi\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.20}}&lt;br /&gt;
so that acting before or after the transformation with any element of the stabilizer should give another state in the code space.  For this to be true for &amp;lt;math&amp;gt; U = \exp(-iHt) \,\!&amp;lt;/math&amp;gt; at all times &amp;lt;math&amp;gt; t \,\!&amp;lt;/math&amp;gt;, a sufficient, but not necessary, condition is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
[H,S] = 0, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.21}}&lt;br /&gt;
for all &amp;lt;math&amp;gt; S\in\mathcal{S} \,\!&amp;lt;/math&amp;gt;.  Though this is only a sufficient condition and therefore not the most general case, it is surprisingly useful as will be shown in the following examples.&lt;br /&gt;
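As a concrete numerical illustration of the criterion in Eq.(8.21) (a sketch in Python with NumPy; the particular noise and Hamiltonian anticipate the two-qubit example of the next section, and the code itself is not part of these notes): for collective dephasing generated by &amp;lt;math&amp;gt;Z_1+Z_2\,\!&amp;lt;/math&amp;gt;, the Hamiltonian &amp;lt;math&amp;gt;X_1X_2+Y_1Y_2\,\!&amp;lt;/math&amp;gt; commutes with the noise generator, so evolving under it never leaves the code space spanned by &amp;lt;math&amp;gt;\left\vert 01\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 10\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.kron(X, X) + np.kron(Y, Y)      # a compatible Hamiltonian
Lam = np.kron(Z, I2) + np.kron(I2, Z)  # collective dephasing generator

# Eq. (8.21): [H, Lambda] = 0
assert np.allclose(H @ Lam - Lam @ H, 0)

# hence U(t) = exp(-iHt) keeps states inside span{|01>, |10>} for every t
w, V = np.linalg.eigh(H)               # H is Hermitian, so diagonalize it
t = 0.37                               # an arbitrary time
U = V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T
psi_t = U @ np.array([0, 1, 0, 0], dtype=complex)   # evolve |01>

# no amplitude ever leaks onto |00> or |11>
assert abs(psi_t[0]) < 1e-12 and abs(psi_t[3]) < 1e-12
```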
&lt;br /&gt;
==QC Examples==&lt;br /&gt;
&lt;br /&gt;
In quantum dot quantum computing proposals, single-qubit rotations are much slower than two-qubit operations which use the Heisenberg exchange interaction.  Therefore any quantum computing scheme which uses only exchange, forgoing the much slower rotations of single physical qubits, has a great advantage.  Several examples which do just this are given here.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Phase-Protected DFS===&lt;br /&gt;
&lt;br /&gt;
Recall the phase-protected DFS which was discussed in [[#Phase-Protected Two-Qubit Decoherence-Free Subspace|Section 3.1]], &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 0_L\right\rangle = \left\vert 01 \right\rangle, \;\;\; \left\vert 1_L\right\rangle = \left\vert 10 \right\rangle. \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Logical operations on this subspace can be performed using &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; H_{xy}^{1,2} = X_1X_2 +Y_1Y_2, \,\!&amp;lt;/math&amp;gt;|8.22}}&lt;br /&gt;
where  &amp;lt;math&amp;gt; X_1 \,\!&amp;lt;/math&amp;gt; is the Pauli x-operation on the first qubit and similarly for the other operators.  The logical &amp;lt;math&amp;gt; X \,\!&amp;lt;/math&amp;gt; operation, denoted &amp;lt;math&amp;gt; \bar{X} \,\!&amp;lt;/math&amp;gt;, is given, up to an overall phase, by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \bar{X} = \exp(-i \pi H_{xy}/4). \,\!&amp;lt;/math&amp;gt;|8.23}}&lt;br /&gt;
A logical &amp;lt;math&amp;gt;Z \,\!&amp;lt;/math&amp;gt; is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \bar{Z} = Z_1-Z_2, \,\!&amp;lt;/math&amp;gt;|8.24}}&lt;br /&gt;
which is a Zeeman splitting, due to an external magnetic field, that can be left always on.  Given &amp;lt;math&amp;gt; \bar{X} \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \bar{Z}, \,\!&amp;lt;/math&amp;gt; any single-qubit rotation may be implemented using the Euler angle parametrization.  A logical two-qubit entangling gate, &amp;lt;math&amp;gt; \bar{ZZ}, \,\!&amp;lt;/math&amp;gt; can be implemented using a coupling between neighboring physical qubits.  Suppose physical qubits 1 and 2 encode logical qubit 1 and physical qubits 3 and 4 encode logical qubit 2; then  &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \bar{ZZ} = Z_2Z_3. \,\!&amp;lt;/math&amp;gt;|8.25}}&lt;br /&gt;
The corresponding diagram is shown in Fig. 8.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[File:2qdfs.jpg]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Figure 8.1: ''Diagram showing the two-qubit DFS encoded states.''  Each circle contains one physical qubit.  Each solid ellipse shows one logical qubit comprised of two physical qubits.  The dotted ellipse shows the action of a logical two-qubit gate which acts between neighboring physical qubits and thus simultaneously between the two logical qubits.&lt;br /&gt;
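These logical operations can be checked numerically.  A sketch (Python with NumPy; illustrative, not part of these notes) verifies that exponentiating &amp;lt;math&amp;gt;H_{xy}\,\!&amp;lt;/math&amp;gt; for a time &amp;lt;math&amp;gt;\pi/4\,\!&amp;lt;/math&amp;gt; implements the logical X up to an overall phase, and that &amp;lt;math&amp;gt;Z_1-Z_2\,\!&amp;lt;/math&amp;gt; acts as the logical Z (up to normalization) on the code states:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# code states |0_L> = |01>, |1_L> = |10>
zero_L = np.array([0, 1, 0, 0], dtype=complex)
one_L = np.array([0, 0, 1, 0], dtype=complex)

# H_xy of Eq. (8.22), exponentiated for a time pi/4
H = np.kron(X, X) + np.kron(Y, Y)
w, V = np.linalg.eigh(H)
Xbar = V @ np.diag(np.exp(-1j * (np.pi/4) * w)) @ V.conj().T

# Xbar swaps the logical states, up to an overall phase
assert np.isclose(abs(np.vdot(one_L, Xbar @ zero_L)), 1)
assert np.isclose(abs(np.vdot(zero_L, Xbar @ one_L)), 1)

# Eq. (8.24): Z1 - Z2 gives +/-2 on the logical states (logical Z up to scale)
Zbar = np.kron(Z, I2) - np.kron(I2, Z)
assert np.allclose(Zbar @ zero_L, 2 * zero_L)
assert np.allclose(Zbar @ one_L, -2 * one_L)
```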
&lt;br /&gt;
===Four-Qubit DFS===&lt;br /&gt;
&lt;br /&gt;
The Heisenberg exchange interaction Hamiltonian between two qubits labelled &amp;lt;math&amp;gt; i \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; j \,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
H_{ex}^{i,j} =  X_iX_j + Y_iY_j + Z_iZ_j,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.26}}&lt;br /&gt;
where, again, &amp;lt;math&amp;gt; X_i \,\!&amp;lt;/math&amp;gt; is the Pauli x-operation on the &amp;lt;math&amp;gt; i\,\!&amp;lt;/math&amp;gt;th qubit and similarly for the other operators.  Exponentiating the exchange interaction between qubits &amp;lt;math&amp;gt; i \,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; j \,\!&amp;lt;/math&amp;gt; for a time &amp;lt;math&amp;gt; \pi/4\,\!&amp;lt;/math&amp;gt; gives, up to an overall phase, the swap operator  &amp;lt;math&amp;gt; E_{ij} = \tfrac{1}{2}(I + X_iX_j + Y_iY_j + Z_iZ_j),\,\!&amp;lt;/math&amp;gt; which exchanges the states of the two qubits: &amp;lt;math&amp;gt;E_{ij}\left\vert \psi \right\rangle\left\vert \phi\right\rangle =  \left\vert \phi \right\rangle\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;.  Logical operations can then be given in terms of this operator.  The logical &amp;lt;math&amp;gt; X \,\!&amp;lt;/math&amp;gt; operator is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\bar{X}=\frac{1}{\sqrt{3}}(E_{23}-E_{13})&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.27}}&lt;br /&gt;
and the logical &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\bar{Z}=-E_{12}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.28}}&lt;br /&gt;
From these, the logical &amp;lt;math&amp;gt; Y \,\!&amp;lt;/math&amp;gt; operation can be obtained by commutation or using an Euler angle parametrization.&lt;br /&gt;
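The properties of the exchange exponential can be verified directly.  A sketch (Python with NumPy; illustrative, not part of these notes) checks that &amp;lt;math&amp;gt;\tfrac{1}{2}(I + X_iX_j + Y_iY_j + Z_iZ_j)\,\!&amp;lt;/math&amp;gt; is the swap operator and that &amp;lt;math&amp;gt;\exp(-i(\pi/4)H_{ex})\,\!&amp;lt;/math&amp;gt; equals it up to an overall phase:&lt;br /&gt;

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H_ex = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)   # Eq. (8.26)
E = 0.5 * (np.eye(4) + H_ex)                            # candidate swap operator

# E swaps any product state |psi>|phi> -> |phi>|psi>
psi = np.array([0.6, 0.8j])
phi = np.array([1, 1]) / np.sqrt(2)
assert np.allclose(E @ np.kron(psi, phi), np.kron(phi, psi))

# exp(-i (pi/4) H_ex) equals E up to the global phase exp(-i pi/4)
w, V = np.linalg.eigh(H_ex)
U = V @ np.diag(np.exp(-1j * (np.pi/4) * w)) @ V.conj().T
assert np.allclose(np.exp(1j * np.pi/4) * U, E)
```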
&lt;br /&gt;
===Three-Qubit NS===&lt;br /&gt;
&lt;br /&gt;
Again, using &amp;lt;math&amp;gt; E_{ij},\,\!&amp;lt;/math&amp;gt; the logical &amp;lt;math&amp;gt; X \,\!&amp;lt;/math&amp;gt; operation can be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\bar{X}=\frac{1}{\sqrt{3}}(E_{23}-E_{13}),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.29}}&lt;br /&gt;
and the logical &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; operation can be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\bar{Z}=\frac{1}{3}(E_{13}+E_{23}-2E_{12}).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|8.30}}&lt;br /&gt;
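Since the &amp;lt;math&amp;gt; E_{ij}\,\!&amp;lt;/math&amp;gt; are built from swaps, they commute with every collective error operator, so the logical operations above satisfy the compatibility condition of Eq.(8.21).  A sketch (Python with NumPy; illustrative, not part of these notes) confirms this:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def collective(op, n=3):
    """Collective error operator on n qubits."""
    total = np.zeros((2**n, 2**n), dtype=complex)
    for k in range(n):
        term = np.array([[1.0 + 0j]])
        for j in range(n):
            term = np.kron(term, op if j == k else I2)
        total += term
    return total

def swap(i, j, n=3):
    """Swap operator E_ij acting on qubits i and j (1-indexed from the left)."""
    dim = 2**n
    M = np.zeros((dim, dim), dtype=complex)
    for k in range(dim):
        bits = list(format(k, '0%db' % n))
        bits[i-1], bits[j-1] = bits[j-1], bits[i-1]
        M[int(''.join(bits), 2), k] = 1.0
    return M

# Eqs. (8.29) and (8.30)
Xbar = (swap(2, 3) - swap(1, 3)) / np.sqrt(3)
Zbar = (swap(1, 3) + swap(2, 3) - 2 * swap(1, 2)) / 3

for L in (Xbar, Zbar):
    for S in (collective(X), collective(Y), collective(Z)):
        assert np.allclose(L @ S - S @ L, 0)   # compatibility, Eq. (8.21)
```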
&lt;br /&gt;
[[Chapter 9 - Dynamical Decoupling Controls#Introduction|Continue to '''Chapter 9 - Dynamical Decoupling Controls''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_3_-_Physics_of_Quantum_Information&amp;diff=956</id>
		<title>Chapter 3 - Physics of Quantum Information</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_3_-_Physics_of_Quantum_Information&amp;diff=956"/>
		<updated>2011-02-14T21:37:30Z</updated>

		<summary type="html">&lt;p&gt;Tjones: /* Description of Open Quantum Systems: An Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
It was a great realization that information is physical and that a&lt;br /&gt;
(classical) Turing machine is not the end of the story of&lt;br /&gt;
computation.  The physical system in which the information is stored&lt;br /&gt;
and manipulated is important and qubits are quite different from&lt;br /&gt;
bits.  &lt;br /&gt;
&lt;br /&gt;
In this chapter, some background in quantum mechanics is provided.&lt;br /&gt;
Not all of it will be directly relevant to our discussion, but it is&lt;br /&gt;
included for completeness, to show how textbook quantum mechanics is&lt;br /&gt;
related to quantum computing.  The connection is clear, but the story&lt;br /&gt;
seems incomplete from a physicist's perspective.  For the subject of&lt;br /&gt;
error prevention methods, parts of this chapter will be vital, in&lt;br /&gt;
particular the sections concerning the density matrix.  That topic,&lt;br /&gt;
though vital here, is not usually covered in quantum mechanics&lt;br /&gt;
classes, either undergraduate or graduate.  &lt;br /&gt;
&lt;br /&gt;
It is also worth emphasizing that this chapter is primarily aimed at&lt;br /&gt;
physicists and at others who are interested in the background physics.&lt;br /&gt;
It is not necessary for much of what follows.&lt;br /&gt;
&lt;br /&gt;
===Schrodinger's Equation===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A common starting point in quantum mechanics is Schrodinger's equation.  This equation is not derived, or justified here, but is given in a general form:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
H \left\vert \Psi\right\rangle = i\hbar\frac{\partial}{\partial t}\left\vert \Psi\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.1}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the Hamiltonian, &amp;lt;!-- \index{Hamiltonian} --&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\hbar\,\!&amp;lt;/math&amp;gt; is Planck's constant &lt;br /&gt;
&amp;lt;!-- \index{Planck's constant} --&amp;gt; &lt;br /&gt;
(divided by &amp;lt;math&amp;gt;2\pi\,\!&amp;lt;/math&amp;gt;), and &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is time.  The Hamiltonian contains what&lt;br /&gt;
is known about the system's evolution.  &lt;br /&gt;
Most of the time in these notes, we let &amp;lt;math&amp;gt;\hbar = 1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This equation is (formally) solved by taking the time derivative to be&lt;br /&gt;
an ordinary derivative (we assume no explicit time dependence for&lt;br /&gt;
&amp;lt;math&amp;gt;H \,\!&amp;lt;/math&amp;gt;), so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
H \left\vert \Psi\right\rangle = i\frac{d \left\vert \Psi\right\rangle}{dt}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.2}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This means that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
-iHdt =  \frac{d \left\vert \Psi\right\rangle}{\left\vert \Psi\right\rangle},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.3}}&amp;lt;br /&amp;gt;&lt;br /&gt;
so&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \ln \left\vert \Psi\right\rangle &amp;amp;= -iHt + C, \\&lt;br /&gt;
\Rightarrow\left\vert \Psi(t)\right\rangle &amp;amp;= e^{-iHt}\left\vert \Psi(0)\right\rangle.  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.4}}&lt;br /&gt;
Now if &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is Hermitian, and it is, then the matrix &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U =  e^{-iHt}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.5}}&amp;lt;br /&amp;gt;&lt;br /&gt;
is unitary.  &amp;lt;!-- \index{unitary matrix}--&amp;gt;&lt;br /&gt;
(See [[Appendix C - Vectors and Linear Algebra]], in particular the section entitled [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)  Any&lt;br /&gt;
transformation on a closed system can be described by a unitary&lt;br /&gt;
transformation and any unitary transformation can be obtained by the&lt;br /&gt;
exponentiation of a Hermitian matrix.  &lt;br /&gt;
&lt;br /&gt;
The end result and important point is that the evolution of a quantum&lt;br /&gt;
state is, in general, given by a unitary matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \Psi(t)\right\rangle = U\left\vert \Psi(0)\right\rangle.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.6}}&amp;lt;br /&amp;gt;&lt;br /&gt;
So our objective in quantum information processing is to create a&lt;br /&gt;
unitary evolution, and eventual measurement, which will produce a&lt;br /&gt;
particular outcome.&lt;br /&gt;
&lt;br /&gt;
====Exponentiating a Matrix====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div id=&amp;quot;expmatrix&amp;quot;&amp;gt; ''Aside: a note about exponentiation of a matrix.''&amp;lt;/div&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
It may seem strange to exponentiate a matrix.  However, you can define&lt;br /&gt;
a function of a matrix according to its Taylor expansion.  The details&lt;br /&gt;
are mostly unimportant here, but to show how it goes, the expansion is&lt;br /&gt;
written out.  &lt;br /&gt;
&lt;br /&gt;
The Taylor expansion of an exponential is the following:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
e^x = \sum_{n=0}^\infty \frac{x^n}{n!}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.7}}&amp;lt;br /&amp;gt;&lt;br /&gt;
and this can be used to exponentiate a matrix by letting the matrix&lt;br /&gt;
replace &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; in the equation.  This can also be used to prove that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
e^{ix}=\cos x +i\sin x.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.8}}&amp;lt;br /&amp;gt;&lt;br /&gt;
''End Aside''&lt;br /&gt;
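The Taylor-series definition is easy to try numerically.  A sketch (Python with NumPy; illustrative, not part of these notes) exponentiates a matrix by summing the series of Eq.(3.7) and checks the matrix analogue of Eq.(3.8):&lt;br /&gt;

```python
import numpy as np

def expm_taylor(M, terms=40):
    """exp(M) via the Taylor series of Eq. (3.7), with the matrix M replacing x."""
    result = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k        # running term M^k / k!
        result = result + term
    return result

X = np.array([[0, 1], [1, 0]], dtype=complex)   # a Hermitian matrix with X^2 = I
theta = 0.7
U = expm_taylor(1j * theta * X)

# matrix analogue of Eq. (3.8): exp(i theta X) = cos(theta) I + i sin(theta) X
assert np.allclose(U, np.cos(theta)*np.eye(2) + 1j*np.sin(theta)*X)
# exponentiating (+/- i) times a Hermitian matrix gives a unitary, as in Eq. (3.5)
assert np.allclose(U @ U.conj().T, np.eye(2))
```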
&lt;br /&gt;
===Density Matrix for Pure States===&lt;br /&gt;
&lt;br /&gt;
Now let us consider the object (a ''density matrix, or &lt;br /&gt;
density operator, of rank one'') &amp;lt;!-- \index{density matrix}\index{density&lt;br /&gt;
matrix!pure state} --&amp;gt;&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho = \left\vert\psi\right\rangle \left\langle \psi\right\vert,&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.9}}&amp;lt;br /&amp;gt;&lt;br /&gt;
which is just the outer product of two vectors.  For example, if &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\right\vert = (0,0,1),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.10}}&amp;lt;br /&amp;gt;&lt;br /&gt;
then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\mid\psi\right\rangle = \left\langle\psi\right\vert(\left\langle\psi\right\vert)^\dagger = 1.&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
However, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
 \left\vert\psi\right\rangle\left\langle\psi\right\vert = \left(\begin{array}{ccc}&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1 \end{array}\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.11}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Again &amp;lt;math&amp;gt;\left\vert \psi\right\rangle = \left\vert \psi(t)\right\rangle\,\!&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;\rho=\rho(t)\,\!&amp;lt;/math&amp;gt;.  If we&lt;br /&gt;
differentiate this with respect to &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;,&amp;lt;!-- \index{Schr\&amp;quot;odinger Equation!&lt;br /&gt;
  for density matrix} --&amp;gt;&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \rho }{\partial t} &amp;amp;= &lt;br /&gt;
           \left(\frac{\partial \left\vert \psi\right\rangle}{\partial t}\right)\left\langle\psi\right\vert &lt;br /&gt;
            + \left\vert \psi\right\rangle\left(\frac{\partial \left\langle\psi\right\vert}{\partial t}\right)\\&lt;br /&gt;
                   &amp;amp;= (-iH)\rho + \rho (iH) = -i[H,\rho],&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.12}} &lt;br /&gt;
which is the Schrodinger equation for the density matrix, with solution,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho(t) = U\rho(0)U^\dagger.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.13}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This follows from &amp;lt;math&amp;gt;\left\vert\psi(t)\right\rangle\left\langle\psi(t)\right\vert =&lt;br /&gt;
U\left\vert\psi(0)\right\rangle\left\langle\psi(0)\right\vert U^\dagger\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Consider our two-state system &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert 0\right\rangle = \left(\begin{array}{c} 1 \\ 0 \end{array}\right), &lt;br /&gt;
                   \;\;\; \mbox{and} \;\;\; &lt;br /&gt;
\left\vert 1\right\rangle = \left(\begin{array}{c} 0 \\ 1 \end{array}\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.14}}&amp;lt;br /&amp;gt;&lt;br /&gt;
A ''superposition'' of these two states is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0\left\vert 0\right\rangle + \alpha_1\left\vert 1\right\rangle &lt;br /&gt;
           = \left(\begin{array}{c} \alpha_0 \\ \alpha_1 \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.15}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers such that &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  The corresponding &lt;br /&gt;
''pure state, (i.e. rank one) density matrix'' is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_p = \left\vert\psi\right\rangle\left\langle\psi\right\vert&lt;br /&gt;
     = \left(\begin{array}{cc}&lt;br /&gt;
              |\alpha_0|^2 &amp;amp; \alpha_0 \alpha_1^* \\ &lt;br /&gt;
              \alpha_0^* \alpha_1 &amp;amp; |\alpha_1|^2 \end{array}\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.16}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Note that the superposition in Eq.[[#eq3.15|(3.15)]] can be obtained&lt;br /&gt;
from any pure state by a unitary transformation.  Here, the trace of&lt;br /&gt;
the density matrix is an important quantity; it is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(\rho_p) = |\alpha_0|^2 + |\alpha_1|^2 = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.17}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Notice also that the determinant of this matrix is zero:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\rho_p) = |\alpha_0|^2|\alpha_1|^2 - \alpha_0 \alpha_1^*\alpha_0^*&lt;br /&gt;
\alpha_1 = 0.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.18}}&amp;lt;br /&amp;gt;&lt;br /&gt;
To see this another way, note that the density operator of rank one&lt;br /&gt;
can be written as &amp;lt;math&amp;gt;U(\left\vert 0\right\rangle\left\langle0\right\vert)U^\dagger\,\!&amp;lt;/math&amp;gt;, so that the determinant&lt;br /&gt;
is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(U(\left\vert 0\right\rangle \left\langle 0\right\vert)U^\dagger) &amp;amp;= \det(U(\left\vert 0\right\rangle\left\langle 0\right\vert)U^{-1})\\&lt;br /&gt;
                            &amp;amp;=  \det(U)\det(\left\vert0\right\rangle\left\langle 0\right\vert)\frac{1}{\det(U)} \\&lt;br /&gt;
                            &amp;amp;= \det(\left\vert 0\right\rangle \left\langle 0\right\vert) = 0.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.19}}&lt;br /&gt;
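The properties of a rank-one density matrix derived above can be checked numerically for a random pure state (a sketch in Python with NumPy; illustrative, not part of these notes):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.normal(size=2) + 1j * rng.normal(size=2)
alpha /= np.linalg.norm(alpha)     # |alpha_0|^2 + |alpha_1|^2 = 1
psi = alpha                        # the state of Eq. (3.15)
rho = np.outer(psi, psi.conj())    # the pure-state density matrix, Eq. (3.16)

assert np.isclose(np.trace(rho).real, 1)        # Eq. (3.17): unit trace
assert np.isclose(abs(np.linalg.det(rho)), 0)   # Eq. (3.18): zero determinant
assert np.allclose(rho @ rho, rho)              # rank one: rho is a projector
assert np.linalg.matrix_rank(rho) == 1
```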
&lt;br /&gt;
===Measurements Revisited===&lt;br /&gt;
&lt;br /&gt;
If the state of a quantum system is described by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0\left\vert 0\right\rangle + \alpha_1\left\vert 1\right\rangle, &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.20}}&amp;lt;br /&amp;gt;&lt;br /&gt;
the probability of finding it in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; when measured in&lt;br /&gt;
the computational basis is &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt;.  However, this is a&lt;br /&gt;
particular superposition which could be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = U \left\vert 0\right\rangle.  &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.21}}&amp;lt;br /&amp;gt;&lt;br /&gt;
In the section entitled [[#Schrodinger's Equation|Schrodinger's Equation]] it was shown that this matrix &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; results&lt;br /&gt;
from the exponentiation of a Hermitian matrix and from the section entitled [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|The Pauli Matrices]] any &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
Hermitian matrix can be written in terms of the Pauli matrices.&amp;lt;!-- \index{Pauli matrices}--&amp;gt;  To make this explicit using standard conventions, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert \psi\right\rangle &amp;amp;= U\left\vert 0\right\rangle  \\&lt;br /&gt;
           &amp;amp;= \exp(-i\vec{n}\cdot\vec{\sigma} \theta) \left\vert 0\right\rangle \\&lt;br /&gt;
           &amp;amp;= (\mathbb{I}\cos(\theta) -i\vec{n}\cdot\vec{\sigma} \sin(\theta))\left\vert 0\right\rangle,&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.22}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector, &amp;lt;math&amp;gt;|\vec{n}|=1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{n}\cdot\vec{\sigma} =&lt;br /&gt;
n_1\sigma_1+n_2\sigma_2+n_3\sigma_3\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
One can write this matrix out explicitly &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \exp(-i\vec{n}\cdot\vec{\sigma} \theta) &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; 1 \end{array}\right)\cos(\theta) \\&lt;br /&gt;
                        &amp;amp; \;\;\;   + (-i)\left[ n_1\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; 1 \\ &lt;br /&gt;
                                  1 &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_2\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; -i \\ &lt;br /&gt;
                                  i &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_3\left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; -1 \end{array}\right)\right]\sin(\theta) \\&lt;br /&gt;
                                &amp;amp;= &lt;br /&gt;
         \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta) -in_3\sin(\theta) &amp;amp; (-in_1-n_2)\sin(\theta) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta) &amp;amp; \cos(\theta) +in_3\sin(\theta)  \end{array}\right).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.23}}&lt;br /&gt;
Notice this is a  ''special unitary matrix.''  (See [[Appendix C - Vectors and Linear Algebra]], in particular the subsection [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)&lt;br /&gt;
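Equations (3.22) and (3.23) are easy to check numerically. The following sketch (a minimal numpy illustration; the helper names are not from the text) compares the closed form with the Taylor-series definition of the matrix exponential and verifies that the result is special unitary:

```python
import numpy as np

# Pauli matrices, as in Eq. (3.36)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def u_rotation(n, theta):
    """U = exp(-i theta n.sigma) via the closed form of Eq. (3.22)."""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return I2 * np.cos(theta) - 1j * ns * np.sin(theta)

def expm_series(A, terms=40):
    """Matrix exponential summed from its Taylor series."""
    out = np.zeros_like(A)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(terms):
        out = out + term
        term = term @ A / (k + 1)
    return out

n = np.array([1.0, 2.0, 2.0]) / 3.0   # a unit vector
theta = 0.7
ns = n[0] * sx + n[1] * sy + n[2] * sz
U = u_rotation(n, theta)

# Closed form agrees with the series definition of the exponential
assert np.allclose(U, expm_series(-1j * theta * ns))
# U is special unitary: U times its adjoint is the identity, and det U = 1
assert np.allclose(U @ U.conj().T, I2)
assert np.isclose(np.linalg.det(U), 1.0)
```

The closed form relies on the fact that the square of n.sigma is the identity for a unit vector n, which is what terminates the even and odd parts of the series into cosine and sine.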
&lt;br /&gt;
To see that any state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; for arbitrary coefficients&lt;br /&gt;
&amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; can be obtained by choosing &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
appropriately, take the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; as a starting point.  &lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
U\left\vert 0\right\rangle &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta) -in_3\sin(\theta) &amp;amp; (-in_1-n_2)\sin(\theta) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta) &amp;amp; \cos(\theta) +in_3\sin(\theta)  &lt;br /&gt;
         \end{array}\right)&lt;br /&gt;
       \left(\begin{array}{c} 1 \\ 0\end{array}\right) \\&lt;br /&gt;
         &amp;amp;=  \left(\begin{array}{c} &lt;br /&gt;
                            \cos(\theta) -in_3\sin(\theta)  \\ &lt;br /&gt;
                            (-in_1+n_2)\sin(\theta)\end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.24}}&lt;br /&gt;
For example, choosing &amp;lt;math&amp;gt;\theta=0\,\!&amp;lt;/math&amp;gt; gives the original state; choosing&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{n} = (0,1,0)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\theta = \pi/2\,\!&amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;; and choosing&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{n} = (0,1,0)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\theta = \pi/4\,\!&amp;lt;/math&amp;gt; gives an equal superposition.  &lt;br /&gt;
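These three choices can be verified directly. The short numpy sketch below (illustrative code, not part of the text) applies exp(-i theta sigma_y), i.e. the rotation with n = (0,1,0), to the starting state for the quoted values of theta:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

def u_y(theta):
    """exp(-i theta sigma_y) = I cos(theta) - i sigma_y sin(theta)."""
    return np.eye(2, dtype=complex) * np.cos(theta) - 1j * sy * np.sin(theta)

# theta = 0 leaves the starting state unchanged
assert np.allclose(u_y(0.0) @ ket0, ket0)
# theta = pi/2 gives the other basis state
assert np.allclose(u_y(np.pi / 2) @ ket0, ket1)
# theta = pi/4 gives the equal superposition
assert np.allclose(u_y(np.pi / 4) @ ket0, (ket0 + ket1) / np.sqrt(2))
```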
In general, when the system is in the state  &amp;lt;math&amp;gt;\left\vert \psi\right\rangle = U\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
the probability of finding the state &amp;lt;math&amp;gt;\left\vert 0 \right\rangle \,\!&amp;lt;/math&amp;gt; when a measurement is made in the computational basis is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|\left\langle 0\right\vert U\left\vert 0\right\rangle|^2 &amp;amp;= |\cos(\theta) -in_3\sin(\theta)|^2   \\&lt;br /&gt;
                    &amp;amp;= \cos^2(\theta) +n_3^2\sin^2(\theta),  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.25}}&lt;br /&gt;
and the probability of finding &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|\left\langle 1\right\vert U\left\vert 0\right\rangle|^2 &amp;amp;= |(-in_1+n_2)\sin(\theta)|^2   \\&lt;br /&gt;
                    &amp;amp;= (n_1^2+n_2^2)\sin^2(\theta).   &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.26}}&lt;br /&gt;
Notice the probabilities add up to one if &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector.  &lt;br /&gt;
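As a quick numerical sanity check (an illustrative sketch), the two probability formulas of Eqs. (3.25) and (3.26) can be evaluated for a randomly chosen unit vector and an arbitrary angle:

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n = n / np.linalg.norm(n)       # make n a unit vector
theta = 1.234                   # an arbitrary rotation angle

p0 = np.cos(theta) ** 2 + (n[2] ** 2) * np.sin(theta) ** 2   # Eq. (3.25)
p1 = (n[0] ** 2 + n[1] ** 2) * np.sin(theta) ** 2            # Eq. (3.26)

# The two outcome probabilities sum to one for any unit n
assert np.isclose(p0 + p1, 1.0)
```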
&lt;br /&gt;
What this shows is that there is a transformation that takes the state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;, which has probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being found in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
probability 0 of being found in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;, and transforms it&lt;br /&gt;
(using a &amp;lt;nowiki&amp;gt;''rotation''&amp;lt;/nowiki&amp;gt;) into a state with a different (and generic)&lt;br /&gt;
probability of each.  Since a unitary transformation takes a pure state&lt;br /&gt;
to another pure state, the density matrix corresponding to this system&lt;br /&gt;
always has determinant zero, meaning (for a two-state system) it has one&lt;br /&gt;
eigenvalue 1 and one eigenvalue 0.  (The determinant is the&lt;br /&gt;
product of the eigenvalues.)&lt;br /&gt;
&lt;br /&gt;
===Density Matrix for Mixed States===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a system with &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; dimensions, a ''mixed state density matrix'' &lt;br /&gt;
(or density operator, see [[Appendix E - Density Operator: Extensions|Appendix E]]), &amp;lt;!-- \index{density matrix} \index{density operator} --&amp;gt; is a matrix which is used to&lt;br /&gt;
describe a more general state of a quantum system and can be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_D = \sum_i a_i \rho_i,&lt;br /&gt;
\,\! &amp;lt;/math&amp;gt;|3.27}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;a_i\geq 0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{\sum}_i a_i=1\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;\rho_i\,\!&amp;lt;/math&amp;gt; are pure states.  There is also a generalization of the Bloch sphere, which is described in [[Appendix E - Density Operator: Extensions|Appendix E]].  &lt;br /&gt;
&lt;br /&gt;
The ''mixed state'' &amp;lt;!-- \index{density matrix!mixed state} --&amp;gt; density matrices are important in all descriptions of physical implementations of quantum information processing.  For this reason, some labor should go into understanding the density matrix; the rest of this section is devoted to the physical interpretation and properties of this description of a quantum system.  The first description presented, the ensemble interpretation of the density matrix, is perhaps the easiest to understand.  Another set of physical systems which are described by density matrices will be given elsewhere.&lt;br /&gt;
&lt;br /&gt;
====General Properties====&lt;br /&gt;
&lt;br /&gt;
In general, a density matrix has the following properties:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\rho = \rho^\dagger, &amp;amp;\;\;\; \mbox{it is Hermitian}, \\&lt;br /&gt;
\rho \geq 0,\; &amp;amp;\;\;\; \mbox{it is positive semi-definite},&lt;br /&gt;
                         \\&lt;br /&gt;
\mbox{Tr}(\rho) = 1,\; &amp;amp;\;\;\; \mbox{it is normalized}. &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.28}}&lt;br /&gt;
If, in addition, it is a pure state, then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho^2 = \rho.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.29}}&amp;lt;br /&amp;gt;&lt;br /&gt;
The second property in Eq.[[#eq3.28|(3.28)]] really means that the eigenvalues of the density matrix are greater than or equal to zero.&lt;br /&gt;
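The three properties of Eq. (3.28) can be confirmed numerically for a mixture of random pure states; the sketch below (a minimal numpy illustration with hypothetical helper names) also shows that such a mixture generically fails the pure-state condition of Eq. (3.29):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure(d):
    """A random rank-one (pure state) density matrix in d dimensions."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Mixture rho = sum_i a_i rho_i with nonnegative a_i summing to one, Eq. (3.27)
a = [0.5, 0.3, 0.2]
rho = sum(w * random_pure(2) for w in a)

assert np.allclose(rho, rho.conj().T)                           # Hermitian
assert np.greater_equal(np.linalg.eigvalsh(rho), -1e-12).all()  # positive semi-definite
assert np.isclose(np.trace(rho).real, 1.0)                      # unit trace
# A generic mixture is not pure: rho squared differs from rho
assert not np.allclose(rho @ rho, rho)
```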
&lt;br /&gt;
&lt;br /&gt;
====Density Matrix for a Mixed State: Two States====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A mixed state density matrix (for a two-state system) is a rank two density matrix, &amp;lt;math&amp;gt;\rho_m\,\!&amp;lt;/math&amp;gt;, which can be described by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_m = \left[a_1\rho_1 + a_2\rho_2\right],&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.30}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_1 = \left\vert\psi_1\right\rangle\left\langle \psi_1\right\vert\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\rho_2 = \left \vert \psi_2\right\rangle\left\langle \psi_2\right\vert \,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
and &amp;lt;math&amp;gt;a_1 + a_2=1\,\!&amp;lt;/math&amp;gt;.  The &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; are probabilities and must sum to one.&lt;br /&gt;
(Note, if &amp;lt;math&amp;gt;\left\vert \psi_1\right\rangle=\left\vert \psi_2\right\rangle\,\!&amp;lt;/math&amp;gt;, or if one of the&lt;br /&gt;
&amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; is zero, this reduces to a pure state.)  For &lt;br /&gt;
example, the probability of finding the state &amp;lt;math&amp;gt;\left\vert \psi_1\right\rangle\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and the probability of finding the state &amp;lt;math&amp;gt;\left\vert \psi_2\right\rangle\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Description of Open Quantum Systems: An Example====&lt;br /&gt;
&lt;br /&gt;
One example of the utility of a density matrix is the following&lt;br /&gt;
statistical problem.  Consider a collection of two-state&lt;br /&gt;
systems: here, a collection of electrons in a box, where each&lt;br /&gt;
electron's spin is a two-state system, either up or down when measured.  If&lt;br /&gt;
a subset of these electrons was prepared in the state &amp;lt;nowiki&amp;gt;''up''&amp;lt;/nowiki&amp;gt; before&lt;br /&gt;
being put in the box, and the rest &amp;lt;nowiki&amp;gt;''down,''&amp;lt;/nowiki&amp;gt; then the description of&lt;br /&gt;
the system of particles is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho = a_u \left\vert\uparrow\right\rangle\left\langle\uparrow\right\vert +&lt;br /&gt;
         a_d\left\vert\downarrow\right\rangle\left\langle\downarrow\right\vert,&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.31}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where the fraction of  &amp;lt;nowiki&amp;gt;''up''&amp;lt;/nowiki&amp;gt; particles is &amp;lt;math&amp;gt;a_u\,\!&amp;lt;/math&amp;gt; and the fraction of &amp;lt;nowiki&amp;gt;''down''&amp;lt;/nowiki&amp;gt; is &amp;lt;math&amp;gt;a_d\,\!&amp;lt;/math&amp;gt;.  Our system is described by this density matrix because if a particle is chosen at random from the box and measured, the state of the particle is &amp;lt;math&amp;gt;\left\vert \uparrow\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;a_u\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert \downarrow\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;a_d\,\!&amp;lt;/math&amp;gt;.  This is known as the statistical&lt;br /&gt;
interpretation of the density operator. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is another example which is more relevant for our purposes.  Suppose&lt;br /&gt;
that for a certain system (again a two-state system is taken as an example)&lt;br /&gt;
there is some probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; for an error, here a&lt;br /&gt;
unitary operator &amp;lt;math&amp;gt;U_e\,\!&amp;lt;/math&amp;gt;, to occur.  Then the density matrix for the&lt;br /&gt;
system is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_e = (1-p)\left\vert\psi\right\rangle\left\langle\psi\right\vert + pU_e\left\vert\psi\right\rangle\left\langle\psi\right\vert U_e^\dagger.  &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.32}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This is the same form as Eq.[[#eq3.31|(3.31)]].  &lt;br /&gt;
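As an illustrative check of Eq. (3.32), take a bit-flip as the error unitary (an assumption made here for concreteness); the resulting density matrix has the two nonzero eigenvalues p and 1-p, so no measurement outcome is certain:

```python
import numpy as np

ket_psi = np.array([1.0, 0.0], dtype=complex)   # start in the zero basis state
sx = np.array([[0, 1], [1, 0]], dtype=complex)
U_e = sx                                        # bit-flip chosen as the error unitary
p = 0.1                                         # error probability

rho_pure = np.outer(ket_psi, ket_psi.conj())
# Eq. (3.32): mixture of the unaffected and the flipped state
rho_e = (1 - p) * rho_pure + p * (U_e @ rho_pure @ U_e.conj().T)

eigs = np.sort(np.linalg.eigvalsh(rho_e))
# Rank two: both eigenvalues are nonzero, so neither outcome has probability one
assert np.allclose(eigs, [p, 1 - p])
```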
&lt;br /&gt;
Note that in each &lt;br /&gt;
case the probabilities associated with the density matrix &amp;lt;math&amp;gt;p,1-p\,\!&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;a_u,a_d\,\!&amp;lt;/math&amp;gt;, (generally, the &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt;) are classical probabilities.  That&lt;br /&gt;
is, they are associated with a classical probability distribution--the&lt;br /&gt;
probability for error/no error and up/down.  These are not&lt;br /&gt;
probabilities associated with the superposition of the quantum state&lt;br /&gt;
in the equation &amp;lt;math&amp;gt;\left\vert \psi\right\rangle = \alpha_0 \left\vert 0\right\rangle + \alpha_1\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
given by the square of the moduli of the coefficients.  This is an&lt;br /&gt;
important distinction for the following reason.  The state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; can be taken to the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with a unitary&lt;br /&gt;
transformation.  This state is deterministic in the sense that the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; will be obtained from a measurement in the&lt;br /&gt;
computational basis since there is no probability for obtaining&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;.  However, for &amp;lt;math&amp;gt;0 &amp;lt; p &amp;lt; 1\,\!&amp;lt;/math&amp;gt; and an&lt;br /&gt;
operator &amp;lt;math&amp;gt;U_e\,\!&amp;lt;/math&amp;gt; that changes the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the matrix &amp;lt;math&amp;gt;\rho_e\,\!&amp;lt;/math&amp;gt; has rank two and thus can never have&lt;br /&gt;
probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; for either of the two states, &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Thus, we have maximum knowledge about a pure state since&lt;br /&gt;
there is a way to choose a measurement, perhaps after a unitary&lt;br /&gt;
transformation, which achieves a certain result with probability one.&lt;br /&gt;
For the mixed state density operator this is not possible.  The state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho = \left(\begin{array}{cc} &lt;br /&gt;
              1/2 &amp;amp; 0 \\ &lt;br /&gt;
                0 &amp;amp; 1/2 \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.33}}&amp;lt;br /&amp;gt;&lt;br /&gt;
for which we have the least amount of knowledge is called the&lt;br /&gt;
maximally mixed state. &amp;lt;!-- \index{maximally mixed state! two qubits}--&amp;gt;  The&lt;br /&gt;
state could be either up or down with equal probability and neither is&lt;br /&gt;
a better guess.  If the two eigenvalues are not equal, then there is a&lt;br /&gt;
better guess, or bet, as to the result of a measurement and if one&lt;br /&gt;
eigenvalue is zero, there is a definite best guess.  &lt;br /&gt;
&lt;br /&gt;
To be more specific, independent of basis (unitary transformations),&lt;br /&gt;
one always has probability greater than zero of measuring&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \uparrow\right\rangle\,\!&amp;lt;/math&amp;gt; and probability greater than zero of measuring&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \downarrow\right\rangle\,\!&amp;lt;/math&amp;gt;. Thus the  state described by the density matrix is&lt;br /&gt;
a ''mixed state'' &amp;lt;!-- \index{mixed state density matrix}--&amp;gt; in the sense&lt;br /&gt;
that it can be considered a statistical mixture of the  two states&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \uparrow\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \downarrow\right\rangle\,\!&amp;lt;/math&amp;gt;.  Because classical&lt;br /&gt;
probabilities are included separately, this is significantly different from&lt;br /&gt;
the pure state density matrix, which is a special case of the general&lt;br /&gt;
density matrix.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see that mixtures remain after a unitary transformation on the&lt;br /&gt;
system, note that a unitary matrix does not change the eigenvalues.  &lt;br /&gt;
This is because the characteristic equation, &amp;lt;math&amp;gt;\det(\rho-\lambda\mathbb{I})=0\,\!&amp;lt;/math&amp;gt;, is the same for a Hermitian&lt;br /&gt;
matrix and for its corresponding diagonal matrix.  Let &amp;lt;math&amp;gt;\rho =&lt;br /&gt;
U\rho_d U^\dagger\,\!&amp;lt;/math&amp;gt;; then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(\rho -\lambda\mathbb{I}) &amp;amp;=&lt;br /&gt;
               \det(U(\rho_d-\lambda\mathbb{I})U^\dagger) \\&lt;br /&gt;
                        &amp;amp;=&lt;br /&gt;
                        \det(U)\det(\rho_d-\lambda\mathbb{I})\det(U^\dagger) \\&lt;br /&gt;
                        &amp;amp;=&lt;br /&gt;
                        \det(U)\det(\rho_d-\lambda\mathbb{I})\det(U^{-1}) \\&lt;br /&gt;
                        &amp;amp;= \det(\rho_d-\lambda\mathbb{I}).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.34}}&lt;br /&gt;
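This invariance of the spectrum is simple to confirm numerically. The sketch below (with an arbitrarily chosen diagonal state and a Hadamard-like change of basis) checks that rho and rho_d have the same eigenvalues:

```python
import numpy as np

# A diagonal density matrix and a unitary change of basis
rho_d = np.diag([0.7, 0.3]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
rho = H @ rho_d @ H.conj().T

# Eq. (3.34): the characteristic polynomial, hence the spectrum, is unchanged
assert np.allclose(np.linalg.eigvalsh(rho), np.linalg.eigvalsh(rho_d))
```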
&lt;br /&gt;
====Two-State Example: Bloch Sphere====&lt;br /&gt;
&lt;br /&gt;
Since our interest is primarily in qubits, which are two-state&lt;br /&gt;
systems, we return again to an example.  &lt;br /&gt;
&lt;br /&gt;
There is a very convenient representation of two-state density matrices:&lt;br /&gt;
because the density matrix is Hermitian, it can be written in the so-called Bloch sphere &amp;lt;!-- \index{Bloch sphere}--&amp;gt;&lt;br /&gt;
representation, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_2 = \frac{1}{2}(\mathbb{I} + \vec{n}\cdot\vec{\sigma}),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.35}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where, for the density matrix to be positive semi-definite, &amp;lt;math&amp;gt;|\vec{n}| \leq 1\,\!&amp;lt;/math&amp;gt;, and the&lt;br /&gt;
&amp;lt;math&amp;gt;\sigma_i\,\!&amp;lt;/math&amp;gt; are the Pauli matrices &amp;lt;!-- \index{Pauli matrices}--&amp;gt;&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{\sigma} = (\sigma_x,\sigma_y,\sigma_z) = \left(&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
              0 &amp;amp; 1 \\ &lt;br /&gt;
              1 &amp;amp; 0 \end{array}\right),&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
               0 &amp;amp; -i \\ &lt;br /&gt;
               i &amp;amp;  0 \end{array}\right),&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
              1 &amp;amp; 0 \\ &lt;br /&gt;
              0 &amp;amp; -1 \end{array}\right)&lt;br /&gt;
\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.36}}&amp;lt;br /&amp;gt;&lt;br /&gt;
The matrix entries on the RHS of this equation are the [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|Pauli matrices]] discussed above.  It is not difficult to convince yourself that any Hermitian matrix can be written as a real linear combination of the three Pauli matrices and the identity.  The eigenvalues are given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_\pm = \frac{1\pm|\vec{n}|}{2}.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.37}}&amp;lt;br /&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt;|\vec{n}| = 1\,\!&amp;lt;/math&amp;gt;, the state is pure, i.e., the matrix &lt;br /&gt;
has rank one since it has one eigenvalue equal to one and the other equal to zero.  If &amp;lt;math&amp;gt;|\vec{n}| &amp;lt; 1\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
the density matrix represents a mixed state since the rank is&lt;br /&gt;
greater than one--there are two non-zero eigenvalues.  This leads to&lt;br /&gt;
the following picture: the pure states lie on the surface of the&lt;br /&gt;
sphere (&amp;lt;math&amp;gt;\vec{n}\cdot \vec{n} =1\,\!&amp;lt;/math&amp;gt;), and mixed states lie in the interior of&lt;br /&gt;
the sphere with the maximally mixed state at the origin.  This&lt;br /&gt;
representation is supposedly due to Bloch, hence the name Bloch sphere.  &lt;br /&gt;
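The eigenvalue formula of Eq. (3.37), and the pure/mixed distinction it implies, can be checked with a few lines of numpy (an illustrative sketch; the helper name is hypothetical):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_rho(n):
    """rho = (I + n.sigma)/2 for a Bloch vector n, Eq. (3.35)."""
    return 0.5 * (np.eye(2, dtype=complex) + n[0] * sx + n[1] * sy + n[2] * sz)

for r in (1.0, 0.4, 0.0):                 # surface, interior, and center of the sphere
    n = r * np.array([0.0, 0.6, 0.8])     # Bloch vector of length r
    rho = bloch_rho(n)
    # Eigenvalues (1 +/- |n|)/2, Eq. (3.37)
    assert np.allclose(np.sort(np.linalg.eigvalsh(rho)), [(1 - r) / 2, (1 + r) / 2])

# Pure (rho squared equals rho) exactly on the surface of the sphere
rho_pure = bloch_rho(np.array([0.0, 0.6, 0.8]))
assert np.allclose(rho_pure @ rho_pure, rho_pure)
rho_mixed = bloch_rho(np.array([0.0, 0.3, 0.4]))
assert not np.allclose(rho_mixed @ rho_mixed, rho_mixed)
```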
&lt;br /&gt;
Using &amp;lt;math&amp;gt;\rho^2 =\rho\,\!&amp;lt;/math&amp;gt;, the condition &amp;lt;math&amp;gt;\vec{n}\cdot\vec{n} =1\,\!&amp;lt;/math&amp;gt; for a pure&lt;br /&gt;
state can also be derived.  Squaring the Bloch sphere&lt;br /&gt;
representation yields&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_2^2 = \frac{1}{4}\left(\mathbb{I} + 2\vec{n}\cdot\vec{\sigma} + (\vec{n}\cdot\vec{\sigma})^2\right), &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.38}}&amp;lt;br /&amp;gt;&lt;br /&gt;
and using &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma_i \sigma_j = \mathbb{I}\delta_{ij} + i\epsilon_{ijk}\sigma_k,&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.39}}&amp;lt;br /&amp;gt;&lt;br /&gt;
then &amp;lt;math&amp;gt;\rho_2^2 =\rho_2\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;\vec{n}\cdot\vec{n} =1\,\!&amp;lt;/math&amp;gt;.  This technique is&lt;br /&gt;
used for higher dimensions; see [[Appendix E - Density Operator: Extensions|Appendix E]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two density matrices &amp;lt;math&amp;gt;\rho_1=(1/2)(\mathbb{I} +\vec{n}\cdot\vec{\sigma})\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\rho_2=(1/2)(\mathbb{I} +\vec{m}\cdot\vec{\sigma})\,\!&amp;lt;/math&amp;gt; correspond to orthogonal &lt;br /&gt;
states when &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Tr}(\rho_1\rho_2) &amp;amp;= \frac{1}{4}\mbox{Tr}\big(\mathbb{I} + (\vec{n}\cdot\vec{\sigma})(\vec{m}\cdot\vec{\sigma})\big) \\&lt;br /&gt;
                  &amp;amp;= \frac{1}{2}(1+\vec{n}\cdot\vec{m}) =0.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|3.40}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This implies that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{n}\cdot\vec{m} = |\vec{n}||\vec{m}|\cos(\theta) = -1.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.41}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Since the magnitudes must be one, orthogonal states correspond to &lt;br /&gt;
pure states on the surface of the sphere, represented by &lt;br /&gt;
antipodal points.&lt;br /&gt;
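A short numerical illustration of this (with a hypothetical helper name): for a unit Bloch vector and its antipode, the trace in Eq. (3.40) vanishes, while a state paired with itself gives purity one:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_rho(n):
    """rho = (I + n.sigma)/2 for a Bloch vector n."""
    return 0.5 * (np.eye(2, dtype=complex) + n[0] * sx + n[1] * sy + n[2] * sz)

n = np.array([0.0, 0.6, 0.8])   # a unit Bloch vector
rho_plus = bloch_rho(n)
rho_minus = bloch_rho(-n)       # the antipodal point

# Eq. (3.40): Tr(rho1 rho2) = (1 + n.m)/2 vanishes for m = -n
assert np.isclose(np.trace(rho_plus @ rho_minus).real, 0.0)
# and equals one when the two states coincide (n.m = 1)
assert np.isclose(np.trace(rho_plus @ rho_plus).real, 1.0)
```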
&lt;br /&gt;
===Expectation Values===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The expectation value &amp;lt;!-- \index{expectation value}--&amp;gt; &lt;br /&gt;
of an operator &amp;lt;math&amp;gt;\mathcal{O}\,\!&amp;lt;/math&amp;gt; is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\langle \mathcal{O} \rangle = \mbox{Tr}(\rho \mathcal{O}),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.42}}&amp;lt;br /&amp;gt;&lt;br /&gt;
and is the &amp;quot;average value&amp;quot; of the operator.  For a pure state &lt;br /&gt;
&amp;lt;math&amp;gt;\rho_p = \left\vert\psi\right\rangle\left\langle\psi\right\vert\,\!&amp;lt;/math&amp;gt;, this reduces to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(\langle \mathcal{O} \rangle)_p = \left\langle\psi\right\vert \mathcal{O}\left\vert \psi\right\rangle.  &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.43}}&amp;lt;br /&amp;gt;&lt;br /&gt;
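As a small worked example (illustrative code), Eqs. (3.42) and (3.43) agree for a pure state; for the equal superposition the expectation of sigma_z vanishes, as it also does for the maximally mixed state:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The equal superposition of the two basis states
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho_p = np.outer(psi, psi.conj())

# Eq. (3.42) and its pure-state form Eq. (3.43) give the same number
assert np.isclose(np.trace(rho_p @ sz).real, (psi.conj() @ sz @ psi).real)
# sigma_z averages to zero in the equal superposition
assert np.isclose(np.trace(rho_p @ sz).real, 0.0)

# The maximally mixed state, Eq. (3.33), also gives expectation zero
rho_mm = np.eye(2, dtype=complex) / 2
assert np.isclose(np.trace(rho_mm @ sz).real, 0.0)
```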
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 4 - Entanglement#Introduction|Continue to '''Chapter 4 - Entanglement''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_3_-_Physics_of_Quantum_Information&amp;diff=955</id>
		<title>Chapter 3 - Physics of Quantum Information</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_3_-_Physics_of_Quantum_Information&amp;diff=955"/>
		<updated>2011-02-14T21:21:07Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
It was a great realization that information is physical and that a&lt;br /&gt;
(classical) Turing machine is not the end of the story of&lt;br /&gt;
computation.  The physical system in which the information is stored&lt;br /&gt;
and manipulated is important and qubits are quite different from&lt;br /&gt;
bits.  &lt;br /&gt;
&lt;br /&gt;
In this chapter, some background in quantum mechanics is provided.&lt;br /&gt;
Not all of this chapter will be directly relevant to our discussion,&lt;br /&gt;
but it is included for the sake of completeness, to show&lt;br /&gt;
how quantum mechanics from a textbook is related to quantum&lt;br /&gt;
computing.  The connection is clear, but the story seems, as of yet,&lt;br /&gt;
incomplete from a physicist's perspective; and for the subject of error&lt;br /&gt;
prevention methods, some of this chapter will be vital, in&lt;br /&gt;
particular the sections concerning the density matrix.  Not only&lt;br /&gt;
is this material vital, it is not usually covered in most quantum mechanics&lt;br /&gt;
classes, either undergraduate or graduate.  &lt;br /&gt;
&lt;br /&gt;
It is also worth emphasizing that this chapter is primarily aimed at&lt;br /&gt;
physicists and at others who are interested in the background&lt;br /&gt;
physics.  It is not necessary for much of what follows.&lt;br /&gt;
&lt;br /&gt;
===Schrodinger's Equation===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A common starting point in quantum mechanics is Schrodinger's equation.  This equation is not derived or justified here, but is given in a general form:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
H \left\vert \Psi\right\rangle = i\hbar\frac{\partial}{\partial t}\left\vert \Psi\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.1}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is the Hamiltonian, &amp;lt;!-- \index{Hamiltonian} --&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\hbar\,\!&amp;lt;/math&amp;gt; is Planck's constant &lt;br /&gt;
&amp;lt;!-- \index{Planck's constant} --&amp;gt; &lt;br /&gt;
(divided by &amp;lt;math&amp;gt;2\pi\,\!&amp;lt;/math&amp;gt;), and &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt; is time.  The Hamiltonian contains what&lt;br /&gt;
is known about the system's evolution.  &lt;br /&gt;
Most of the time in these notes, we let &amp;lt;math&amp;gt;\hbar = 1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This equation is (formally) solved by taking the time derivative to be&lt;br /&gt;
an ordinary derivative (we assume no explicit time dependence for&lt;br /&gt;
&amp;lt;math&amp;gt;H \,\!&amp;lt;/math&amp;gt;), so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
H \left\vert \Psi\right\rangle = i\frac{d \left\vert \Psi\right\rangle}{dt}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.2}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This means that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
-iHdt =  \frac{d \left\vert \Psi\right\rangle}{\left\vert \Psi\right\rangle},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.3}}&amp;lt;br /&amp;gt;&lt;br /&gt;
so&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \ln \left\vert \Psi\right\rangle &amp;amp;= -iHt + C, \\&lt;br /&gt;
\Rightarrow\left\vert \Psi(t)\right\rangle &amp;amp;= e^{-iHt}\left\vert \Psi(0)\right\rangle.  &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.4}}&lt;br /&gt;
Now if &amp;lt;math&amp;gt;H\,\!&amp;lt;/math&amp;gt; is Hermitian, and it is, then the matrix &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U =  e^{-iHt}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.5}}&amp;lt;br /&amp;gt;&lt;br /&gt;
is unitary.  &amp;lt;!-- \index{unitary matrix}--&amp;gt;&lt;br /&gt;
(See [[Appendix C - Vectors and Linear Algebra]], in particular the section entitled [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)  Any&lt;br /&gt;
transformation on a closed system can be described by a unitary&lt;br /&gt;
transformation and any unitary transformation can be obtained by the&lt;br /&gt;
exponentiation of a Hermitian matrix.  &lt;br /&gt;
&lt;br /&gt;
The end result and important point is that the evolution of a quantum&lt;br /&gt;
state is, in general, given by a unitary matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \Psi(t)\right\rangle = U\left\vert \Psi(0)\right\rangle.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.6}}&amp;lt;br /&amp;gt;&lt;br /&gt;
So our objective in quantum information processing is to create a&lt;br /&gt;
unitary evolution, and eventual measurement, which will produce a&lt;br /&gt;
particular outcome.&lt;br /&gt;
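For a finite-dimensional system, Eq. (3.5) can be evaluated by diagonalizing the Hermitian Hamiltonian. The sketch below (with an arbitrarily chosen H, in units with hbar = 1 as in the text) checks that the resulting evolution is unitary and norm-preserving:

```python
import numpy as np

# An arbitrarily chosen Hermitian Hamiltonian (hbar = 1)
H = np.array([[1.0, 0.5 - 0.5j], [0.5 + 0.5j, -1.0]])
assert np.allclose(H, H.conj().T)

def evolve(H, t):
    """U = exp(-i H t), computed by diagonalizing the Hermitian H."""
    evals, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

U = evolve(H, 0.8)
# Exponentiating a Hermitian matrix yields a unitary matrix
assert np.allclose(U @ U.conj().T, np.eye(2))
# Unitary evolution preserves the norm of any state, Eq. (3.6)
psi0 = np.array([0.6, 0.8], dtype=complex)
assert np.isclose(np.linalg.norm(U @ psi0), 1.0)
```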
&lt;br /&gt;
====Exponentiating a Matrix====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div id=&amp;quot;expmatrix&amp;quot;&amp;gt; ''Aside: a note about exponentiation of a matrix.''&amp;lt;/div&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
It may seem strange to exponentiate a matrix.  However, you can define&lt;br /&gt;
a function of a matrix according to its Taylor expansion.  The details&lt;br /&gt;
of this are primarily unimportant here, but just to show how it goes,&lt;br /&gt;
it is written out.  &lt;br /&gt;
&lt;br /&gt;
The Taylor expansion of an exponential is the following:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
e^x = \sum_{n=0}^\infty \frac{x^n}{n!}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.7}}&amp;lt;br /&amp;gt;&lt;br /&gt;
and this can be used to exponentiate a matrix by letting the matrix&lt;br /&gt;
replace &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; in the equation.  This can also be used to prove that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
e^{ix}=\cos x +i\sin x.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.8}}&amp;lt;br /&amp;gt;&lt;br /&gt;
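This can be made concrete with a small numerical check (illustrative code): summing the series of Eq. (3.7) with a matrix in place of x reproduces the matrix analogue of Eq. (3.8), here for the Pauli matrix sigma_x (whose square is the identity):

```python
import numpy as np

def expm_taylor(A, terms=40):
    """Sum the Taylor series of Eq. (3.7) with the matrix A in place of x."""
    out = np.zeros_like(A, dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(terms):
        out = out + term
        term = term @ A / (k + 1)
    return out

sx = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.9
# Matrix analogue of Eq. (3.8): exp(i theta sigma_x) = cos(theta) I + i sin(theta) sigma_x
lhs = expm_taylor(1j * theta * sx)
rhs = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sx
assert np.allclose(lhs, rhs)
```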
''End Aside''&lt;br /&gt;
&lt;br /&gt;
===Density Matrix for Pure States===&lt;br /&gt;
&lt;br /&gt;
Now let us consider the object (a ''density matrix, or &lt;br /&gt;
density operator, of rank one'') &amp;lt;!-- \index{density matrix}\index{density&lt;br /&gt;
matrix!pure state} --&amp;gt;&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho = \left\vert\psi\right\rangle \left\langle \psi\right\vert,&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.9}}&amp;lt;br /&amp;gt;&lt;br /&gt;
which is just the outer product of two vectors.  For example, if &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\right\vert = (0,0,1),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.10}}&amp;lt;br /&amp;gt;&lt;br /&gt;
then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\mid\psi\right\rangle = \left\langle\psi\right\vert(\left\langle\psi\right\vert)^\dagger = 1.&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
However, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
 \left\vert\psi\right\rangle\left\langle\psi\right\vert = \left(\begin{array}{ccc}&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1 \end{array}\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.11}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Again &amp;lt;math&amp;gt;\left\vert \psi\right\rangle = \left\vert \psi(t)\right\rangle\,\!&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;\rho=\rho(t)\,\!&amp;lt;/math&amp;gt;.  If we&lt;br /&gt;
differentiate this with respect to &amp;lt;math&amp;gt;t\,\!&amp;lt;/math&amp;gt;,&amp;lt;!-- \index{Schr\&amp;quot;odinger Equation!&lt;br /&gt;
  for density matrix} --&amp;gt;&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{\partial \rho }{\partial t} &amp;amp;= &lt;br /&gt;
           \left(\frac{\partial \left\vert \psi\right\rangle}{\partial t}\right)\left\langle\psi\right\vert &lt;br /&gt;
            + \left\vert \psi\right\rangle\left(\frac{\partial \left\langle\psi\right\vert}{\partial t}\right)\\&lt;br /&gt;
                   &amp;amp;= (-iH)\rho + \rho (iH) = -i[H,\rho],&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.12}} &lt;br /&gt;
which is the Schrodinger equation for the density matrix, with solution,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho(t) = U\rho(0)U^\dagger.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.13}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This follows from &amp;lt;math&amp;gt;\left\vert\psi(t)\right\rangle\left\langle\psi(t)\right\vert =&lt;br /&gt;
U\left\vert\psi(0)\right\rangle\left\langle\psi(0)\right\vert U^\dagger\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
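As a quick numerical sketch of Eq.(3.13) (using numpy; the particular unitary and initial state below are illustrative, not from the text), evolving the state vector and forming the density matrix gives the same result as conjugating the density matrix directly:

```python
import numpy as np

# Hypothetical example: evolve a pure-state density matrix with a unitary.
# U is an arbitrary illustrative unitary (a real rotation), not from the text.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

psi0 = np.array([1.0, 0.0])          # |psi(0)> = |0>
rho0 = np.outer(psi0, psi0.conj())   # rho(0) = |psi(0)><psi(0)|

# Evolve the vector, then form the density matrix ...
psi_t = U @ psi0
rho_from_vector = np.outer(psi_t, psi_t.conj())

# ... or evolve the density matrix directly: rho(t) = U rho(0) U^dagger
rho_t = U @ rho0 @ U.conj().T

assert np.allclose(rho_from_vector, rho_t)
```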
&lt;br /&gt;
Consider our two-state system &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert 0\right\rangle = \left(\begin{array}{c} 1 \\ 0 \end{array}\right), &lt;br /&gt;
                   \;\;\; \mbox{and} \;\;\; &lt;br /&gt;
\left\vert 1\right\rangle = \left(\begin{array}{c} 0 \\ 1 \end{array}\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.14}}&amp;lt;br /&amp;gt;&lt;br /&gt;
A ''superposition'' of these two states is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0\left\vert 0\right\rangle + \alpha_1\left\vert 1\right\rangle &lt;br /&gt;
           = \left(\begin{array}{c} \alpha_0 \\ \alpha_1 \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.15}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers such that &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  The corresponding &lt;br /&gt;
''pure state (i.e., rank-one) density matrix'' is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_p = \left\vert\psi\right\rangle\left\langle\psi\right\vert&lt;br /&gt;
     = \left(\begin{array}{cc}&lt;br /&gt;
              |\alpha_0|^2 &amp;amp; \alpha_0 \alpha_1^* \\ &lt;br /&gt;
              \alpha_0^* \alpha_1 &amp;amp; |\alpha_1|^2 \end{array}\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.16}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Note that the superposition in Eq.[[#eq3.15|(3.15)]] can be obtained&lt;br /&gt;
from any pure state by a unitary transformation.  Here, the trace of&lt;br /&gt;
the density matrix is an important quantity; it is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(\rho_p) = |\alpha_0|^2 + |\alpha_1|^2 = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.17}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Notice also that the determinant of this matrix is zero:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\rho_p) = |\alpha_0|^2|\alpha_1|^2 - \alpha_0 \alpha_1^*\alpha_0^*&lt;br /&gt;
\alpha_1 = 0.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.18}}&amp;lt;br /&amp;gt;&lt;br /&gt;
To see this another way, note that the density operator of rank one&lt;br /&gt;
can be written as &amp;lt;math&amp;gt;U(\left\vert 0\right\rangle\left\langle0\right\vert)U^\dagger\,\!&amp;lt;/math&amp;gt;, so that the determinant&lt;br /&gt;
is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(U(\left\vert 0\right\rangle \left\langle 0\right\vert)U^\dagger) &amp;amp;= \det(U(\left\vert 0\right\rangle\left\langle 0\right\vert)U^{-1})\\&lt;br /&gt;
                            &amp;amp;=  \det(U)\det(\left\vert0\right\rangle\left\langle 0\right\vert)\frac{1}{\det(U)} \\&lt;br /&gt;
                            &amp;amp;= \det(\left\vert 0\right\rangle \left\langle 0\right\vert) = 0.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.19}}&lt;br /&gt;
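The trace and determinant properties of Eqs.(3.17)-(3.18) are easy to check numerically (the amplitudes below are illustrative values satisfying the normalization condition):

```python
import numpy as np

# Check Tr(rho_p) = 1 and det(rho_p) = 0 for a pure qubit state.
# alpha0, alpha1 are illustrative amplitudes with |a0|^2 + |a1|^2 = 1.
alpha0, alpha1 = 0.6, 0.8j
psi = np.array([alpha0, alpha1])
rho_p = np.outer(psi, psi.conj())   # Eq. (3.16)

assert np.isclose(np.trace(rho_p).real, 1.0)   # Eq. (3.17)
assert np.isclose(np.linalg.det(rho_p), 0.0)   # Eq. (3.18)
assert np.allclose(rho_p @ rho_p, rho_p)       # rank-one projector
```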
&lt;br /&gt;
===Measurements Revisited===&lt;br /&gt;
&lt;br /&gt;
If the state of a quantum system is described by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0\left\vert 0\right\rangle + \alpha_1\left\vert 1\right\rangle, &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.20}}&amp;lt;br /&amp;gt;&lt;br /&gt;
the probability of finding it in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; when measured in&lt;br /&gt;
the computational basis is &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt;.  However, this is a&lt;br /&gt;
particular superposition which could be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = U \left\vert 0\right\rangle.  &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.21}}&amp;lt;br /&amp;gt;&lt;br /&gt;
In the section entitled [[#Schrodinger's Equation|Schrodinger's Equation]] it was shown that this matrix &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; results&lt;br /&gt;
from the exponentiation of a Hermitian matrix, and, from the section entitled [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|The Pauli Matrices]], any &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
Hermitian matrix can be written in terms of the Pauli matrices.&amp;lt;!-- \index{Pauli matrices}--&amp;gt;  To make this explicit using standard conventions, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert \psi\right\rangle &amp;amp;= U\left\vert 0\right\rangle  \\&lt;br /&gt;
           &amp;amp;= \exp(-i\vec{n}\cdot\vec{\sigma} \theta) \left\vert 0\right\rangle \\&lt;br /&gt;
           &amp;amp;= (\mathbb{I}\cos(\theta) -i\vec{n}\cdot\vec{\sigma} \sin(\theta))\left\vert 0\right\rangle,&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.22}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector, &amp;lt;math&amp;gt;|\vec{n}|=1\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{n}\cdot\vec{\sigma} =&lt;br /&gt;
n_1\sigma_1+n_2\sigma_2+n_3\sigma_3\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
One can write this matrix out explicitly &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \exp(-i\vec{n}\cdot\vec{\sigma} \theta) &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; 1 \end{array}\right)\cos(\theta) \\&lt;br /&gt;
                        &amp;amp; \;\;\;   + (-i)\left[ n_1\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; 1 \\ &lt;br /&gt;
                                  1 &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_2\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; -i \\ &lt;br /&gt;
                                  i &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_3\left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; -1 \end{array}\right)\right]\sin(\theta) \\&lt;br /&gt;
                                &amp;amp;= &lt;br /&gt;
         \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta) -in_3\sin(\theta) &amp;amp; (-in_1-n_2)\sin(\theta) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta) &amp;amp; \cos(\theta) +in_3\sin(\theta)  \end{array}\right).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.23}}&lt;br /&gt;
Notice this is a  ''special unitary matrix.''  (See [[Appendix C - Vectors and Linear Algebra]], in particular the subsection [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)&lt;br /&gt;
&lt;br /&gt;
To see that any state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; for arbitrary coefficients&lt;br /&gt;
&amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; can be obtained by choosing &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
appropriately, the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; can be chosen as a starting point.  &lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
U\left\vert 0\right\rangle &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta) -in_3\sin(\theta) &amp;amp; (-in_1-n_2)\sin(\theta) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta) &amp;amp; \cos(\theta) +in_3\sin(\theta)  &lt;br /&gt;
         \end{array}\right)&lt;br /&gt;
       \left(\begin{array}{c} 1 \\ 0\end{array}\right) \\&lt;br /&gt;
         &amp;amp;=  \left(\begin{array}{c} &lt;br /&gt;
                            \cos(\theta) -in_3\sin(\theta)  \\ &lt;br /&gt;
                            (-in_1+n_2)\sin(\theta)\end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.24}}&lt;br /&gt;
For example, choosing &amp;lt;math&amp;gt;\theta=0\,\!&amp;lt;/math&amp;gt; gives the original state; choosing&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{n} = (0,1,0)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\theta = \pi/2\,\!&amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;; and choosing&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{n} = (0,1,0)\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\theta = \pi/4\,\!&amp;lt;/math&amp;gt; gives an equal superposition.  &lt;br /&gt;
In general, when the system is in the state  &amp;lt;math&amp;gt;\left\vert \psi\right\rangle = U\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
the probability of finding the state &amp;lt;math&amp;gt;\left\vert 0 \right\rangle \,\!&amp;lt;/math&amp;gt; when a measurement is made in the computational basis is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|\left\langle 0\right\vert U\left\vert 0\right\rangle|^2 &amp;amp;= |\cos(\theta) -in_3\sin(\theta)|^2   \\&lt;br /&gt;
                    &amp;amp;= \cos^2(\theta) +n_3^2\sin^2(\theta),  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.25}}&lt;br /&gt;
and the probability of finding &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|\left\langle 1\right\vert U\left\vert 0\right\rangle|^2 &amp;amp;= |(-in_1+n_2)\sin(\theta)|^2   \\&lt;br /&gt;
                    &amp;amp;= (n_1^2+n_2^2)\sin^2(\theta).   &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.26}}&lt;br /&gt;
Notice the probabilities add up to one if &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector.  &lt;br /&gt;
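The explicit form of Eq.(3.23) and the probabilities of Eqs.(3.25)-(3.26) can be verified numerically; the unit vector and angle below are arbitrary illustrative choices:

```python
import numpy as np

# Build U = exp(-i n.sigma theta) via Eq. (3.23) and check Eqs. (3.25)-(3.26).
# n and theta are arbitrary illustrative choices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

n = np.array([1.0, 2.0, 2.0]) / 3.0      # a unit vector
theta = 0.7
ndotsigma = n[0]*sx + n[1]*sy + n[2]*sz
U = I2*np.cos(theta) - 1j*np.sin(theta)*ndotsigma

p0 = abs(U[0, 0])**2                      # |<0|U|0>|^2
p1 = abs(U[1, 0])**2                      # |<1|U|0>|^2
assert np.isclose(p0, np.cos(theta)**2 + (n[2]**2)*np.sin(theta)**2)
assert np.isclose(p1, (n[0]**2 + n[1]**2)*np.sin(theta)**2)
assert np.isclose(p0 + p1, 1.0)           # probabilities sum to one
```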
&lt;br /&gt;
What this shows is that there is a transformation that takes the state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;, which has probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
probability 0 of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;, and transforms it&lt;br /&gt;
(using a &amp;lt;nowiki&amp;gt;''rotation''&amp;lt;/nowiki&amp;gt;) into a state with a different (and generic)&lt;br /&gt;
probability of each.  This means that the density matrix corresponding&lt;br /&gt;
to this system always has determinant zero, meaning (for a two-state system) it has one&lt;br /&gt;
eigenvalue 1 and one eigenvalue 0.  (The determinant is the&lt;br /&gt;
product of the eigenvalues.)&lt;br /&gt;
&lt;br /&gt;
===Density Matrix for Mixed States===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a system with &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; dimensions, a ''mixed state density matrix'' &lt;br /&gt;
(or density operator, see [[Appendix E - Density Operator: Extensions|Appendix E]]), &amp;lt;!-- \index{density matrix} \index{density operator} --&amp;gt; is a matrix which is used to&lt;br /&gt;
describe a more general state of a quantum system and can be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_D = \sum_i a_i \rho_i,&lt;br /&gt;
\,\! &amp;lt;/math&amp;gt;|3.27}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;a_i\geq 0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;{\sum}_i a_i=1\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;\rho_i\,\!&amp;lt;/math&amp;gt; are pure states.  There is also a generalization of the Bloch sphere, which is described in [[Appendix E - Density Operator: Extensions|Appendix E]].  &lt;br /&gt;
&lt;br /&gt;
The ''mixed state'' &amp;lt;!-- \index{density matrix!mixed state} --&amp;gt; density matrices are important in all descriptions of physical implementations of quantum information processing.  For this reason, a bit of labor should go into understanding the density matrix; the rest of this section is devoted to the physical interpretation and properties of this description of a quantum system.  The first description presented is called the ensemble interpretation of the density matrix.  This is perhaps the easiest to understand.  Another set of physical systems which are described by density matrices will be given elsewhere.&lt;br /&gt;
&lt;br /&gt;
====General Properties====&lt;br /&gt;
&lt;br /&gt;
In general, a density matrix has the following properties:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\rho = \rho^\dagger, &amp;amp;\;\;\; \mbox{it is hermitian}, \\&lt;br /&gt;
\rho \geq 0,\; &amp;amp;\;\;\; \mbox{it is positive semi-definite},&lt;br /&gt;
                         \\&lt;br /&gt;
\mbox{Tr}(\rho) = 1,\; &amp;amp;\;\;\; \mbox{it is normalized}. &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.28}}&lt;br /&gt;
If, in addition, it is a pure state, then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho^2 = \rho.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|3.29}}&amp;lt;br /&amp;gt;&lt;br /&gt;
The second property in Eq.[[#eq3.28|(3.28)]] really means that the eigenvalues of the density matrix are greater than or equal to zero.&lt;br /&gt;
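The three properties in Eq.(3.28), and the failure of the purity condition Eq.(3.29) for a genuine mixture, can be checked on a small example (the mixture weights and states below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Verify the three properties of Eq. (3.28) for an illustrative mixed state
# rho = a1 |0><0| + a2 |+><+|, with a1 + a2 = 1 (hypothetical weights).
ket0 = np.array([1.0, 0.0])
ketplus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.25*np.outer(ket0, ket0) + 0.75*np.outer(ketplus, ketplus)

assert np.allclose(rho, rho.conj().T)             # Hermitian
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positive semi-definite
assert np.isclose(np.trace(rho), 1.0)             # normalized
assert not np.allclose(rho @ rho, rho)            # not pure: rho^2 != rho
```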
&lt;br /&gt;
&lt;br /&gt;
====Density Matrix for a Mixed State: Two States====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A mixed state density matrix (for a two-state system) is a rank two density matrix, &amp;lt;math&amp;gt;\rho_m\,\!&amp;lt;/math&amp;gt;, which can be described by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_m = \left[a_1\rho_1 + a_2\rho_2\right],&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.30}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_1 = \left\vert\psi_1\right\rangle\left\langle \psi_1\right\vert\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\rho_2 = \left \vert \psi_2\right\rangle\left\langle \psi_2\right\vert \,\!&amp;lt;/math&amp;gt; &lt;br /&gt;
and &amp;lt;math&amp;gt;a_1 + a_2=1\,\!&amp;lt;/math&amp;gt;.  The &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; are probabilities and must sum to one.&lt;br /&gt;
(Note, if &amp;lt;math&amp;gt;\left\vert \psi_1\right\rangle=\left\vert \psi_2\right\rangle\,\!&amp;lt;/math&amp;gt;, or if one of the&lt;br /&gt;
&amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt; is zero, this reduces to a pure state.)  For &lt;br /&gt;
example, the probability of finding the state &amp;lt;math&amp;gt;\left\vert \psi_1\right\rangle\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;a_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and the probability of finding the state &amp;lt;math&amp;gt;\left\vert \psi_2\right\rangle\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;a_2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Description of Open Quantum Systems: An Example====&lt;br /&gt;
&lt;br /&gt;
One example of the utility of a density matrix is the following&lt;br /&gt;
statistical problem.  Let us consider the collection of two-state&lt;br /&gt;
systems, this will be a collection of electrons in a box and their&lt;br /&gt;
spin is a two-state system, being either up or down when measured.  If&lt;br /&gt;
a subset of these electrons was prepared in the state &amp;lt;nowiki&amp;gt;''up''&amp;lt;/nowiki&amp;gt; before&lt;br /&gt;
being put in the box, and the rest &amp;lt;nowiki&amp;gt;''down,''&amp;lt;/nowiki&amp;gt; then the description of&lt;br /&gt;
the system of particles is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho = a_u \left\vert\uparrow\right\rangle\left\langle\uparrow\right\vert +&lt;br /&gt;
         a_d\left\vert\downarrow\right\rangle\left\langle\downarrow\right\vert,&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.31}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where the fraction of  &amp;lt;nowiki&amp;gt;''up''&amp;lt;/nowiki&amp;gt; particles is &amp;lt;math&amp;gt;a_u\,\!&amp;lt;/math&amp;gt; and the fraction of &amp;lt;nowiki&amp;gt;''down''&amp;lt;/nowiki&amp;gt; is &amp;lt;math&amp;gt;a_d\,\!&amp;lt;/math&amp;gt;.  Our system is described by this density matrix because if a particle is chosen at random from the box and measured, the state of the particle is &amp;lt;math&amp;gt;\left\vert \uparrow\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;a_u\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert \downarrow\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;a_d\,\!&amp;lt;/math&amp;gt;.  This is known as the statistical&lt;br /&gt;
interpretation of the density operator. &lt;br /&gt;
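The ensemble interpretation can be sketched numerically: drawing particles at random from the box reproduces the diagonal entries of the density matrix as measurement frequencies (the fraction of up spins below is an arbitrary illustrative value):

```python
import numpy as np

# Ensemble interpretation, sketched numerically: sampling spins from the box
# reproduces the diagonal of rho = a_u |up><up| + a_d |down><down|.
# a_u = 0.7 is an arbitrary illustrative fraction.
rng = np.random.default_rng(0)
a_u = 0.7
rho = np.diag([a_u, 1 - a_u])   # Eq. (3.31) in the computational basis

# Draw many particles; each is "up" (outcome 0) with probability rho[0, 0].
outcomes = rng.random(200_000) < rho[0, 0]
assert abs(outcomes.mean() - a_u) < 0.01
```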
&lt;br /&gt;
&lt;br /&gt;
There is another example which is more relevant for our purposes.  Suppose&lt;br /&gt;
that for a certain system (again taking a two-state system as an example)&lt;br /&gt;
there is some probability &amp;lt;math&amp;gt;p\,\!&amp;lt;/math&amp;gt; for an error to occur, the error here&lt;br /&gt;
being a unitary operator &amp;lt;math&amp;gt;U_e\,\!&amp;lt;/math&amp;gt;.  Then the density matrix for the&lt;br /&gt;
system is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_e = (1-p)\left\vert\psi\right\rangle\left\langle\psi\right\vert + pU_e\left\vert\psi\right\rangle\left\langle\psi\right\vert U_e^\dagger.  &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.32}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This has the same form as Eq.[[#eq3.30|(3.30)]].  &lt;br /&gt;
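A small numerical instance of the error mixture in Eq.(3.32) (the choice of a bit-flip error and the value of p are illustrative, not fixed by the text):

```python
import numpy as np

# Error-channel mixture of Eq. (3.32): with probability p a unitary error
# U_e (here an illustrative bit flip, sigma_x) acts on |psi> = |0>.
p = 0.1
X = np.array([[0, 1], [1, 0]], dtype=complex)    # hypothetical U_e
psi = np.array([1, 0], dtype=complex)
rho = np.outer(psi, psi.conj())

rho_e = (1 - p)*rho + p*(X @ rho @ X.conj().T)
assert np.allclose(rho_e, np.diag([1 - p, p]))
assert np.linalg.matrix_rank(rho_e) == 2          # mixed: rank two
```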
&lt;br /&gt;
Note that in each &lt;br /&gt;
case the probabilities associated with the density matrix &amp;lt;math&amp;gt;p,1-p\,\!&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;a_u,a_d\,\!&amp;lt;/math&amp;gt;, (generally, the &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt;) are classical probabilities.  That&lt;br /&gt;
is, they are associated with a classical probability distribution--the&lt;br /&gt;
probability for error/no error and up/down.  These are not&lt;br /&gt;
probabilities associated with the superposition of the quantum state&lt;br /&gt;
in the equation &amp;lt;math&amp;gt;\left\vert \psi\right\rangle = \alpha_0 \left\vert 0\right\rangle + \alpha_1\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
given by the square of the moduli of the coefficients.  This is an&lt;br /&gt;
important distinction for the following reason.  The state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; can be taken to the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with a unitary&lt;br /&gt;
transformation.  This state is deterministic in the sense that the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; will be obtained from a measurement in the&lt;br /&gt;
computational basis since there is no probability for obtaining&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;.  However, for &amp;lt;math&amp;gt;0 &amp;lt; p &amp;lt; 1\,\!&amp;lt;/math&amp;gt; and a non-identity&lt;br /&gt;
operator &amp;lt;math&amp;gt;U_e\,\!&amp;lt;/math&amp;gt;, the matrix &amp;lt;math&amp;gt;\rho_e\,\!&amp;lt;/math&amp;gt; has rank two and thus can never have&lt;br /&gt;
probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; for either of the two states, &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Thus, we have maximum knowledge about a pure state since&lt;br /&gt;
there is a way to choose a measurement, perhaps after a unitary&lt;br /&gt;
transformation, which achieves a certain result with probability one.&lt;br /&gt;
For the mixed state density operator this is not possible.  The state &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho = \left(\begin{array}{cc} &lt;br /&gt;
              1/2 &amp;amp; 0 \\ &lt;br /&gt;
                0 &amp;amp; 1/2 \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.33}}&amp;lt;br /&amp;gt;&lt;br /&gt;
for which we have the least amount of knowledge is called the&lt;br /&gt;
maximally mixed state. &amp;lt;!-- \index{maximally mixed state! two qubits}--&amp;gt;  The&lt;br /&gt;
state could be either up or down with equal probability and neither is&lt;br /&gt;
a better guess.  If the two eigenvalues are not equal, then there is a&lt;br /&gt;
better guess, or bet, as to the result of a measurement and if one&lt;br /&gt;
eigenvalue is zero, there is a definite best guess.  &lt;br /&gt;
&lt;br /&gt;
To be more specific, independent of basis (unitary transformations),&lt;br /&gt;
one always has probability greater than zero of measuring&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \uparrow\right\rangle\,\!&amp;lt;/math&amp;gt; and probability greater than zero of measuring&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \downarrow\right\rangle\,\!&amp;lt;/math&amp;gt;. Thus the  state described by the density matrix is&lt;br /&gt;
a ''mixed state'' &amp;lt;!-- \index{mixed state density matrix}--&amp;gt; in the sense&lt;br /&gt;
that it can be considered a statistical mixture of the  two states&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert \uparrow\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \downarrow\right\rangle\,\!&amp;lt;/math&amp;gt;.  Because the classical&lt;br /&gt;
probabilities enter separately, this is significantly different from&lt;br /&gt;
a pure state density matrix, which is a special case of a density&lt;br /&gt;
matrix.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see that mixtures remain after a unitary transformation on the&lt;br /&gt;
system, note that a unitary matrix does not change the eigenvalues.  &lt;br /&gt;
This is because the eigenvalue equation is the same for a Hermitian&lt;br /&gt;
matrix and its corresponding diagonal matrix.  Let &amp;lt;math&amp;gt;\rho =&lt;br /&gt;
U\rho_d U^\dagger\,\!&amp;lt;/math&amp;gt;, then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(\rho -\lambda\mathbb{I}) &amp;amp;=&lt;br /&gt;
               \det(U(\rho_d-\lambda\mathbb{I})U^\dagger) \\&lt;br /&gt;
                        &amp;amp;=&lt;br /&gt;
                        \det(U)\det(\rho_d-\lambda\mathbb{I})\det(U^\dagger) \\&lt;br /&gt;
                        &amp;amp;=&lt;br /&gt;
                        \det(U)\det(\rho_d-\lambda\mathbb{I})\det(U^{-1}) \\&lt;br /&gt;
                        &amp;amp;= \det(\rho_d-\lambda\mathbb{I}).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.34}}&lt;br /&gt;
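The invariance of eigenvalues under unitary conjugation, Eq.(3.34), is easy to confirm numerically (the diagonal state and rotation below are illustrative):

```python
import numpy as np

# Numerical check of Eq. (3.34): conjugating by a unitary leaves the
# eigenvalues of a density matrix unchanged.  rho_d and U are illustrative.
rho_d = np.diag([0.8, 0.2])                    # a diagonal mixed state

phi = 1.1                                      # arbitrary rotation angle
U = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])    # a real unitary (rotation)
rho = U @ rho_d @ U.conj().T

# eigvalsh returns eigenvalues in ascending order
assert np.allclose(np.linalg.eigvalsh(rho), [0.2, 0.8])
```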
&lt;br /&gt;
====Two-State Example: Bloch Sphere====&lt;br /&gt;
&lt;br /&gt;
Since our interest is primarily in qubits, which are two-state&lt;br /&gt;
systems, we return again to an example.  &lt;br /&gt;
&lt;br /&gt;
Since the density matrix is Hermitian, a two-state density matrix&lt;br /&gt;
can be written in the so-called Bloch sphere &amp;lt;!-- \index{Bloch sphere}--&amp;gt;&lt;br /&gt;
representation, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_2 = \frac{1}{2}(\mathbb{I} + \vec{n}\cdot\vec{\sigma}),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.35}}&amp;lt;br /&amp;gt;&lt;br /&gt;
where, for the density matrix to be positive semi-definite, &amp;lt;math&amp;gt;|\vec{n}| \leq 1\,\!&amp;lt;/math&amp;gt;, and the&lt;br /&gt;
&amp;lt;math&amp;gt;\sigma_i\,\!&amp;lt;/math&amp;gt; are the Pauli matrices &amp;lt;!-- \index{Pauli matrices}--&amp;gt;&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{\sigma} = (\sigma_x,\sigma_y,\sigma_z) = \left(&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
              0 &amp;amp; 1 \\ &lt;br /&gt;
              1 &amp;amp; 0 \end{array}\right),&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
               0 &amp;amp; -i \\ &lt;br /&gt;
               i &amp;amp;  0 \end{array}\right),&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
              1 &amp;amp; 0 \\ &lt;br /&gt;
              0 &amp;amp; -1 \end{array}\right)&lt;br /&gt;
\right).&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.36}}&amp;lt;br /&amp;gt;&lt;br /&gt;
The matrix entries on the RHS of this equation are the [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|Pauli matrices]] discussed above.  It is not difficult to convince yourself that any Hermitian matrix can be written as a real linear combination of the three Pauli matrices and the identity.  The eigenvalues are given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_\pm = \frac{1\pm|\vec{n}|}{2}.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.37}}&amp;lt;br /&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt;|\vec{n}| = 1\,\!&amp;lt;/math&amp;gt;, the state is pure, i.e., the matrix &lt;br /&gt;
has rank one since it has one eigenvalue one and one eigenvalue zero.  If &amp;lt;math&amp;gt;|\vec{n}|&lt;br /&gt;
&amp;lt; 1\,\!&amp;lt;/math&amp;gt;, the density matrix represents a mixed state since the rank is&lt;br /&gt;
greater than one--there are two non-zero eigenvalues.  This leads to&lt;br /&gt;
the following picture: the pure states lie on the surface of the&lt;br /&gt;
sphere (&amp;lt;math&amp;gt;\vec{n}\cdot \vec{n} =1\,\!&amp;lt;/math&amp;gt;), and mixed states lie in the interior of&lt;br /&gt;
the sphere with the maximally mixed state at the origin.  This&lt;br /&gt;
representation is attributed to Bloch, hence the name Bloch sphere.  &lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;math&amp;gt;\rho^2 =\rho\,\!&amp;lt;/math&amp;gt;, the condition that &amp;lt;math&amp;gt;\vec{n}\cdot\vec{n} =1\,\!&amp;lt;/math&amp;gt; for a pure&lt;br /&gt;
state can also be determined.  The square in the Bloch sphere&lt;br /&gt;
representation yields&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\rho_2^2 = \frac{1}{4}\left(\mathbb{I} + 2\vec{n}\cdot\vec{\sigma} + (\vec{n}\cdot\vec{\sigma})^2\right), &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.38}}&amp;lt;br /&amp;gt;&lt;br /&gt;
and using &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma_i \sigma_j = \mathbb{I}\delta_{ij} + i\epsilon_{ijk}\sigma_k,&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.39}}&amp;lt;br /&amp;gt;&lt;br /&gt;
then &amp;lt;math&amp;gt;\rho_2^2 =\rho_2\,\!&amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt;\vec{n}\cdot\vec{n} =1\,\!&amp;lt;/math&amp;gt;.  This technique is&lt;br /&gt;
used for higher dimensions.  See [[Appendix E - Density Operator: Extensions|Appendix E]].&lt;br /&gt;
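The eigenvalue formula Eq.(3.37) and the purity criterion can be checked directly in the Bloch representation (the Bloch vectors below are illustrative, one inside the sphere and one on its surface):

```python
import numpy as np

# Bloch sphere check: rho_2 = (I + n.sigma)/2 has eigenvalues (1 +/- |n|)/2
# (Eq. (3.37)), and rho^2 = rho exactly when |n| = 1.  The vectors n are
# illustrative choices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_rho(n):
    return 0.5*(np.eye(2) + n[0]*sx + n[1]*sy + n[2]*sz)

n_mixed = np.array([0.3, 0.0, 0.4])    # |n| = 0.5, a mixed state
rho = bloch_rho(n_mixed)
r = np.linalg.norm(n_mixed)
assert np.allclose(np.linalg.eigvalsh(rho), [(1 - r)/2, (1 + r)/2])

n_pure = np.array([0.0, 0.6, 0.8])     # |n| = 1, a pure state
rho_p = bloch_rho(n_pure)
assert np.allclose(rho_p @ rho_p, rho_p)
```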
&lt;br /&gt;
&lt;br /&gt;
Two density matrices &amp;lt;math&amp;gt;\rho_1=(1/2)(\mathbb{I} +\vec{n}\cdot\vec{\sigma})\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\rho_2=(1/2)(\mathbb{I} +\vec{m}\cdot\vec{\sigma})\,\!&amp;lt;/math&amp;gt; correspond to orthogonal &lt;br /&gt;
states when &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Tr}(\rho_1\rho_2) &amp;amp;= \frac{1}{4}\mbox{Tr}\big(\mathbb{I} + (\vec{n}\cdot\vec{\sigma})(\vec{m}\cdot\vec{\sigma})\big) \\&lt;br /&gt;
                  &amp;amp;= \frac{1}{2}(1+\vec{n}\cdot\vec{m}) =0.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|3.40}}&amp;lt;br /&amp;gt;&lt;br /&gt;
This implies that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{n}\cdot\vec{m} = |\vec{n}||\vec{m}|\cos(\theta) = -1.&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.41}}&amp;lt;br /&amp;gt;&lt;br /&gt;
Since the magnitudes must be one, orthogonal states correspond to &lt;br /&gt;
pure states on the surface of the sphere which are represented by &lt;br /&gt;
antipodal points.&lt;br /&gt;
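The antipodal-point statement of Eqs.(3.40)-(3.41) can be verified for a concrete (illustrative) unit Bloch vector:

```python
import numpy as np

# Antipodal Bloch vectors give orthogonal pure states: Tr(rho_1 rho_2) = 0
# when m = -n (Eqs. (3.40)-(3.41)).  n is an illustrative unit vector.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_rho(n):
    return 0.5*(np.eye(2) + n[0]*sx + n[1]*sy + n[2]*sz)

n = np.array([2.0, 3.0, 6.0]) / 7.0   # |n| = 1
rho1, rho2 = bloch_rho(n), bloch_rho(-n)

# Tr(rho1 rho2) = (1 + n.m)/2 = 0 for m = -n
assert np.isclose(np.trace(rho1 @ rho2).real, 0.0)
```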
&lt;br /&gt;
===Expectation Values===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The expectation value &amp;lt;!-- \index{expectation value}--&amp;gt; &lt;br /&gt;
of an operator &amp;lt;math&amp;gt;\mathcal{O}\,\!&amp;lt;/math&amp;gt;, is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\langle \mathcal{O} \rangle = \mbox{Tr}(\rho \mathcal{O}),&lt;br /&gt;
&amp;lt;/math&amp;gt;|3.42}}&amp;lt;br /&amp;gt;&lt;br /&gt;
and is the &amp;quot;average value&amp;quot; of the operator.  For a pure state &lt;br /&gt;
&amp;lt;math&amp;gt;\rho_p = \left\vert\psi\right\rangle\left\langle\psi\right\vert\,\!&amp;lt;/math&amp;gt;, this reduces to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(\langle \mathcal{O} \rangle)_p = \left\langle\psi\right\vert \mathcal{O}\left\vert \psi\right\rangle.  &lt;br /&gt;
&amp;lt;/math&amp;gt;|3.43}}&amp;lt;br /&amp;gt;&lt;br /&gt;
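The equivalence of Eq.(3.42) and Eq.(3.43) for a pure state can be confirmed with a short check (the operator sigma_z and the amplitudes are illustrative choices):

```python
import numpy as np

# Tr(rho O) reduces to <psi|O|psi> for a pure state (Eqs. (3.42)-(3.43)).
# O = sigma_z and the amplitudes of psi are illustrative choices.
O = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z
psi = np.array([0.6, 0.8j])
rho_p = np.outer(psi, psi.conj())

expect_trace = np.trace(rho_p @ O)
expect_braket = psi.conj() @ O @ psi
assert np.isclose(expect_trace, expect_braket)
assert np.isclose(expect_trace.real, 0.36 - 0.64)   # |a0|^2 - |a1|^2
```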
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 4 - Entanglement#Introduction|Continue to '''Chapter 4 - Entanglement''']]&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=954</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=954"/>
		<updated>2011-02-14T20:34:38Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by&lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be the topics of&lt;br /&gt;
later sections. The fourth and fifth will be discussed in this chapter.&lt;br /&gt;
&lt;br /&gt;
Is this backwards? Not from a computer science perspective or from a motivational perspective.&lt;br /&gt;
Besides, to a large extent, the first two rely very heavily on experimental physics&lt;br /&gt;
and engineering. These topics are primarily beyond the scope of this introductory material,&lt;br /&gt;
but will be treated superficially in Chapter 6.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system which has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and a qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, or the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
Thus this vector is normalized, i.e., it has magnitude (length) one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Therefore,&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
&lt;br /&gt;
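The column-vector form above is easy to mirror numerically. The following short NumPy sketch (purely illustrative; the amplitude values are arbitrary choices, not taken from the text) builds a qubit state as in Eq. (2.1) and checks the normalization condition of Eq. (2.2):

```python
import numpy as np

# Computational basis states of Eq. (2.3).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A generic superposition, Eq. (2.1); amplitudes chosen arbitrarily.
alpha0, alpha1 = 0.6, 0.8j
psi = alpha0 * ket0 + alpha1 * ket1

# The column-vector form of Eq. (2.4).
print(np.allclose(psi, [alpha0, alpha1]))                # True

# Normalization, Eq. (2.2): |alpha0|^2 + |alpha1|^2 = 1.
print(np.isclose(abs(alpha0)**2 + abs(alpha1)**2, 1.0))  # True
```

Any pair of complex amplitudes whose squared magnitudes sum to one would serve equally well here.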
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
it should be possible to transform any valid state into any other. Since a qubit state&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed-system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram by a box labeled with the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt; then we write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate which implements the unitary transformation&lt;br /&gt;
&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right.&lt;br /&gt;
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate implements a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations which are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle,&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
for this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters. Therefore, it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should come as no surprise, since matrices in general&lt;br /&gt;
do not commute. However, this situation arises so often in quantum mechanics that the&lt;br /&gt;
difference between the two orderings is given its own name and notation. The difference is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
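These order-dependent products are easy to confirm numerically. The following NumPy sketch (illustrative only, not part of the text) builds the X, Z, and Y gates of Eqs. (2.7), (2.9), and (2.12) and checks both products as well as the commutator of Eq. (2.15):

```python
import numpy as np

# The bit-flip, phase-flip, and "y" gates.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Order matters: XZ = -iY but ZX = +iY.
print(np.allclose(X @ Z, -1j * Y))   # True
print(np.allclose(Z @ X,  1j * Y))   # True

# The commutator of Eq. (2.14); for X and Z it equals -2iY, Eq. (2.15).
def commutator(A, B):
    return A @ B - B @ A

print(np.allclose(commutator(X, Z), -2j * Y))  # True
```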
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it is helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states. This is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
&lt;br /&gt;
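The Hadamard relations of Eq. (2.17) can be verified with a few lines of NumPy (an illustrative sketch, not from the text). The last check also confirms that H is its own inverse, a fact used implicitly throughout the chapter:

```python
import numpy as np

# Hadamard gate, Eq. (2.16).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Eq. (2.17): equal superpositions from either basis state.
print(np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2)))  # True
print(np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2)))  # True

# H is unitary and its own inverse: HH = I.
print(np.allclose(H @ H, np.eye(2)))                      # True
```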
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to this latter point in the&lt;br /&gt;
next chapter.  &lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt;, a sum over the repeated index &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is implied, and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner&lt;br /&gt;
product'' which is defined, for matrices&amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
By subtracting the same equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
so&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
&lt;br /&gt;
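Both Eq. (2.21) and the orthogonality relation Eq. (2.23) can be verified exhaustively over all index pairs. The sketch below (illustrative; the `eps` helper is a standard closed form for the Levi-Civita symbol, not something defined in the text) checks every case:

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3 (i.e. X, Y, Z).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def eps(i, j, k):
    # Levi-Civita symbol (indices 0,1,2 here rather than 1,2,3).
    return int((i - j) * (j - k) * (k - i) / 2)

# Eq. (2.21): sigma_i sigma_j = I delta_ij + i eps_ijk sigma_k (sum over k),
# and Eq. (2.23): Tr(sigma_i sigma_j) = 2 delta_ij.
for i in range(3):
    for j in range(3):
        rhs = I2 * (i == j) + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
        assert np.isclose(np.trace(sigma[i] @ sigma[j]), 2 * (i == j))
print("Eqs. (2.21) and (2.23) hold for all index pairs")
```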
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra|Section D.6]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra|Section D.6]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
&lt;br /&gt;
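The correspondence between the tensor-product states of Eq. (2.26) and the four-component vectors of Eq. (2.27) is exactly NumPy's Kronecker product, as this small sketch (illustrative only) shows:

```python
import numpy as np

# One-qubit basis states.
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Two-qubit basis via the tensor (Kronecker) product, Eqs. (2.26)-(2.27).
basis2 = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]
for label, v in zip(["00", "01", "10", "11"], basis2):
    print(label, v)

# e.g. |10> = |1> tensor |0> is the third standard basis vector of C^4.
print(np.array_equal(np.kron(ket1, ket0), [0, 0, 1, 0]))  # True
```

The ordering of the list matches the ordered (binary) basis convention introduced below Eq. (2.28).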
The extension to three qubits is straight-forward&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
may treat this as an ''ordered basis'', in which case the following notation is also perfectly acceptable:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties which may&lt;br /&gt;
share some information or may wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice may be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straight-forward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If any&lt;br /&gt;
arbitrary quantum gate can be implemented using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'', or the condition of ''universality'' is said to have&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way of&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit which is next to the one we would like it to interact with, perform&lt;br /&gt;
the entangling gate between the two, and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one which is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. This gate is used so often that it is discussed here in detail.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second qubit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;, and is otherwise left alone. This action has a compact algebraic representation. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; each be zero or one.&lt;br /&gt;
Then the CNOT is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x,y}\right\rangle \overset{CNOT}{\rightarrow} \left\vert{x,x\oplus y}\right\rangle.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.2. &lt;br /&gt;
The first qubit, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, at the top of the diagram, is called the&lt;br /&gt;
''control bit'' and the second, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, at the bottom,&lt;br /&gt;
is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.2: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
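The table of Eq. (2.32) and the XOR rule of Eq. (2.33) can be checked together with a short NumPy sketch (illustrative only; the `ket` dictionary is just a convenience, not notation from the text):

```python
import numpy as np

# CNOT matrix of Eq. (2.31), acting on |x,y> = |x> tensor |y>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket = {0: np.array([1, 0], dtype=complex),
       1: np.array([0, 1], dtype=complex)}

# Verify Eq. (2.33): |x,y> -> |x, x XOR y> for all four basis inputs.
for x in (0, 1):
    for y in (0, 1):
        out = CNOT @ np.kron(ket[x], ket[y])
        assert np.allclose(out, np.kron(ket[x], ket[x ^ y]))
print("CNOT sends |x,y> to |x, x+y mod 2> for all basis states")
```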
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.3, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
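The block structure of Eq. (2.34) suggests a simple way to build any controlled-U numerically: keep the identity on the control-&amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; block and place &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the control-&amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; block. The sketch below is illustrative; the helper name `controlled` is ours, not the text's:

```python
import numpy as np

def controlled(U):
    # Block-diagonal form of Eq. (2.34): identity on the |0>-control
    # block, U on the |1>-control block.
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Controlled-X reproduces the CNOT matrix of Eq. (2.31).
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(np.allclose(controlled(X), CNOT))          # True

# A controlled-U is unitary whenever U is.
CZ = controlled(Z)
print(np.allclose(CZ.conj().T @ CZ, np.eye(4)))  # True
```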
For example, the controlled-phase gate is given in [[#Figure 2.4|Fig. 2.4]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straight-forward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.5 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines causes no confusion since the target and control&lt;br /&gt;
are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit which, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.4&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a Controlled-phase gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;HZH = X\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.6 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
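The identity &amp;lt;math&amp;gt;HZH = X\,\!&amp;lt;/math&amp;gt; behind this kind of circuit equivalence is quick to verify numerically. The sketch below (illustrative; the specific two-qubit example, a CNOT realized by Hadamard-conjugating the target of a controlled-Z, is our assumption about the sort of equivalence Fig. 2.6 depicts) checks both the single-qubit identity and its two-qubit consequence:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# The single-qubit identity: HZH = X.
print(np.allclose(H @ Z @ H, X))        # True

# One consequence: conjugating the target of a controlled-Z by
# Hadamards yields a CNOT, (I tensor H) CZ (I tensor H) = CNOT.
CZ = np.diag([1, 1, 1, -1]).astype(complex)
IH = np.kron(np.eye(2), H)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(np.allclose(IH @ CZ @ IH, CNOT))  # True
```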
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics, and therefore for&lt;br /&gt;
classical bits in classical computers, one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons which will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided as motivation for&lt;br /&gt;
distinguishing quantum states from classical states.  This example of&lt;br /&gt;
two wells with one particle can be used (cautiously) here as well.&lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  Recall&lt;br /&gt;
that the particle is in this state, and that this state does not mean&lt;br /&gt;
that the particle is in either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt;; it means that the particle is in both at the same time.&lt;br /&gt;
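The measurement rule just stated is easy to simulate. The following sketch (illustrative only; the amplitudes and the use of a seeded random generator are our choices) samples repeated computational-basis measurements and recovers the probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; as frequencies:

```python
import numpy as np

# |psi> = alpha0|0> + alpha1|1> with |alpha0|^2 = 1/4, |alpha1|^2 = 3/4.
alpha0, alpha1 = 0.5, np.sqrt(3) / 2
probs = [abs(alpha0)**2, abs(alpha1)**2]

# Simulate many computational-basis measurements: outcome 0 occurs
# with probability |alpha0|^2 and outcome 1 with probability |alpha1|^2.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=100_000, p=probs)

print(np.mean(samples == 1))   # close to 0.75
```

Note that each individual run yields a definite 0 or 1; only the statistics over many runs reveal the amplitudes.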
&lt;br /&gt;
Just to emphasize the point, it really ''cannot'' be thought of as &lt;br /&gt;
being in state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; ''or'' in&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  To see this, take &amp;lt;math&amp;gt;\alpha_0=\alpha_1=1/\sqrt{2}\,\!&amp;lt;/math&amp;gt; and act on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; by a unitary transformation, has probability zero of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability one of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
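Mermin's argument can be checked numerically. The following short sketch (an added illustration, not part of the original text; it assumes NumPy is available) applies the Hadamard matrix to the equal superposition and confirms Eq. (2.36):

```python
import numpy as np

# computational basis states
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Hadamard gate
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# equal superposition, alpha_0 = alpha_1 = 1/sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)

# Hadamard takes this superposition back to the basis state ket0,
# so a measurement now gives that outcome with probability one
out = H @ psi
assert np.allclose(out, ket0)
assert np.isclose(abs(out[0]) ** 2, 1.0)
```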
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
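Equations (2.37)-(2.39) can be illustrated numerically. In the sketch below (an added illustration with arbitrarily chosen amplitudes, assuming NumPy), the inner product picks out the amplitude and its squared magnitude gives the probability:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)

# a normalized qubit state with sample amplitudes alpha_0, alpha_1
alpha0, alpha1 = 0.6, 0.8j
psi = np.array([alpha0, alpha1])

# Eq. (2.37): the inner product of ket0 with psi picks out alpha_0
# (np.vdot conjugates its first argument, as Dirac notation requires)
amp = np.vdot(ket0, psi)
assert np.isclose(amp, alpha0)

# Eq. (2.39): the probability is the squared magnitude of that amplitude
prob = abs(amp) ** 2
assert np.isclose(prob, abs(alpha0) ** 2)
```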
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection: the&lt;br /&gt;
state is projected onto the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;, and the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.7.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input to another operation.  The unitary&lt;br /&gt;
in Figure 2.8 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the input information, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask: what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the characteristic property of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
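The idempotence of the ordinary three-dimensional projection can be checked directly. A minimal sketch (an added illustration with an arbitrarily chosen vector, assuming NumPy):

```python
import numpy as np

# an ordinary real 3-vector with components along x-hat, y-hat, z-hat
v = np.array([3.0, -1.0, 2.0])
xhat = np.array([1.0, 0.0, 0.0])

# Eq. (2.40): project onto the x axis
vx_part = xhat * np.dot(xhat, v)
assert np.allclose(vx_part, [3.0, 0.0, 0.0])

# Eq. (2.41): projecting a second time reproduces the same result
again = xhat * np.dot(xhat, vx_part)
assert np.allclose(again, vx_part)
```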
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over all projectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
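The projector properties of Eqs. (2.42)-(2.46) can be verified numerically. The following sketch (an added illustration with sample amplitudes, assuming NumPy) checks idempotence and the completeness sum for the one-qubit basis:

```python
import numpy as np

# projectors onto the two computational basis states; P0 is Eq. (2.42)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

# Eq. (2.45): the projector squared equals the projector
assert np.allclose(P0 @ P0, P0)

# Eqs. (2.43)-(2.44): acting on (alpha0, alpha1) keeps only alpha0,
# and acting a second time changes nothing
psi = np.array([0.6, 0.8j])
assert np.allclose(P0 @ psi, [0.6, 0])
assert np.allclose(P0 @ (P0 @ psi), P0 @ psi)

# Eq. (2.46): the projectors onto a full basis sum to the identity
assert np.allclose(P0 + P1, np.eye(2))
```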
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that since &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression, if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
expression is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore changing &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; by an overall phase has no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-i2\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, as we will see later, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference.&lt;br /&gt;
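The phase-invariance calculation of Eq. (2.48) can be checked directly. A minimal sketch (an added illustration with arbitrarily chosen state and angle, assuming NumPy):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
psi = np.array([0.6, 0.8j])

def prob(state, basis_ket):
    # Eq. (2.47): squared magnitude of the overlap with a basis state
    return abs(np.vdot(basis_ket, state)) ** 2

# multiply psi by a global phase, as in Eq. (2.48)
theta = 0.73
psi_prime = np.exp(-1j * theta) * psi

# the measurement probability is unchanged by the global phase
assert np.isclose(prob(psi, ket0), prob(psi_prime, ket0))
```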
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=953</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=953"/>
		<updated>2011-02-14T20:16:11Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by&lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be the topics of&lt;br /&gt;
later sections. The fourth and fifth will be discussed in this chapter.&lt;br /&gt;
&lt;br /&gt;
Is this order backwards? Not from a computer science perspective or from a motivational perspective.&lt;br /&gt;
Besides, to a large extent, the first two rely very heavily on experimental physics&lt;br /&gt;
and engineering. These topics are primarily beyond the scope of this introductory material,&lt;br /&gt;
but will be treated superficially in Chapter 6.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system which has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and a qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, or the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
Thus this vector is normalized, i.e., it has magnitude, or length, one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Therefore,&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
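Equations (2.1)-(2.4) translate directly into vector arithmetic. A minimal sketch (an added illustration with sample amplitudes, assuming NumPy):

```python
import numpy as np

# computational basis states, Eq. (2.3)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# a qubit state, Eqs. (2.1) and (2.4)
alpha0, alpha1 = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = alpha0 * ket0 + alpha1 * ket1
assert np.allclose(psi, [alpha0, alpha1])

# normalization, Eq. (2.2)
assert np.isclose(abs(alpha0) ** 2 + abs(alpha1) ** 2, 1.0)
```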
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector, and in some cases (but not all)&lt;br /&gt;
they correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt; then we write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate which implements the unitary transformation&lt;br /&gt;
&amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right.&lt;br /&gt;
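The left-to-right circuit convention versus the right-to-left equation order can be made concrete. In the sketch below (an added illustration; the particular choices of V as the x gate and U as the Hadamard are hypothetical, just to have two unitaries to compose; assumes NumPy):

```python
import numpy as np

# two sample one-qubit unitaries (hypothetical choices for illustration)
V = np.array([[0, 1], [1, 0]], dtype=complex)                 # x gate
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard

psi = np.array([0.6, 0.8j])

# "V first, then U" in the circuit reads UV in the equation, Eq. (2.6)
step_by_step = U @ (V @ psi)
composed = (U @ V) @ psi
assert np.allclose(step_by_step, composed)
```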
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations which are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason, it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters. Therefore, it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics, that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
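The products and the commutator quoted above can be verified with a few matrix multiplications. A minimal sketch (an added illustration, assuming NumPy):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# XZ equals -iY while ZX equals iY: the order matters
assert np.allclose(X @ Z, -1j * Y)
assert np.allclose(Z @ X, 1j * Y)

# Eq. (2.15): the commutator [X, Z] = XZ - ZX = -2iY
commutator = X @ Z - Z @ X
assert np.allclose(commutator, -2j * Y)
```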
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it is helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states. This is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
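The Hadamard actions above can be checked numerically; note that the matrix definition of H carries an overall factor of 1/√2 that keeps the output normalized. A minimal sketch (an added illustration with sample amplitudes, assuming NumPy):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Eq. (2.17): H takes each basis state to an equal superposition
assert np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2))
assert np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2))

# on a generic state the amplitudes become (alpha0 plus/minus alpha1)/sqrt(2)
alpha0, alpha1 = 0.6, 0.8j
psi = np.array([alpha0, alpha1])
expected = np.array([alpha0 + alpha1, alpha0 - alpha1]) / np.sqrt(2)
assert np.allclose(H @ psi, expected)
```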
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to this latter point in the&lt;br /&gt;
next chapter.  &lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1-ia_2 \\ &lt;br /&gt;
                a_1+ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
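The expansion of Eq. (2.20) can be checked, and the coefficients recovered from the trace, which anticipates the orthogonality relation of Eq. (2.23). A minimal sketch (an added illustration with arbitrarily chosen real coefficients, assuming NumPy):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Eq. (2.20): a Hermitian matrix built from real coefficients a_0..a_3
a0, a1, a2, a3 = 0.5, -1.2, 0.3, 2.0
A = a0 * I2 + a1 * X + a2 * Y + a3 * Z

# diagonal entries are a0 plus a3 and a0 minus a3, and A is Hermitian
assert np.isclose(A[0, 0], a0 + a3)
assert np.isclose(A[1, 1], a0 - a3)
assert np.allclose(A, A.conj().T)

# each coefficient is recovered by a trace, using Eq. (2.23)
assert np.isclose(np.trace(X @ A).real / 2, a1)
assert np.isclose(np.trace(Y @ A).real / 2, a2)
assert np.isclose(np.trace(Z @ A).real / 2, a3)
assert np.isclose(np.trace(A).real / 2, a0)
```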
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner&lt;br /&gt;
product'' which is defined, for matrices&amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the same equation with the product reversed gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
this reduces to&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
&lt;br /&gt;
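The Pauli product rule and the commutators it implies can be checked numerically; the following is a minimal sketch (not part of the text) using numpy, with a small hypothetical helper `eps` for the Levi-Civita symbol:

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2} (illustrative helper)."""
    return (i - j) * (j - k) * (k - i) / 2

# Check sigma_i sigma_j = I delta_ij + i sum_k eps_ijk sigma_k  (Eq. 2.21)
for i in range(3):
    for j in range(3):
        rhs = I2 * (i == j) + 1j * sum(eps(i, j, k) * s[k] for k in range(3))
        assert np.allclose(s[i] @ s[j], rhs)

# Check the commutator, e.g. the 1,2 case of Eq. 2.25
assert np.allclose(s[0] @ s[1] - s[1] @ s[0], 2j * s[2])
```

Each product and commutator agrees with Eqs. (2.21) and (2.25) term by term.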
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
the basis states for the pair are found by taking the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra|Section D.6]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra|Section D.6]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
&lt;br /&gt;
The extension to three qubits is straightforward:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these labels as the numbers zero through seven. We may&lt;br /&gt;
therefore treat this as an ''ordered basis'', making the following notation equally acceptable:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
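The correspondence between binary strings and positions in the ordered basis can be seen directly with a tensor (Kronecker) product; this is an illustrative numpy sketch, not from the text:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# |101> = |1> x |0> x |1> is the 8-component vector whose single 1
# sits in slot 5, matching the binary reading 101 = 5 of Eq. 2.29.
ket101 = np.kron(np.kron(ket1, ket0), ket1)
assert np.argmax(ket101) == 0b101  # slot 5
```

The leftmost factor corresponds to the first physical qubit, so the ordering of the product matters, as the text emphasizes below.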
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, are often used to represent distant parties who may&lt;br /&gt;
share some information or may wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice may be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;, or, if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straightforward and can be written as&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
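A general &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-qubit state is just a normalized vector of &amp;lt;math&amp;gt;2^n\,\!&amp;lt;/math&amp;gt; complex amplitudes; the following sketch (an illustration, not from the text) builds one at random:

```python
import numpy as np

# A random n-qubit state: 2^n complex amplitudes alpha_i, normalized
# so that the sum of |alpha_i|^2 equals 1.
n = 3
rng = np.random.default_rng(0)
alpha = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
alpha /= np.linalg.norm(alpha)
assert np.isclose(np.sum(np.abs(alpha)**2), 1.0)
```

The exponential growth of the amplitude vector with &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; is what makes classical simulation of many qubits expensive.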
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If any quantum&lt;br /&gt;
gate can be implemented using a particular set of quantum gates, that set is said to be a&lt;br /&gt;
''universal set of gates'', or the condition of ''universality'' is said to have been met by&lt;br /&gt;
this set. It turns out that there is a theorem which provides one way of identifying a&lt;br /&gt;
universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one does not need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit which is next to the one we would like it to interact with, perform the&lt;br /&gt;
entangling gate between the two, and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one which is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled-NOT) gate, which flips a target qubit if a control qubit is in the state&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. This gate is used so often that it is discussed here in detail.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second qubit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and is otherwise left alone. Algebraically, let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; each be zero or one.&lt;br /&gt;
Then the CNOT is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x,y}\right\rangle \overset{CNOT}{\rightarrow} \left\vert{x,x\oplus y}\right\rangle.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.2. &lt;br /&gt;
The first qubit, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, at the top of the diagram, is called the&lt;br /&gt;
''control bit'' and the second, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, at the bottom of the diagram,&lt;br /&gt;
is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.2: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
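The matrix of Eq. (2.31) and the rule of Eq. (2.33) are the same gate; a minimal numpy check (illustrative, not part of the text):

```python
import numpy as np

# CNOT of Eq. 2.31: flips the target (second) qubit when the control
# (first) qubit is 1, implementing the map of Eq. 2.33 (y goes to x XOR y).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket = lambda x, y: np.eye(4)[2 * x + y]   # basis vector for the pair (x, y)

for x in (0, 1):
    for y in (0, 1):
        assert np.array_equal(CNOT @ ket(x, y), ket(x, x ^ y))
```

All four rows of the table in Eq. (2.32) are reproduced by the loop.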
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.3, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the controlled-phase gate is shown in [[#Figure 2.4|Fig. 2.4]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
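The block structure of Eq. (2.34) makes the controlled-U easy to build for any 2x2 unitary; the function name `controlled` below is an illustrative choice, not from the text:

```python
import numpy as np

def controlled(U):
    """Embed a 2x2 unitary U as a controlled-U gate (Eq. 2.34):
    identity on the control-0 block, U on the control-1 block."""
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

# The controlled-phase gate is the special case U = diag(1, e^{i phi}).
phi = np.pi / 4
CP = controlled(np.diag([1, np.exp(1j * phi)]))
assert np.allclose(CP @ CP.conj().T, np.eye(4))   # still unitary
```

Taking U to be the Pauli X matrix recovers the CNOT of Eq. (2.31).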
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.5 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit which, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.4&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a Controlled-phase gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;HZH = X\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.6 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
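The identity behind the circuit equivalence in Fig. 2.6 can be verified in one line of numpy (an illustrative check, not part of the text):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                  # Pauli X
Z = np.array([[1, 0], [0, -1]])                 # Pauli Z

# HZH = X: conjugating Z by Hadamards yields X, so the two circuits
# of Fig. 2.6 implement the same two-qubit unitary.
assert np.allclose(H @ Z @ H, X)
```

Identities of this kind are the basic tool for simplifying larger circuits.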
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics, and therefore for&lt;br /&gt;
classical bits in classical computers, one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons which will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided as motivation for&lt;br /&gt;
distinguishing quantum states from classical states.  This example of&lt;br /&gt;
two wells with one particle can be used (cautiously) here as well.&lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  Recall&lt;br /&gt;
that the particle really is in this state: it is not in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; ''or'' the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt;; it is in both at the same time. &lt;br /&gt;
&lt;br /&gt;
Just to emphasize the point, it really ''cannot'' be thought of as &lt;br /&gt;
being in state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; ''or'' in&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert 1\right\rangle \,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  To see this, take &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt; and act on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; by a unitary transformation, has probability zero of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability one of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed, that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection and the&lt;br /&gt;
state is projected onto the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; and the same properties are true of this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
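The measurement rule just summarized amounts to sampling an outcome with the Born probabilities and replacing the state by the corresponding basis vector; a minimal sketch (illustrative, not part of the text):

```python
import numpy as np

# Measurement in the computational basis: outcome i occurs with
# probability |alpha_i|^2, and the post-measurement state is the
# basis vector for i.
rng = np.random.default_rng()
alpha = np.array([0.5, np.sqrt(0.75)])   # example single-qubit amplitudes
probs = np.abs(alpha)**2                 # Born probabilities
i = rng.choice(len(alpha), p=probs)      # sampled outcome
post = np.eye(len(alpha))[i]             # projected (collapsed) state
assert np.isclose(probs.sum(), 1.0)
```

Repeating the sample many times recovers the probabilities, but any single run yields exactly one outcome.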
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.7.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for a&lt;br /&gt;
subsequent operation.  The unitary in Figure 2.8 depends upon the&lt;br /&gt;
outcome of the measurement.  Notice that the input information, since&lt;br /&gt;
it is classical, is represented by a double line.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the characteristic property of projection operations: when one&lt;br /&gt;
is performed twice, the result is the same as when it is performed once.&lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact this property essentially defines a projector.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over all projections is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
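Both the idempotence of Eq. (2.45) and the completeness relation of Eq. (2.46) are quick to verify for a qubit; an illustrative numpy check, not from the text:

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]])   # projector onto the 0 basis state
P1 = np.array([[0, 0], [0, 1]])   # projector onto the 1 basis state

# Idempotence: P squared equals P (Eq. 2.45)
assert np.array_equal(P0 @ P0, P0)
assert np.array_equal(P1 @ P1, P1)

# Completeness: the projectors onto all basis states sum to the identity (Eq. 2.46)
assert np.allclose(P0 + P1, np.eye(2))
```

The same pattern holds in any dimension: one projector per basis vector, summing to the identity.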
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that since &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression, if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result would be unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore, changing &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; by an overall phase has no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-i2\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, as we will see later, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference.&lt;br /&gt;
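The cancellation of a global phase in Eq. (2.48) can be seen numerically; a minimal sketch (illustrative, not from the text):

```python
import numpy as np

# A global phase factor cancels between the bra and the ket (Eq. 2.48),
# so all measurement probabilities are unchanged.
theta = 0.7
psi = np.array([0.6, 0.8j])                  # a normalized qubit state
psi_prime = np.exp(-1j * theta) * psi        # same state up to global phase
probs = lambda v: np.abs(v)**2               # Born probabilities

assert np.allclose(probs(psi), probs(psi_prime))
```

A ''relative'' phase between the two amplitudes, by contrast, does change the state and can be detected by interference.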
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_B_-_Complex_Numbers&amp;diff=922</id>
		<title>Appendix B - Complex Numbers</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_B_-_Complex_Numbers&amp;diff=922"/>
		<updated>2010-09-08T15:24:47Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Complex numbers arise naturally from an attempt to solve the equation &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
a^2 + 1 = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It's easy enough to write such an equation down, but how would you&lt;br /&gt;
solve it?  The answer is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
a = \pm \sqrt{-1} =\pm i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We let the symbol &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;\sqrt{-1}\,\!&amp;lt;/math&amp;gt;, so that &amp;lt;math&amp;gt;i^2=-1\,\!&amp;lt;/math&amp;gt;.  Then&lt;br /&gt;
any number of the form &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = x + iy, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; are real, is called a ''complex'' number.&lt;br /&gt;
&amp;lt;!-- \index{complex number} --&amp;gt; &lt;br /&gt;
Let's take some other complex number to be &amp;lt;math&amp;gt;\eta = c+ id\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;c\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; are real.  Then the two complex numbers are equal, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\eta = z &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which is to say&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
x +iy = c+id, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
if and only if&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
x = c, \;\;\mbox{ and } \;\; y = d.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We refer to &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; as the ''real part'' of the complex number &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; as the ''imaginary part''.  Sometimes these are written as Re(&amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
and Im(&amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;), respectively.  &lt;br /&gt;
&lt;br /&gt;
We may restate the equivalence condition as &amp;lt;math&amp;gt;z=\eta\,\!&amp;lt;/math&amp;gt; if and only if&lt;br /&gt;
the real part of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is equal to the real part of&lt;br /&gt;
&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; ''and'' the imaginary part of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is equal to the imaginary part of &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
Complex numbers are multiplied like any other binomial expression:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z\eta = (x+iy)(c+id) = xc - yd +i(yc + xd),  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where we have used &amp;lt;math&amp;gt;i^2 = -1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
The ''complex conjugate'' &amp;lt;!-- \index{complex conjugate} --&amp;gt;of the complex&lt;br /&gt;
number &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;z^*\,\!&amp;lt;/math&amp;gt; and is given by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z^* = x-iy.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
One reason for defining this is that a number times its own complex&lt;br /&gt;
conjugate is real,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
zz^* = (x+iy)(x-iy) = x^2 + y^2 +i(xy - yx) = x^2 +y^2.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that the complex conjugate of the complex conjugate is the&lt;br /&gt;
original complex number and &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z^*z = (x-iy)(x+iy) = x^2 + y^2 +i(yx - xy) = x^2 +y^2 = zz^*.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We also call this the ''modulus squared'' &amp;lt;!-- \index{modulus squared}--&amp;gt; so&lt;br /&gt;
that the ''modulus'' is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
|z| = \sqrt{(z^*z)} = \sqrt{x^2 + y^2}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that the complex conjugate of a product is the product of the complex&lt;br /&gt;
conjugates: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
(z\eta)^* = z^* \eta^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is often useful to look at a graph for a complex number.  The graph&lt;br /&gt;
consists of an x-axis for the real part, and a y-axis for the&lt;br /&gt;
imaginary part.  This is shown in [[#Figure B.1|Fig. B.1]].  In this&lt;br /&gt;
figure, it is easily seen that we can think of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; as a&lt;br /&gt;
two-dimensional vector and that the magnitude (length) of the vector&lt;br /&gt;
is the modulus of the complex number, &amp;lt;math&amp;gt;|z| = \sqrt{x^2 + y^2}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure B.1&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:complexgraph1.jpeg]]&lt;br /&gt;
Figure B.1: A complex number in Cartesian coordinates.&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- \begin{figure}&lt;br /&gt;
\begin{center}&lt;br /&gt;
\includegraphics[scale=0.5]{/home/mbyrd/tex/books/qcomp/figures/complexgraph1.eps}&lt;br /&gt;
\caption{\label{fig:compg1} A Cartesian coordinate representation of&lt;br /&gt;
  a complex number &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;.}&lt;br /&gt;
\end{center}&lt;br /&gt;
\end{figure} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another useful way to represent this is with polar coordinates.  We&lt;br /&gt;
can do this by writing &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = |z|(\cos\theta +i \sin\theta).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It turns out (this is Euler's formula) that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
e^{i\theta} = \cos\theta + i \sin\theta,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
so we could also write&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = |z|e^{i\theta}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is often written as &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = re^{i\theta},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;r = \sqrt{x^2+y^2}\,\!&amp;lt;/math&amp;gt; as is usual for polar coordinates.  Then,&lt;br /&gt;
everything works just as in polar coordinates, apart from the&lt;br /&gt;
factor &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; in the exponent.  (See [[#Figure B.2|Fig. B.2]].)  &lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure B.2&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:complexgraph2.jpeg]]&lt;br /&gt;
Figure B.2: A polar coordinate representation of a complex number.&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_B_-_Complex_Numbers&amp;diff=921</id>
		<title>Appendix B - Complex Numbers</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_B_-_Complex_Numbers&amp;diff=921"/>
		<updated>2010-09-08T15:15:15Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Complex numbers arise naturally from an attempt to solve the equation &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
a^2 + 1 = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It's easy enough to write such an equation down, but how would you&lt;br /&gt;
solve it?  The answer is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
a = \pm \sqrt{-1} =\pm i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We let the symbol &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;\sqrt{-1}\,\!&amp;lt;/math&amp;gt;, so that &amp;lt;math&amp;gt;i^2=-1\,\!&amp;lt;/math&amp;gt;.  Then&lt;br /&gt;
any number of the form &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = x + iy, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; are real, is called a ''complex'' number.&lt;br /&gt;
&amp;lt;!-- \index{complex number} --&amp;gt; &lt;br /&gt;
Let's take some other complex number to be &amp;lt;math&amp;gt;\eta = c+ id\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;c\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;d\,\!&amp;lt;/math&amp;gt; are real.  Then the two complex numbers are equal, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\eta = z &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which is to say&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
x +iy = c+id, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
if and only if&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
x = c, \;\;\mbox{ and } \;\; y = d.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We refer to &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; as the ''real part'' of the complex number &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; as the ''imaginary part''.  Sometimes these are written as Re(&amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;)&lt;br /&gt;
and Im(&amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;), respectively.  &lt;br /&gt;
&lt;br /&gt;
We may restate the equivalence condition as &amp;lt;math&amp;gt;z=\eta\,\!&amp;lt;/math&amp;gt; if and only if&lt;br /&gt;
the real part of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is equal to the real part of&lt;br /&gt;
&amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt; ''and'' the imaginary part of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is equal to the imaginary part of &amp;lt;math&amp;gt;\eta\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
Complex numbers are multiplied like any other binomial expression:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z\eta = (x+iy)(c+id) = xc - yd +i(yc + xd),  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where we have used &amp;lt;math&amp;gt;i^2 = -1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
The ''complex conjugate'' &amp;lt;!-- \index{complex conjugate} --&amp;gt;of the complex&lt;br /&gt;
number &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;z^*\,\!&amp;lt;/math&amp;gt; and is given by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z^* = x-iy.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
One reason for defining this is that a number times its own complex&lt;br /&gt;
conjugate is real,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
zz^* = (x+iy)(x-iy) = x^2 + y^2 +i(xy - yx) = x^2 +y^2.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that the complex conjugate of the complex conjugate is the&lt;br /&gt;
original complex number and &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z^*z = (x-iy)(x+iy) = x^2 + y^2 +i(yx - xy) = x^2 +y^2 = zz^*.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We also call this the ''modulus squared'' &amp;lt;!-- \index{modulus squared}--&amp;gt; so&lt;br /&gt;
that the ''modulus'' is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
|z| = \sqrt{(z^*z)} = \sqrt{x^2 + y^2}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that the complex conjugate of a product is the product of the complex&lt;br /&gt;
conjugates: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
(z\eta)^* = z^* \eta^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
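These conjugate and modulus identities are easy to check numerically; the following short Python sketch (an illustration added here, not part of the original page) uses Python's built-in complex type, whose `.conjugate()` method and `abs()` give exactly the quantities defined above:

```python
import math

z = 3 + 4j          # x = 3, y = 4
eta = 1 - 2j        # c = 1, d = -2

# z times its own conjugate is real: zz* = x^2 + y^2
assert z * z.conjugate() == 25 + 0j

# the modulus |z| is the square root of z*z
assert abs(z) == math.sqrt(3**2 + 4**2)

# the conjugate of a product is the product of the conjugates
assert (z * eta).conjugate() == z.conjugate() * eta.conjugate()
```

All assertions pass; dropping either conjugate in the last line makes it fail.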
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is often useful to look at a graph for a complex number.  The graph&lt;br /&gt;
consists of an x-axis for the real part, and a y-axis for the&lt;br /&gt;
imaginary part.  This is shown in [[#Figure B.1|Fig. B.1]].  In this&lt;br /&gt;
figure, it is easily seen that we can think of &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt; as a&lt;br /&gt;
two-dimensional vector and that the magnitude (length) of the vector&lt;br /&gt;
is the modulus of the complex number, &amp;lt;math&amp;gt;|z| = \sqrt{x^2 + y^2}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure B.1&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:complexgraph1.jpeg]]&lt;br /&gt;
Figure B.1: A complex number in Cartesian coordinates.&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- \begin{figure}&lt;br /&gt;
\begin{center}&lt;br /&gt;
\includegraphics[scale=0.5]{/home/mbyrd/tex/books/qcomp/figures/complexgraph1.eps}&lt;br /&gt;
\caption{\label{fig:compg1} A Cartesian coordinate representation of&lt;br /&gt;
  a complex number &amp;lt;math&amp;gt;z\,\!&amp;lt;/math&amp;gt;.}&lt;br /&gt;
\end{center}&lt;br /&gt;
\end{figure} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another useful way to represent this is with polar coordinates.  We&lt;br /&gt;
can do this by writing &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = |z|(\cos\theta +i \sin\theta).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It turns out (this is Euler's formula) that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
e^{i\theta} = \cos\theta + i \sin\theta,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
so we could also write&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = |z|e^{i\theta}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
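Euler's formula and the polar form can be checked the same way; here is a brief sketch with Python's standard cmath module (again an illustration, not part of the original page):

```python
import cmath
import math

theta = math.pi / 3

# Euler's formula: e^{i theta} = cos(theta) + i sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert cmath.isclose(lhs, rhs)

# polar form: cmath.polar gives (r, theta), and r e^{i theta} recovers z
z = 3 + 4j
r, phi = cmath.polar(z)
assert cmath.isclose(r * cmath.exp(1j * phi), z)
```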
This is often written as &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
z = re^{i\theta},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;r = \sqrt{x^2+y^2}\,\!&amp;lt;/math&amp;gt; as is usual for polar coordinates.  Then,&lt;br /&gt;
everything works just as in polar coordinates, apart from the&lt;br /&gt;
factor &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; in the exponent.  (See [[#Figure B.2|Fig. B.2]].)  &lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure B.2&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:complexgraph2.jpeg]]&lt;br /&gt;
Figure B.2: A polar coordinate representation of a complex number.&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=920</id>
		<title>Appendix A - Basic Probability Concepts</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=920"/>
		<updated>2010-09-08T15:13:47Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this appendix definitions and some example calculations are&lt;br /&gt;
presented which will aid in our discussions.  This is not meant to be&lt;br /&gt;
a comprehensive introduction to the topic.  It is primarily meant to&lt;br /&gt;
serve as a means for introducing notation and terminology for the&lt;br /&gt;
course.  This example is a variation of one given by David Griffiths&lt;br /&gt;
in ''Introduction to Quantum Mechanics'' ([[Bibliography#Griffiths:qmbook|David J. Griffiths’s book]]).&lt;br /&gt;
&lt;br /&gt;
''Example'':  Suppose that in some room, there are four people.  Their heights in meters are:  &lt;br /&gt;
#1 person is 1.5 meters tall&lt;br /&gt;
#1 person is 1.6 meters tall&lt;br /&gt;
#2 people are 1.8 meters tall&lt;br /&gt;
Let &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; stand for the number of people.  We might write the counts at each height as &lt;br /&gt;
&amp;lt;math&amp;gt;N(1.5) = 1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.6)=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.8)=2\,\!&amp;lt;/math&amp;gt; and the total number of people is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_{j=0}^\infty N(j),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; runs over all values and in this case we are rounding to the&lt;br /&gt;
nearest tenth of a meter.  Here &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt;, of course.  &lt;br /&gt;
&lt;br /&gt;
Now if I draw a name out of a hat which contains each person's name&lt;br /&gt;
once, I will get the name of the person who is 1.6 meters tall with&lt;br /&gt;
probability &amp;lt;math&amp;gt;1/4\,\!&amp;lt;/math&amp;gt;.  (We assume that each person has a unique name and&lt;br /&gt;
that it appears once and only once in the hat.)  We write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(1.6) = 1/4,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and we would generally write for any value&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(j) = \frac{N(j)}{N}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now since we are going to get someone's name when we draw, we must&lt;br /&gt;
have &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_j P(j) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which is easy enough to check.  &lt;br /&gt;
&lt;br /&gt;
There are several aspects of this probability distribution that we might like to know.  Here are some which are particularly useful: &amp;lt;!-- \index{median}\index{mean} \index{average}--&amp;gt;&lt;br /&gt;
#The ''most probable'' value for the height is 1.8 meters.&lt;br /&gt;
#The ''median'' is 1.7 meters (two people below, and two above).&lt;br /&gt;
#The ''average'' (or ''mean'') is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}\left\langle height\right\rangle &amp;amp;= \frac{1(1.5)+1(1.6)+2(1.8)}{4} \\ &amp;amp;=  \frac{6.7}{4} = 1.675 \end{align}&amp;lt;/math&amp;gt;|A.1}} &lt;br /&gt;
Note that the mean and the median do not have to be the same.  The&lt;br /&gt;
median is the middle number in the list if there is an odd number of&lt;br /&gt;
entries; otherwise it is the mean of the two middle entries.  Here the&lt;br /&gt;
two happen to be close but not equal.  &lt;br /&gt;
The bracket &amp;lt;math&amp;gt;\left\langle\cdot\right\rangle\,\!&amp;lt;/math&amp;gt; is fairly standard notation and it means&lt;br /&gt;
generally, that we obtain the ''average value''&amp;lt;!-- \index{average}--&amp;gt; &lt;br /&gt;
of a function by calculating &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left\langle f(j)\right\rangle = \sum_{j=0}^\infty f(j)P(j).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the average this is just &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle j\right\rangle = \sum_{j=0}^\infty jP(j)= \sum_{j=0}^\infty j\frac{N(j)}{N}.  \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Note:'''  The ''average value'' is called the ''expectation value'' &amp;lt;!-- \index{expectation value} --&amp;gt; in quantum mechanics.  This can be&lt;br /&gt;
misleading because it is ''not'' the most probable value and it is not &amp;lt;nowiki&amp;gt;''what to expect.''&amp;lt;/nowiki&amp;gt; &lt;br /&gt;
&lt;br /&gt;
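The averages above can be checked with a few lines of Python (a sketch added for illustration; the height data are those of the example, and the variance uses the shortcut formula Eq. (A.3) defined below).  Note that the four heights sum to 6.7 m, so the mean works out to 1.675 m:

```python
import math
from collections import Counter

heights = [1.5, 1.6, 1.8, 1.8]   # the four people of the example

# P(j) = N(j)/N, built from the counts N(j)
N = len(heights)
P = {h: n / N for h, n in Counter(heights).items()}
assert math.isclose(sum(P.values()), 1.0)   # probabilities sum to one

# the average of j, and of j squared, as sums of j P(j)
mean = sum(j * p for j, p in P.items())
mean_sq = sum(j * j * p for j, p in P.items())

# variance: mean of the squares minus the square of the mean
variance = mean_sq - mean * mean

assert math.isclose(mean, 6.7 / 4)          # 1.675 m
assert math.isclose(variance, 0.016875)     # sigma is about 0.13 m
```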
Describing the properties of a particular probability distribution takes some effort.  It is not enough to know the average, median, and most probable values; these alone leave many details of the distribution unknown.  What else would one like to know?  Short of describing it entirely, one may like to know more about the &amp;lt;nowiki&amp;gt;''shape''&amp;lt;/nowiki&amp;gt; of the distribution.  For example, how spread out is it?&lt;br /&gt;
&lt;br /&gt;
The most important measure of this is the ''variance'',&amp;lt;!-- \index{variance}--&amp;gt; which is the ''standard deviation'' &amp;lt;!-- \index{standard deviation} --&amp;gt; squared ( &amp;lt;math&amp;gt;\sigma^2\!&amp;lt;/math&amp;gt; ).  The variance is defined as (in terms of our variable &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;) &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle(\Delta j)^2\rangle, \,\!&amp;lt;/math&amp;gt;|A.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\Delta j = j -\langle j \rangle\,\!&amp;lt;/math&amp;gt;.  This can also be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle j^2\rangle - \langle j \rangle^2.\,\!&amp;lt;/math&amp;gt;|A.3}}&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=919</id>
		<title>Appendix A - Basic Probability Concepts</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=919"/>
		<updated>2010-09-08T15:12:49Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this appendix definitions and some example calculations are&lt;br /&gt;
presented which will aid in our discussions.  This is not meant to be&lt;br /&gt;
a comprehensive introduction to the topic.  It is primarily meant to&lt;br /&gt;
serve as a means for introducing notation and terminology for the&lt;br /&gt;
course.  This example is a variation of one given by David Griffiths&lt;br /&gt;
in ''Introduction to Quantum Mechanics'' ([[Bibliography#Griffiths:qmbook|David J. Griffiths’s book]]).&lt;br /&gt;
&lt;br /&gt;
''Example'':  Suppose that in some room, there are four people.  Their heights in meters are:  &lt;br /&gt;
#1 person is 1.5 meters tall&lt;br /&gt;
#1 person is 1.6 meters tall&lt;br /&gt;
#2 people are 1.8 meters tall&lt;br /&gt;
Let &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; stand for the number of people.  We might write the counts at each height as &lt;br /&gt;
&amp;lt;math&amp;gt;N(1.5) = 1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.6)=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.8)=2\,\!&amp;lt;/math&amp;gt; and the total number of people is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_{j=0}^\infty N(j),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; runs over all values and in this case we are rounding to the&lt;br /&gt;
nearest tenth of a meter.  Here &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt;, of course.  &lt;br /&gt;
&lt;br /&gt;
Now if I draw a name out of a hat which contains each person's name&lt;br /&gt;
once, I will get the name of the person who is 1.6 meters tall with&lt;br /&gt;
probability &amp;lt;math&amp;gt;1/4\,\!&amp;lt;/math&amp;gt;.  (We assume that each person has a unique name and&lt;br /&gt;
that it appears once and only once in the hat.)  We write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(1.6) = 1/4,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and we would generally write for any value&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(j) = \frac{N(j)}{N}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now since we are going to get someone's name when we draw, we must&lt;br /&gt;
have &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_j P(j) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which is easy enough to check.  &lt;br /&gt;
&lt;br /&gt;
There are several aspects of this probability distribution that we might like to know.  Here are some which are particularly useful: &amp;lt;!-- \index{median}\index{mean} \index{average}--&amp;gt;&lt;br /&gt;
#The ''most probable'' value for the height is 1.8 meters.&lt;br /&gt;
#The ''median'' is 1.7 meters (two people below, and two above).&lt;br /&gt;
#The ''average'' (or ''mean'') is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}\left\langle height\right\rangle &amp;amp;= \frac{1(1.5)+1(1.6)+2(1.8)}{4} \\ &amp;amp;=  \frac{6.7}{4} = 1.675 \end{align}&amp;lt;/math&amp;gt;|A.1}} &lt;br /&gt;
Note that the mean and the median do not have to be the same.  The&lt;br /&gt;
median is the middle number in the list if there is an odd number of&lt;br /&gt;
entries; otherwise it is the mean of the two middle entries.  Here the&lt;br /&gt;
two happen to be close but not equal.  &lt;br /&gt;
The bracket &amp;lt;math&amp;gt;\left\langle\cdot\right\rangle\,\!&amp;lt;/math&amp;gt; is fairly standard notation and it means&lt;br /&gt;
generally, that we obtain the ''average value''&amp;lt;!-- \index{average}--&amp;gt; &lt;br /&gt;
of a function by calculating &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left\langle f(j)\right\rangle = \sum_{j=0}^\infty f(j)P(j).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the average this is just &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle j\right\rangle = \sum_{j=0}^\infty jP(j)= \sum_{j=0}^\infty j\frac{N(j)}{N}.  \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Note:'''  The ''average value'' is called the ''expectation value'' &amp;lt;!-- \index{expectation value} --&amp;gt; in quantum mechanics.  This can be&lt;br /&gt;
misleading because it is ''not'' the most probable value and it is not &amp;lt;nowiki&amp;gt;''what to expect.''&amp;lt;/nowiki&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Describing the properties of a particular probability distribution takes some effort.  It is not enough to know the average, median, and most probable values; these alone leave many details of the distribution unknown.  What else would one like to know?  Short of describing it entirely, one may like to know more about the &amp;lt;nowiki&amp;gt;''shape''&amp;lt;/nowiki&amp;gt; of the distribution.  For example, how spread out is it?&lt;br /&gt;
&lt;br /&gt;
The most important measure of this is the ''variance'',&amp;lt;!-- \index{variance}--&amp;gt; which is the ''standard deviation'' &amp;lt;!-- \index{standard deviation} --&amp;gt; squared ( &amp;lt;math&amp;gt;\sigma^2\!&amp;lt;/math&amp;gt; ).  The variance is defined as (in terms of our variable &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;) &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle(\Delta j)^2\rangle, \,\!&amp;lt;/math&amp;gt;|A.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\Delta j = j -\langle j \rangle\,\!&amp;lt;/math&amp;gt;.  This can also be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle j^2\rangle - \langle j \rangle^2.\,\!&amp;lt;/math&amp;gt;|A.3}}&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=918</id>
		<title>Appendix A - Basic Probability Concepts</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=918"/>
		<updated>2010-09-08T15:12:20Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this appendix definitions and some example calculations are&lt;br /&gt;
presented which will aid in our discussions.  This is not meant to be&lt;br /&gt;
a comprehensive introduction to the topic.  It is primarily meant to&lt;br /&gt;
serve as a means for introducing notation and terminology for the&lt;br /&gt;
course.  This example is a variation of one given by David Griffiths&lt;br /&gt;
in ''Introduction to Quantum Mechanics'' ([[Bibliography#Griffiths:qmbook|David J. Griffiths’s book]]).&lt;br /&gt;
&lt;br /&gt;
''Example'':  Suppose that in some room, there are four people.  Their heights in meters are:  &lt;br /&gt;
#1 person is 1.5 meters tall&lt;br /&gt;
#1 person is 1.6 meters tall&lt;br /&gt;
#2 people are 1.8 meters tall&lt;br /&gt;
Let &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; stand for the number of people.  We might write the counts at each height as &lt;br /&gt;
&amp;lt;math&amp;gt;N(1.5) = 1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.6)=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.8)=2\,\!&amp;lt;/math&amp;gt; and the total number of people is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_{j=0}^\infty N(j),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; runs over all values and in this case we are rounding to the&lt;br /&gt;
nearest tenth of a meter.  Here &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt;, of course.  &lt;br /&gt;
&lt;br /&gt;
Now if I draw a name out of a hat which contains each person's name&lt;br /&gt;
once, I will get the name of the person who is 1.6 meters tall with&lt;br /&gt;
probability &amp;lt;math&amp;gt;1/4\,\!&amp;lt;/math&amp;gt;.  (We assume that each person has a unique name and&lt;br /&gt;
that it appears once and only once in the hat.)  We write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(1.6) = 1/4,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and we would generally write for any value&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(j) = \frac{N(j)}{N}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now since we are going to get someone's name when we draw, we must&lt;br /&gt;
have &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_j P(j) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which is easy enough to check.  &lt;br /&gt;
&lt;br /&gt;
There are several aspects of this probability distribution that we might like to know.  Here are some which are particularly useful: &amp;lt;!-- \index{median}\index{mean} \index{average}--&amp;gt;&lt;br /&gt;
#The ''most probable'' value for the height is 1.8 meters.&lt;br /&gt;
#The ''median'' is 1.7 meters (two people below, and two above).&lt;br /&gt;
#The ''average'' (or ''mean'') is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}\left\langle height\right\rangle &amp;amp;= \frac{1(1.5)+1(1.6)+2(1.8)}{4} \\ &amp;amp;=  \frac{6.7}{4} = 1.675 \end{align}&amp;lt;/math&amp;gt;|A.1}} &lt;br /&gt;
Note that the mean and the median do not have to be the same.  The&lt;br /&gt;
median is the middle number in the list if there is an odd number of&lt;br /&gt;
entries; otherwise it is the mean of the two middle entries.  Here the&lt;br /&gt;
two happen to be close but not equal.  &lt;br /&gt;
The bracket &amp;lt;math&amp;gt;\left\langle\cdot\right\rangle\,\!&amp;lt;/math&amp;gt; is fairly standard notation and it means&lt;br /&gt;
generally, that we obtain the ''average value''&amp;lt;!-- \index{average}--&amp;gt; &lt;br /&gt;
of a function by calculating &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left\langle f(j)\right\rangle = \sum_{j=0}^\infty f(j)P(j).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the average this is just &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle j\right\rangle = \sum_{j=0}^\infty jP(j)= \sum_{j=0}^\infty j\frac{N(j)}{N}.  \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
'''Note:'''  The ''average value'' is called the ''expectation value'' &amp;lt;!-- \index{expectation value} --&amp;gt; in quantum mechanics.  This can be&lt;br /&gt;
misleading because it is ''not'' the most probable value and it is not &amp;lt;nowiki&amp;gt;''what to expect.''&amp;lt;/nowiki&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Describing the properties of a particular probability distribution takes some effort.  It is not enough to know the average, median, and most probable values; these alone leave many details of the distribution unknown.  What else would one like to know?  Short of describing it entirely, one may like to know more about the &amp;lt;nowiki&amp;gt;''shape''&amp;lt;/nowiki&amp;gt; of the distribution.  For example, how spread out is it?&lt;br /&gt;
&lt;br /&gt;
The most important measure of this is the ''variance'',&amp;lt;!-- \index{variance}--&amp;gt; which is the ''standard deviation'' &amp;lt;!-- \index{standard deviation} --&amp;gt; squared (&amp;lt;math&amp;gt;\sigma^2\!&amp;lt;/math&amp;gt;).  The variance is defined as (in terms of our variable &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;) &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle(\Delta j)^2\rangle, \,\!&amp;lt;/math&amp;gt;|A.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\Delta j = j -\langle j \rangle\,\!&amp;lt;/math&amp;gt;.  This can also be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle j^2\rangle - \langle j \rangle^2.\,\!&amp;lt;/math&amp;gt;|A.3}}&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=917</id>
		<title>Appendix A - Basic Probability Concepts</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_A_-_Basic_Probability_Concepts&amp;diff=917"/>
		<updated>2010-09-08T15:11:43Z</updated>

		<summary type="html">&lt;p&gt;Tjones: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this appendix definitions and some example calculations are&lt;br /&gt;
presented which will aid in our discussions.  This is not meant to be&lt;br /&gt;
a comprehensive introduction to the topic.  It is primarily meant to&lt;br /&gt;
serve as a means for introducing notation and terminology for the&lt;br /&gt;
course.  This example is a variation of one given by David Griffiths&lt;br /&gt;
in ''Introduction to Quantum Mechanics'' ([[Bibliography#Griffiths:qmbook|David J. Griffiths’s book]]).&lt;br /&gt;
&lt;br /&gt;
''Example'':  Suppose that in some room, there are four people.  Their heights in meters are:  &lt;br /&gt;
#1 person is 1.5 meters tall&lt;br /&gt;
#1 person is 1.6 meters tall&lt;br /&gt;
#2 people are 1.8 meters tall&lt;br /&gt;
Let &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; stand for the number of people.  We might record the number at each height as &lt;br /&gt;
&amp;lt;math&amp;gt;N(1.5) = 1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.6)=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N(1.8)=2\,\!&amp;lt;/math&amp;gt; and the total number of people is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_{j=0}^\infty N(j),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; runs over all values and in this case we are rounding to the&lt;br /&gt;
nearest tenth of a meter.  Here &amp;lt;math&amp;gt;N=4\,\!&amp;lt;/math&amp;gt;, of course.  &lt;br /&gt;
&lt;br /&gt;
Now if I draw a name out of a hat which contains each person's name&lt;br /&gt;
once, I will draw the name of the person who is 1.6 meters tall with&lt;br /&gt;
probability &amp;lt;math&amp;gt;1/4\,\!&amp;lt;/math&amp;gt;.  (We assume that each person has a unique name and&lt;br /&gt;
that it appears once and only once in the hat.)  We write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(1.6) = 1/4,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and we would generally write for any value&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
P(j) = \frac{N(j)}{N}. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now since we are going to get someone's name when we draw, we must&lt;br /&gt;
have &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_j P(j) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which is easy enough to check.  &lt;br /&gt;
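The check is quick to carry out numerically as well.  The following short Python sketch (an illustration added here, not part of the original wiki text) tabulates the counts N(j) from the example, forms the probabilities P(j) = N(j)/N, and verifies that they sum to one:&lt;br /&gt;

```python
# Counts N(j) from the example: one person at 1.5 m, one at 1.6 m, two at 1.8 m.
counts = {1.5: 1, 1.6: 1, 1.8: 2}

N = sum(counts.values())                   # total number of people, N = 4
P = {j: n / N for j, n in counts.items()}  # P(j) = N(j)/N

print(P[1.6])           # 0.25, i.e. the probability 1/4 of drawing that name
print(sum(P.values()))  # 1.0, the normalization sum_j P(j) = 1
```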
&lt;br /&gt;
There are several aspects of this probability distribution that we might like to know.  Here are some which are particularly useful: &amp;lt;!-- \index{median}\index{mean} \index{average}--&amp;gt;&lt;br /&gt;
#The ''most probable'' value for the height is 1.8 meters.&lt;br /&gt;
#The ''median'' is 1.7 meters (two people are below it, and two above).&lt;br /&gt;
#The ''average'' (or ''mean'') is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}\left\langle height\right\rangle &amp;amp;= \frac{1(1.5)+1(1.6)+2(1.8)}{4} \\ &amp;amp;=  \frac{6.7}{4} = 1.675 \end{align}&amp;lt;/math&amp;gt;|A.1}} &lt;br /&gt;
Note that the mean and the median do not have to be the same.  The&lt;br /&gt;
median is the middle number in the list if there is an odd number of&lt;br /&gt;
entries; otherwise it is the mean of the two middle values.  Here the&lt;br /&gt;
two are in fact slightly different (1.675 versus 1.7).  &lt;br /&gt;
The bracket &amp;lt;math&amp;gt;\left\langle\cdot\right\rangle\,\!&amp;lt;/math&amp;gt; is fairly standard notation; generally, it means&lt;br /&gt;
that we obtain the ''average value''&amp;lt;!-- \index{average}--&amp;gt; &lt;br /&gt;
of a function by calculating &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\left\langle f(j)\right\rangle = \sum_{j=0}^\infty f(j)P(j).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the average this is just &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle j\right\rangle = \sum_{j=0}^\infty jP(j)= \sum_{j=0}^\infty j\frac{N(j)}{N}.  \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
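This averaging formula can likewise be checked with a few lines of Python (again an illustration, not part of the original wiki text); using exact fractions avoids any floating-point rounding in the sum:&lt;br /&gt;

```python
from fractions import Fraction

# Counts N(j) from the example, with heights as exact fractions of a meter:
# 3/2 = 1.5 m, 8/5 = 1.6 m, 9/5 = 1.8 m.
counts = {Fraction(3, 2): 1, Fraction(8, 5): 1, Fraction(9, 5): 2}
N = sum(counts.values())  # N = 4

# <j> = sum_j j * P(j) = sum_j j * N(j)/N
mean = sum(j * Fraction(n, N) for j, n in counts.items())
print(mean, float(mean))  # 67/40 1.675
```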
 &lt;br /&gt;
'''Note:'''  The ''average value'' is called the ''expectation value'' &amp;lt;!-- \index{expectation value} --&amp;gt; in quantum mechanics.  This can be&lt;br /&gt;
misleading because it is ''not'' the most probable value, and it is not necessarily &amp;lt;nowiki&amp;gt;''what to expect.''&amp;lt;/nowiki&amp;gt; &lt;br /&gt;
&lt;br /&gt;
When one would like to discuss the properties of a particular probability distribution, describing it takes some effort.  It is not enough to know the average, median, and most probable values; if these are all we are given, many details of the distribution remain unknown to us.  What else would one like to know?  Without describing it entirely, one may like to know more about the &amp;lt;nowiki&amp;gt;''shape''&amp;lt;/nowiki&amp;gt; of the distribution.  For example, how spread out is it?&lt;br /&gt;
&lt;br /&gt;
The most important measure of this is the ''variance''&amp;lt;!-- \index{variance}--&amp;gt;, which is the ''standard deviation'' &amp;lt;!-- \index{standard deviation} --&amp;gt; squared (&amp;lt;math&amp;gt;\sigma^2\,\!&amp;lt;/math&amp;gt;).  In terms of our variable &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt;, the variance is defined as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle(\Delta j)^2\rangle, \,\!&amp;lt;/math&amp;gt;|A.2}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\Delta j = j -\langle j \rangle\,\!&amp;lt;/math&amp;gt;.  This can also be written as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma^2 = \langle j^2\rangle - \langle j \rangle^2.\,\!&amp;lt;/math&amp;gt;|A.3}}&lt;/div&gt;</summary>
		<author><name>Tjones</name></author>
		
	</entry>
</feed>